Baby Rudin problem 1.5

#1 — December 27th 2010, 11:20 PM
The whole problem and solution is at: Solutions – Baby Rudin – 1.5 (work in progress) « DaFeda's Blog. I'm sorry for posting an external link; I'm just trying to avoid writing it all out again. However, I do not think you even need to go there to answer my question. In my proof I write: if $\alpha > \beta$ then $-\inf(A) > \sup(-A)$. Since $\sup(-A) \geq -x$ for all $x \in A$, this would imply that $-\inf(A) > -x$, which is a contradiction since $-\inf(A) \geq -x$ for all $x \in A$. Can I use $\geq$ and $>$ in such a way? I have three different solutions available to me from different people, so I am not looking for another way of solving this. Hope someone has the time to take a look! Thanks.

#2 — December 28th 2010, 02:38 AM
I don't see a contradiction here. The known fact that $a \ge b$ (where $a = -\inf(A)$ and $b = -x$) does not preclude that $a > b$. Moreover, the inequality $\inf(A) < x$ (and, correspondingly, $-\inf(A) > -x$) can be strict for all $x \in A$. This happens when $A$ does not have a minimum, e.g., when $A = \{x \in \mathbb{R} \mid x > 0\}$.

#3 — December 28th 2010, 02:57 AM
My thinking was that since there is some set $A$ such that $-\inf(A) = -x$ for all $x \in A$, saying that $-\inf(A) > -x$ would be a contradiction. I will look for a different approach.

#4 — December 28th 2010, 03:50 AM
I think I can use the fact that since $\sup(-A) \geq -x$, then $-\sup(-A) \leq x$, which is the definition of $\inf(A)$, and so $\inf(A) = -\sup(-A)$.

#5 — December 28th 2010, 06:30 AM
This shows that $-\sup(-A)$ is a lower bound. Now you must show that it is greatest among all lower bounds. So suppose there is a lower bound of $A$ that is greater, and use this to get an upper bound of $-A$ which is smaller than $\sup(-A)$. This contradicts the leastness of $\sup(-A)$. In general, to show something is a sup/inf, you need to check two things: (1) it is an upper/lower bound, and (2) it is (nonstrictly) smaller/larger than any particular upper/lower bound.

#6 — December 28th 2010, 07:14 AM
Here is a different approach. Suppose that $\lambda = \inf(A)$, so $\left( \forall a \in A \right)\left[ -a \leqslant -\lambda \right]$. Therefore there exists $\gamma = \sup(-A)$; moreover $\gamma \le -\lambda$. Now suppose that $\gamma < -\lambda$. That means $\lambda < -\gamma$, which implies $\left( \exists a' \in A \right)\left[ \lambda \leqslant a' < -\gamma \right]$. But that gives a contradiction: $\gamma < -a'$. Do you see what & why? So $\gamma = -\lambda$.

#7 — December 28th 2010, 09:16 PM
Let $\gamma$ be a lower bound of $A$ (i.e. $\gamma \leq x$ for all $x \in A$), and let $\gamma > -\sup(-A)$. Then $x \geq \gamma > -\sup(-A)$, and so $-x \leq -\gamma < \sup(-A)$, which makes $-\gamma$ an upper bound of $-A$ that is smaller than $\sup(-A)$. Contradiction. Something like this, Guy? As for your solution, Plato, I will take a good look at it later today. Gotta run to work! (Last edited by Mollier; December 29th 2010 at 12:28 AM.)

#8 — December 28th 2010, 10:49 PM
$\lambda = \inf(A)$, and $\inf(A) \leq a \;\forall a \in A$. Also $-\gamma = -\sup(-A)$, and $-\sup(-A) \leq a \;\forall a \in A$. So what you are assuming is that $\inf(A) < -\sup(-A)$; but since $a \geq -\sup(-A)$, how can there exist an $a'$ in $A$ such that $\inf(A) \leq a' < -\sup(-A)$?

#9 — December 29th 2010, 06:11 AM
Yeah, that should be fine. I would be a little more explicit with the quantifiers, but the idea is there.
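The identity under discussion, $\inf(A) = -\sup(-A)$, can be sanity-checked numerically on finite sets, where inf and sup reduce to min and max. A minimal Python sketch (not part of the thread, and no substitute for the proof above, which handles arbitrary bounded sets):

```python
import random

# On a finite set A, inf(A) = min(A) and sup(-A) = max(-A), so the
# identity inf(A) = -sup(-A) becomes min(A) == -max(-A).
for _ in range(1000):
    A = [random.uniform(-100, 100) for _ in range(random.randint(1, 50))]
    neg_A = [-x for x in A]
    assert min(A) == -max(neg_A)

print("inf(A) == -sup(-A) holds on all sampled finite sets")
```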
Source: http://mathhelpforum.com/differential-geometry/166993-baby-rudin-problem-1-5-a.html (crawled 2014-04-19)
NSA: What's a language? (thing)

(You really want to know more about mathematical logic, or even model theory, to understand these concepts; this is just an explanation which should help you keep your head above the water when formal languages are being talked about.)

A language is a formal vocabulary for making statements about some world of objects. To do this, it uses a few symbols: things like "&" for "and" or "∧", "~" for "not" or "¬", a symbol "∀" used to say "for all", a symbol "=" for "equal to", and maybe a few more convenient symbols which may be defined in terms of this small group ("v" for "or" or "∨", a symbol "∃" for "there exists", a symbol "≠" for "not equal to", ...). These symbols let it say logical generalities, but it cannot yet talk about the world! It also needs to be able to talk about some of the objects in the world. This is achieved by adding 3 categories of names to the language:

constants
A constant is a symbol c which refers to some object in the world. For instance, in a language to describe the real numbers, "0", "1" and "e" may all be constants. Not every object of the world must have a name! The connection between a constant and the actual object is given by the model.

predicate names
A predicate name is a symbol P and an arity (a natural number) k such that P(x[1],...,x[k]) is a valid sentence whenever the x's are all valid terms. In other words, we give names to some predicates on objects. For instance, for our real numbers language, we could have a 2-place predicate name "<" and a 1-place predicate name "Integer". Again, the connection to actual predicates is given by the model.

function names
A function name is a symbol f and an arity k such that f(x[1],...,x[k]) is a valid term whenever the x's are all valid terms. Here we're giving names to some functions of objects. For our real numbers language, we could have 2-place function names "+" and "*", and a 1-place function name "sin". Once more, the model connects the names to actual functions.
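The three categories of names, plus the model that interprets them, can be rendered as plain data. Here is an illustrative Python sketch of the real-numbers example above (the dictionary layout and variable names are my own, not from the writeup):

```python
import math

# Signature: the names the language provides. The real objects are
# supplied separately by the model.
signature = {
    "constants": ["0", "1", "e"],
    "predicates": {"<": 2, "Integer": 1},   # name -> arity
    "functions": {"+": 2, "*": 2, "sin": 1},
}

# Model: connects each name to an actual object, predicate, or function.
model = {
    "constants": {"0": 0.0, "1": 1.0, "e": math.e},
    "predicates": {"<": lambda a, b: a < b,
                   "Integer": lambda a: float(a).is_integer()},
    "functions": {"+": lambda a, b: a + b,
                  "*": lambda a, b: a * b,
                  "sin": math.sin},
}

# Evaluate the atomic sentence "<"(sin("1"), "e") in the model:
s = model["functions"]["sin"](model["constants"]["1"])
print(model["predicates"]["<"](s, model["constants"]["e"]))  # True: sin(1) < e
```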
Source: http://everything2.com/user/ariels/writeups/NSA%253A+What%2527s+a+language%253F (crawled 2014-04-18)
$(0,1)$-Category theory

An implication may be either an entailment or a conditional statement; these are closely related but not quite the same thing.

1. Entailment is a preorder on propositions within a given context in a given logic. We say that $p$ entails $q$ syntactically, written as a sequent $p \vdash q$, if $q$ can be proved from the assumption $p$. We say that $p$ entails $q$ semantically, written $p \vDash q$, if $q$ holds in every model in which $p$ holds. (These relations are often equivalent, by various soundness and completeness theorems.)

2. A conditional statement is the result of a binary operation on propositions within a given context in a given logic. If $p$ and $q$ are propositions in some context, then so is the conditional statement $p \to q$, at least if the logic has a notion of conditional.

Notice that $p$, $q$, and $p \to q$ are all statements in the object language (the language that we are talking about), whereas the hypothetical judgements $p \vdash q$ and $p \vDash q$ are statements in the metalanguage (the language that we are using to talk about the object language).

Relations between the definitions

Depending on what logic one is using, $p \to q$ might be anything, but it's probably not fair to consider it a conditional statement unless it is related to entailment as follows: if, in some context, $p$ entails $q$ (either syntactically or semantically), then $p \to q$ is a theorem (syntactically) or a tautology (semantically) in that context, and conversely. In particular, this holds for classical logic and intuitionistic logic. You can think of entailment as being an external hom (taking values in the poset of truth values) and the conditional as being an internal hom (taking values in the poset of propositions).
In particular, we expect these to be related as in a closed category:

• $q \to r \vdash (p \to q) \to (p \to r)$,
• $p \equiv \top \to p$,
• $\top \vdash p \to p$,

where $\top$ is an appropriate constant statement (often satisfying $p \vdash \top$, although not always, as in linear logic with $\multimap$ for $\to$ and $1$ for $\top$). Most kinds of logic used in practice have a notion of entailment from a list of multiple premises; then we expect entailment and the conditional to be related as in a closed multicategory. Just as we may identify the internal and external hom in Set, so we may identify the entailment and conditional of truth values. In the $n$Lab, we tend to write this as $\Rightarrow$, a symbol that is variously used by other authors in place of $\vdash$, $\vDash$, and $\rightarrow$.

In various formalizations

In Heyting algebras

Although Heyting algebras were first developed as a way to discuss intuitionistic logic, they appear in other contexts; but their characteristic feature is that they have an operation analogous to the conditional operation in logic, usually called Heyting implication and denoted $\rightarrow$ or $\Rightarrow$. If you use $\to$ and replace $\vdash$ above with the Heyting algebra's partial order $\leq$, then everything above applies.

In type theory
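For the special case of two-valued (classical) truth values, the three closed-category laws above can be checked by brute force, reading entailment as `<=` in the Boolean order False < True and the conditional as material implication. A small Python check (an illustration for this one model, not a proof for general logics):

```python
from itertools import product

# Material implication on truth values.
def imp(p, q):
    return (not p) or q

TOP = True

for p, q, r in product([False, True], repeat=3):
    # q -> r  entails  (p -> q) -> (p -> r)
    assert imp(q, r) <= imp(imp(p, q), imp(p, r))
    # p is equivalent to TOP -> p
    assert p == imp(TOP, p)
    # TOP entails p -> p
    assert TOP <= imp(p, p)

print("all three laws hold for classical truth values")
```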
Source: http://ncatlab.org/nlab/show/implication (crawled 2014-04-20)
1911 Encyclopædia Britannica/Cylinder
From Wikisource: 1911 Encyclopædia Britannica, Volume 7

CYLINDER (Gr. κύλινδρος, from κυλίνδειν, to roll). A cylindrical surface, or briefly a cylinder, is the surface traced out by a line, named the generatrix, which moves parallel to itself and always passes through the circumference of a curve, named the directrix; the name cylinder is also given to the solid contained between such a surface and two parallel planes which intersect a generatrix. A “right cylinder” is the solid traced out by a rectangle which revolves about one of its sides, or the curved surface of this solid; the surface may also be defined as the locus of a line which passes through the circumference of a circle, and is always perpendicular to the plane of the circle. If the moving line be not perpendicular to the plane of the circle, but moves parallel to itself, and always passes through the circumference, it traces an “oblique cylinder.” The “axis” of a circular cylinder is the line joining the centres of two circular sections; it is the line through the centre of the directrix parallel to the generators. The characteristic property of all cylindrical surfaces is that the tangent planes are parallel to the axis. They are “developable” surfaces, i.e. they can be applied to a plane surface without crinkling or tearing (see Surface). Any section of a cylinder which contains the axis is termed a “principal section”; in the case of the solids this section is a rectangle; in the case of the surfaces, two parallel straight lines. A section of the right cylinder parallel to the base is obviously a circle; any other section, excepting those limited by two generators, is an ellipse. This last proposition may be stated in the form: “The orthogonal projection of a circle is an ellipse”; and it permits the ready deduction of many properties of the ellipse from the circle.
The section of an oblique cylinder by a plane perpendicular to the principal section, and inclined to the axis at the same angle as the base, is named the “subcontrary section,” and is always a circle; any other section is an ellipse. The mensuration of the cylinder was worked out by Archimedes, who showed that the volume of any cylinder was equal to the product of the area of the base into the height of the solid, and that the area of the curved surface was equal to that of a rectangle having its sides equal to the circumference of the base, and to the height of the solid. If the base be a circle of radius r, and the height h, the volume is πr^2h and the area of the curved surface 2πrh. Archimedes also deduced relations between the sphere (q.v.) and cone (q.v.) and the circumscribing cylinder. The name “cylindroid” has been given to two different surfaces. Thus it is a cylinder having equal and parallel elliptical bases; i.e. the surface traced out by an ellipse moving parallel to itself so that every point passes along a straight line, or by a line moving parallel to itself and always passing through the circumference of a fixed ellipse. The name was also given by Arthur Cayley to the conoidal cubic surface which has for its equation z(x^2 + y^2) = 2mxy; every point on this surface lies on the line given by the intersection of the planes y = x tan θ, z = m sin 2θ, for by eliminating θ we obtain the equation to the surface.
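Archimedes' mensuration results quoted above translate directly into formulas: volume πr²h and curved surface 2πrh, and the sphere's volume is two-thirds that of its circumscribing cylinder. A short Python illustration (function and variable names are mine, not from the article):

```python
import math

# Right circular cylinder of base radius r and height h.
def cylinder_volume(r, h):
    return math.pi * r**2 * h

def cylinder_curved_area(r, h):
    return 2 * math.pi * r * h

r, h = 3.0, 5.0
print(cylinder_volume(r, h))       # 141.37...
print(cylinder_curved_area(r, h))  # 94.24...

# Archimedes' relation with the circumscribing cylinder of a sphere of
# radius r (so h = 2r): the sphere's volume is 2/3 of the cylinder's.
sphere = (4 / 3) * math.pi * r**3
assert abs(sphere - (2 / 3) * cylinder_volume(r, 2 * r)) < 1e-9
```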
Source: https://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Cylinder (crawled 2014-04-17)
2.2 Analysis of ARMA Time Series

Given a time series, we can analyze its properties. Since many algorithms for estimating model parameters assume the data form a zero-mean, stationary process, appropriate transformations of the data are often needed to make them zero-mean and stationary. The function ListDifference does appropriate differencing on ARIMA or SARIMA data. Note that all time series data should be input as a list of the form {x[1], x[2], ... }, where x[i] is a number for a scalar time series and is itself a list, x[i] = {x[i1], x[i2], ... , x[im]}, for an m-variate time series.

ListDifference[data, d] — difference data d times
ListDifference[data, {d, D}, s] — difference data d times with period 1 and D times with period s

Sample mean and transformations of data.

After appropriate transformations of the time series data, we can calculate the properties of the series. The sample covariance function for a zero-mean time series of n observations is defined to be γ(k) = (1/n) Σ x[t] x[t+k], summed over t = 1, ..., n-k. The sample correlation function is the sample covariance function of the corresponding standardized series, and the sample partial autocorrelation function is defined here as the last coefficient in a Levinson-Durbin estimate of AR coefficients. The sample power spectrum is the Fourier transform of the sample covariance function. The smoothed spectrum using the spectral window {W(0), W(1), ... , W(M)} averages the sample spectrum over neighboring frequencies with the weights W(k), while the smoothed spectrum using the lag window weights the sample covariances by the window before transforming.
CovarianceFunction[data, n] or CovarianceFunction[data[1], data[2], n] — give the sample covariance function of data, or the cross-covariance function of data[1] and data[2], up to lag n
CorrelationFunction[data, n] or CorrelationFunction[data[1], data[2], n] — give the sample correlation function of data, or the cross-correlation function of data[1] and data[2], up to lag n
PartialCorrelationFunction[data, n] — give the sample partial correlation function of data up to lag n
Spectrum[data] or Spectrum[data[1], data[2]] — give the sample power spectrum of data or the cross-spectrum of data[1] and data[2]
SmoothedSpectrumS[spectrum, window] — give the smoothed spectrum using the spectral window window
SmoothedSpectrumL[cov, window, ω] — give the smoothed spectrum as a function of ω using the lag window window

Properties of observed data.

This loads the package. The increasing magnitude of the data indicates a nonstationary series. We know this is the case, since it is generated from an ARIMA(1, 2, 0) process. Note that since the model has no MA part, the list of the MA parameters can be specified by { } or by 0. Differencing the above time series twice yields a new series that is stationary. A SARIMA(2, 2, 0)(1, 1, 0) series is transformed into a stationary series by differencing. This generates a time series of length 200 from an AR(2) model. This computes the sample covariance function of the above series up to lag 5. Here is the sample correlation function of the same series up to lag 5. Note that the correlation at lag 0 is 1. This is the sample partial correlation function of the same series at lags 1 to 5. Note that the lag starts from 1 and not 0. The power spectrum of the series is calculated here. This is the plot of the spectrum in the frequency range [0, π]. This gives the smoothed spectrum using the Daniell spectral window. We plot the smoothed spectrum. This yields the smoothed spectrum using the Hanning lag window. This is the plot of the smoothed spectrum.
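As a concrete illustration of the sample covariance and correlation functions described above, here is a plain-Python equivalent for a zero-mean scalar series. The normalization shown (dividing by the series length n at every lag) is an assumption; the package's exact convention may differ:

```python
# Sample covariance: gamma(k) = (1/n) * sum_{t=1}^{n-k} x[t] x[t+k],
# for a zero-mean scalar series x.
def covariance_function(data, n_lags):
    n = len(data)
    return [sum(data[t] * data[t + k] for t in range(n - k)) / n
            for k in range(n_lags + 1)]

# Sample correlation: the covariance function normalized by gamma(0).
def correlation_function(data, n_lags):
    gamma = covariance_function(data, n_lags)
    return [g / gamma[0] for g in gamma]

x = [0.5, -1.2, 0.3, 0.9, -0.4, -0.7, 1.1, -0.2]
rho = correlation_function(x, 3)
print(rho[0])  # 1.0 -- the correlation at lag 0 is always 1
```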
We calculate the correlation function up to lag 2 from data generated from a vector MA(1) model. When fitting a given set of data to a particular ARMA type of model, the orders have to be selected first. Usually the sample partial correlation or sample correlation function can give an indication of the order of an AR or an MA process. An AIC or a BIC criterion is also used to select a model. The AIC criterion chooses the orders p and q to minimize a penalized measure of fit, based on the noise variance estimate obtained by fitting the time series of length n to an ARMA(p, q) model (the noise variance estimate is usually found via maximum likelihood estimation); the BIC criterion is similar, but its penalty grows with the logarithm of n.

AIC[model, n] — give the AIC value of model fitted to data of length n
BIC[model, n] — give the BIC value of model fitted to data of length n

AIC and BIC values.

Given a model, various methods exist to fit the appropriately transformed data to it and estimate the parameters. HannanRissanenEstimate uses the Hannan-Rissanen procedure to both select orders and perform parameter estimation. As in the long AR method, the data are first fitted to an AR(k) process, where k (less than some given kmax) is chosen by the AIC criterion. The orders are selected among all p ≤ Min[pmax, k] using BIC.

YuleWalkerEstimate[data, p] — give the Yule-Walker estimate of the AR(p) model
LevinsonDurbinEstimate[data, p] — give the Levinson-Durbin estimates of AR(i) models for i = 1, 2, ..., p
BurgEstimate[data, p] — give the Burg estimates of AR(i) models for i = 1, 2, ..., p
InnovationEstimate[data, q] — give the innovations estimates of MA(i) models for i = 1, 2, ..., q
LongAREstimate[data, k, p, q] — give the estimate of the ARMA(p, q) model by first finding the residuals from an AR(k) process
HannanRissanenEstimate[data, kmax, pmax, qmax] — give the estimate of the model with the lowest BIC value
HannanRissanenEstimate[data, kmax, pmax, qmax, n] — give the estimates of the n models with the lowest BIC values

Estimations of ARMA models.

This generates a vector AR(2) series.
Observe how the vector time series data are defined and input into the functions below. The Yule-Walker estimate of the coefficients of the AR(2) series and the noise variance are returned inside a model object. Here the AR(i) (i = 1, 2, 3) coefficients and noise variances are estimated using the Levinson-Durbin algorithm. Note that the small magnitude of the last AR coefficient for AR(3) indicates that 2 is the likely order of the process. The same parameters are estimated using the Burg algorithm. This gives the innovations-algorithm estimates of the MA(i) coefficients and noise variance. This estimates the same parameters using the long AR method. The data are first fitted to an AR(10) model. The residuals together with the data are fitted to an MA(2) process using regression. Here the parameters of the ARMA(1, 2) model are estimated using the long AR method. This calculates the AIC value for the model estimated above. The Hannan-Rissanen method can select the model orders as well as estimate the parameters. Often the order selection is only suggestive and it should be used in conjunction with other methods. Here we select three models using the Hannan-Rissanen method. This gives the BIC value for each model estimated above. MLEstimate gives the maximum likelihood estimate of an ARMA type of model by maximizing the exact likelihood function that is calculated using the innovations algorithm. The built-in minimization function is used and the same options apply. Two initial values are needed for each parameter, and they are usually taken from the results of the various estimation methods given above. Since finding the exact maximum likelihood estimate is generally slow, a conditional likelihood is often used. ConditionalMLEstimate gives an estimate of an ARMA model by maximizing the conditional likelihood using the Levenberg-Marquardt algorithm.
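The Yule-Walker estimate mentioned above can be sketched in a few lines: solve the Toeplitz system built from sample autocovariances. This Python/NumPy version is an illustration of the method, not the package's implementation:

```python
import numpy as np

# Sample autocovariance at lag k for a zero-mean series x.
def sample_acov(x, k):
    x = np.asarray(x, dtype=float)
    n = len(x)
    return float(np.dot(x[: n - k], x[k:]) / n)

# Yule-Walker: solve R phi = r, where R is the Toeplitz matrix of
# gamma(0..p-1) and r = (gamma(1), ..., gamma(p)). The noise variance
# estimate is gamma(0) - phi . r.
def yule_walker(x, p):
    gamma = [sample_acov(x, k) for k in range(p + 1)]
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    r = np.array(gamma[1: p + 1])
    phi = np.linalg.solve(R, r)
    sigma2 = gamma[0] - phi @ r
    return phi, sigma2

# Simulate an AR(2) series with phi = (0.5, -0.3) and check the estimate
# lands near the true coefficients.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
phi, sigma2 = yule_walker(x, 2)
print(phi)  # roughly [0.5, -0.3]
```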
ConditionalMLEstimate[data, p] — fit an AR(p) model to data using the conditional maximum likelihood method
ConditionalMLEstimate[data, model] — fit model to data using the conditional maximum likelihood method, with the initial values of the parameters given as the arguments of model
MLEstimate[data, model, {θ, {θ[1], θ[2]}}, ...] — fit model to data using the maximum likelihood method, with initial values {θ[1], θ[2]}, ... for each parameter θ
LogLikelihood[data, model] — give the logarithm of the Gaussian likelihood for the given data and model

Maximum likelihood estimations and the logarithm of Gaussian likelihood.

option name: MaxIterations; default value: 30 — maximum number of iterations in searching for the minimum

Option for ConditionalMLEstimate.

A vector AR(1) series of length 200 is generated. This yields the conditional maximum likelihood estimate of the parameters of a vector AR process. In the absence of an MA part, this estimate is equivalent to a least squares estimate and no initial values for the parameters are needed. This gives the conditional maximum likelihood estimate of an ARMA(2, 1) model. The initial parameter values (φ[1] = 0.4, φ[2] = -0.35, θ[1] = 0.6) have to be provided as the arguments of the model. The Hannan-Rissanen method is used to select a model. The estimated model parameters can serve as the initial values for parameters in maximum likelihood estimation. The above result is input here for the conditional maximum likelihood estimate. A SARIMA(1, 0, 1)(0, 0, 1) series is generated. This yields the maximum likelihood estimate of the parameters of a SARIMA(1, 0, 1)(0, 0, 1) model. Note that the parameters to be estimated are entered symbolically inside the model, and two initial values are needed for each of them. Since the calculation of the likelihood of a univariate series is independent of the noise variance, it should not be entered in symbolic form. A bivariate MA(1) series is generated.
For a vector ARMA series, the calculation of the maximum likelihood is not independent of the covariance matrix; the covariance matrix has to be input as symbolic parameters. This gives the logarithm of the Gaussian likelihood. Let β = (φ[1], φ[2], ..., φ[p], θ[1], θ[2], ..., θ[q])′ be the parameters of a stationary and invertible ARMA(p, q) model and β̂ the maximum likelihood estimator of β. Then, as n → ∞, √n (β̂ − β) converges in distribution to a normal distribution with mean zero and covariance matrix V. For a univariate ARMA model, AsymptoticCovariance calculates the asymptotic covariance V. The function InformationMatrix[data, model] gives the estimated asymptotic information matrix, whose inverse can be used as the estimate for the asymptotic covariance.

AsymptoticCovariance[model] — give the covariance matrix V of the asymptotic distribution of the maximum likelihood estimators
InformationMatrix[data, model] — give the estimated asymptotic information matrix

Asymptotic covariance and information matrix.

This gives the asymptotic covariance matrix of the estimators of an MA(3) model. The above asymptotic covariance is displayed in matrix form. This gives the estimate of the information matrix. The above information matrix is displayed here. There are various ways to check the adequacy of a chosen model. The residuals of a fitted ARMA(p, q) process are the standardized one-step prediction errors, for t = 1, 2, ..., n. One can infer the adequacy of a model by looking at the behavior of the residuals. The portmanteau test uses the (Ljung-Box) statistic Q[h] = n(n+2) Σ ρ̂(k)² / (n−k), summed over k = 1, ..., h, where ρ̂ is the sample correlation function of the residuals; Q[h] is approximately chi-squared with h − p − q degrees of freedom. Q[h] for an m-variate time series is similarly defined, and is approximately chi-squared with m²(h − p − q) degrees of freedom.

Residual[data, model] — give the residuals of fitting model to data
PortmanteauStatistic[residual, h] — calculate the portmanteau statistic Q[h] from residual

Residuals and test statistic.

The adequacy of the fitted model for the earlier example is accepted at level 0.05, since the portmanteau statistic falls below the corresponding chi-squared critical value. After establishing the adequacy of a model, we can proceed to forecast future values of the series.
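One common concrete form of the portmanteau statistic is the Ljung-Box version sketched below in Python; the package may use a close variant, so treat the exact formula as an assumption:

```python
import random

# Sample autocorrelation of the residual series at lag k.
def acorr(res, k):
    n = len(res)
    m = sum(res) / n
    c0 = sum((r - m) ** 2 for r in res) / n
    ck = sum((res[t] - m) * (res[t + k] - m) for t in range(n - k)) / n
    return ck / c0

# Ljung-Box portmanteau statistic:
# Q[h] = n(n+2) * sum_{k=1}^{h} rho(k)^2 / (n-k).
def portmanteau_statistic(res, h):
    n = len(res)
    return n * (n + 2) * sum(acorr(res, k) ** 2 / (n - k)
                             for k in range(1, h + 1))

# White-noise-like residuals should give a Q well below the chi-squared
# critical value (about 18.3 at the 0.05 level with 10 degrees of freedom).
random.seed(1)
res = [random.gauss(0, 1) for _ in range(500)]
print(portmanteau_statistic(res, 10))
```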
The best linear predictor is defined as the linear combination of observed data points that has the minimum mean-square distance from the true value. BestLinearPredictor gives the exact best linear predictions and their mean squared errors using the innovations algorithm. When the option Exact is set to False, the approximate best linear predictor is calculated. For an ARIMA or SARIMA series with a constant term, the prediction for future values of {X[t]} can be obtained from the predicted values of the differenced series using IntegratedPredictor.

BestLinearPredictor[data, model, n] — give the prediction of model for the next n values and their mean square errors
IntegratedPredictor[xlist, {d, D}, s, yhatlist] — give the predicted values of {X[t]} from the predicted values yhatlist

Predicting time series.

option name: Exact; default value: True — whether to calculate the prediction exactly

Option for BestLinearPredictor.

This gives the prediction for the next three data points and their mean square errors for the vector ARMA(1, 1) process. Here is the prediction for the next four data points and their mean square errors for the previously fitted SARIMA(1, 0, 1)(0, 0, 1) model. Here is the prediction for the next four data points and their mean square errors for an ARIMA(2, 2, 2) model.
Source: http://reference.wolfram.com/applications/timeseries/SummaryOfTimeSeriesFunctions/2.2.html (crawled 2014-04-21)
Magic numbers: A meeting of mathemagical tricksters
May 25, 2010 3:20 PM
posted by andoatnp (233 comments total) 31 users marked this as a favorite

OK, I fully understand Monty Hall (including how the right answer works intuitively) but I admit I'm having trouble getting this one. It's definitely going to give me some food for thought on the drive home. Thanks! posted by kmz at 3:34 PM on May 25, 2010

This makes me lament being smart enough to recognize that these folks are having fun because they're a lot smarter than I am. posted by maxwelton at 3:35 PM on May 25, 2010 [6 favorites]

I clicked the link to see this word and was not disappointed. posted by I_pity_the_fool at 3:37 PM on May 25, 2010

That is, I see how the math works and I don't disagree with it, but I'm not getting the intuition on why it works like that. It also reminds me of an old logic puzzle (also controversial) that involves a conversation providing way more information about one person's kids than you would think, but I don't remember exactly how it goes. Anybody have a clue what I'm talking about? posted by kmz at 3:39 PM on May 25, 2010

Goddamnit! If you have one boy, and one other kid, then there's a 50% chance your other kid is a boy. My brain will brook no argument about this. posted by mrnutty at 3:43 PM on May 25, 2010 [16 favorites]

I don't like his question. The Tuesday part bugs me in particular, and the explanation in the article seems wrong. Mainly because in the simplified version (without the Tuesday part), the answer should be 1 in 2, not 1 in 3, since the question did not specify whether the boy/girl or girl/boy ordering mattered. The question was the probability of two boys. You have a total set of 2; the possible combinations are Boy/Boy or Boy/Girl, since semantically Boy/Girl is the same as Girl/Boy in the context of the stated question.
The born on a Tuesday part is irrelevant, given that the day of the week on which a child is born does nothing to determine the gender of the child. I can understand how maybe you could construe a difference between Boy/Girl and Girl/Boy as being relevant to the question, but only if it was stipulated as a concern. His mention that "Tuesday is the most relevant part" only confuses the logical steps to create a formula, the main reason being that the day of the week that a child is born on does not determine the gender of the child. Am I missing something here? Plate of beans? posted by daq at 3:44 PM on May 25, 2010 [14 favorites]

I see I'm not the only one... posted by daq at 3:45 PM on May 25, 2010

Goddamnit! If you have one boy, and one other kid, then there's a 50% chance your other kid is a boy. My brain will brook no argument about this.
Nope. If you look at the sample of all people who have 2 children, one of whom is a boy, one third will have two boys, because whether the original boy is the older or younger is not specified. You're thinking about a case where you have two children, the older of whom is a boy, and trying to calculate the probability the younger child is also a boy. That's extra information. posted by kmz at 3:47 PM on May 25, 2010 [15 favorites]

Heh. I brought up the very problem Foshee posed on another internet forum and it produced the kind of polarization and dogged stubbornness that I usually associate with threads on vegetarianism or Israel/Palestine. Good luck, all! posted by ricochet biscuit at 3:48 PM on May 25, 2010 [1 favorite]

"To answer the question you need to first look at all the equally likely combinations of two children it is possible to have: BG, GB, BB or GG. The question states that one child is a boy. So we can eliminate the GG, leaving us with just three options: BG, GB and BB. One out of these three scenarios is BB, so the probability of the two boys is 1/3."
GG is of course rightly eliminated, but since the problem didn't state the order that the children were born in, GB and BG both describe the same situation. If you want to use GB and BG then you should also use something like bB and Bb, which would mean that the likelihood of the gender of the other child is equally divided between male and female. Which is abundantly reflected by reality. The error lies in the construction of the problem. posted by vapidave at 3:48 PM on May 25, 2010 [13 favorites]

How about this, daq: The set of all families with two kids consists of 25% bb, 25% gg, 25% bg, and 25% gb. I think I buy the difference between bg and gb when stated like that. posted by mrnutty at 3:49 PM on May 25, 2010 [1 favorite]

Fascinating, awesome. Now, how can I get a Jigazo, which is mentioned in the article? The Jigazo is a jigsaw puzzle that can be customised to display any picture you want. It contains 300 identically shaped pieces that are all a different shade of blue, and all have a unique symbol on the back. If you take a portrait of yourself and email the image to the Jigazo website, you will receive an email back with a map of the arrangement of the symbols such that when the pieces are assembled in this way the jigsaw shows your portrait. posted by nevercalm at 3:51 PM on May 25, 2010

daq, vapidave: Consider somebody doing this activity 1000 times: flipping a fair coin twice, and recording the results. Among the results with at least one head, how many results are both heads? posted by kmz at 3:51 PM on May 25, 2010 [1 favorite]

the answer should be 1 in 2, not 1 in 3, since the question did not specify whether the boy/girl or girl/boy ordering mattered.
He could be choosing the first or second child from the sample space of GG, BB, GB, BG. If you choose any one of those with a B, then only B, G, G remain, or 1 out of 3 for a boy. posted by Brian B. at 3:54 PM on May 25, 2010 [1 favorite]

OK, so the maths does seem to work, not arguing with that.
It doesn't look like it works in the real world, though, which is where I'm getting confused. Consider the situation where Alice tells Bob, "I have two children, at least one of whom is a boy." Bob can then calculate the probability that Alice has two boys (and I'm happy to accept that that's 1/3, although I originally went for 1/2). But if Alice then says "Oh, and that boy was born on a Tuesday," this new information isn't relevant, because (unlike in the Monty Hall problem) it doesn't touch on any of the factors involved in the first calculation (probability of any one birth being a boy, number of children in total). So it's still 1/3. To my mind, that's an exact restatement of the original problem. If the answers are different, I suppose it can't be exactly the same, but I'm stumped if I can see the difference. Anyone got any bright ideas? posted by ZsigE at 4:03 PM on May 25, 2010 What about cultures that value boys over girls? Having a boy first makes it less likely that a second child will be born. The majority of two-child families will be GB or GG, and BB would be less common than mathematics would suggest. posted by bgrebs at 4:04 PM on May 25, 2010 [1 favorite] Where you're getting hung up is the difference between a "generic" boy, and a "specific" boy. The more specifically the boy is identified, the closer to 50% the answer becomes, as you can get closer to being able to "factor out" a specific boy, which turns it into a simple "what are the odds a single child is a boy or a girl" problem -- 50%. I have two children. One of them is a boy. How likely is the other child to be a boy? 1/3. I have two children. One of them is a boy born on Tuesday. How likely is the other a boy? 13/27. I have two children. One of them is a boy named Sue who walks with a limp, has three nipples and is majoring in tirebiting at the school of hard knocks. How likely is the other a boy? 1/2. 
In the generic case, there's three of four cases where there's "a boy" existent -- and it could be either the younger or older sibling being discussed, but only one of those three cases has ANOTHER Okay, I need MOAR COFFEE. posted by seanmpuckett at 4:04 PM on May 25, 2010 [18 favorites] I'm with daq ... the question is semantically flawed if the purpose was to create a brainteaser. "I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?" It doesn't mention birth order. Everything else being brought up is being inferred from the question by the listener. I have two children. One is a boy. What're the odds on the other one? Fifty-fucking-fifty. There's only one answer, not math gymnastics. posted by Cool Papa Bell at 4:04 PM on May 25, 2010 [5 favorites] Yeah, I just figured out what seanmpuckett mentions above. Born on Tuesday narrows the sample space, leading to a closer to 1/2 chance. I wonder if this thread will be like Wikipedia talk threads about Monty Hall or .999... = 1. posted by kmz at 4:11 PM on May 25, 2010 Err - yes, even stated as it is: "I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?" This does not exclude the children being twins. Both born on a Tuesday. Nobody said the other child was NOT born on the same Tuesday. He's just saying that one boy was born on a Tuesday... and could have followed it up with "and the other boy - or girl - was born on the same Tuesday". This also says nothing about birth order... what if it was a simultaneous Caesarean? posted by VikingSword at 4:13 PM on May 25, 2010 This seems like an annoying trick, not a puzzle. The difference is between the cases where order is important and where it is not. By specifying a distinguishing characteristic for the boy, you are forcing people to consider order and BB becomes bB and Bb. But really nothing has changed. GB and BG are not equally probable to BB or GG. They are exactly half as probable.
posted by Nothing at 4:17 PM on May 25, 2010 [2 favorites] For those who are looking to confirm this for themselves: think of the gender and birthday of the two children as independent events: what happens with the older one has no effect on the younger. Now, we can make a chart of all possible combinations, and cross out the ones where no boys are born on a Tuesday. The remaining combinations can be filled in with 1s (for one boy) and 2s (for two boys). Rows are the younger child, columns the older, in the same order (b/g for boy/girl; M through Su for day of birth):

     bM bT bW bTh bF bS bSu gM gT gW gTh gF gS gSu
bM   .2............
bT   22222221111111
bW   .2............
bTh  .2............
bF   .2............
bS   .2............
bSu  .2............
gM   .1............
gT   .1............
gW   .1............
gTh  .1............
gF   .1............
gS   .1............
gSu  .1............

14 ones, 13 twos, 27 in total: 13/27 chance of two boys. It's also interesting that if no information is given about the birthday, it goes back to 1/3 chance of two boys:

     bM bT bW bTh bF bS bSu gM gT gW gTh gF gS gSu
bM   22222221111111
bT   22222221111111
bW   22222221111111
bTh  22222221111111
bF   22222221111111
bS   22222221111111
bSu  22222221111111
gM   1111111.......
gT   1111111.......
gW   1111111.......
gTh  1111111.......
gF   1111111.......
gS   1111111.......
gSu  1111111.......

posted by Upton O'Good at 4:18 PM on May 25, 2010 [28 favorites] It's easier to conceptualize if you place both the boy and the other child on an infinite, frictionless plane, in a vacuum. Just make sure they have spacesuits. posted by Ritchie at 4:20 PM on May 25, 2010 [3 favorites] My favorite follow-up to this puzzle is, "You have a brother. Is it more likely that you're male or female?" The answer is left as an exercise to the reader. posted by LSK at 4:20 PM on May 25, 2010 [2 favorites] I have two children. One is a boy. What're the odds on the other one? Fifty-fucking-fifty. There's only one answer, not math gymnastics. Yeah, it's completely irrelevant that one of the children is a boy, and it's certainly irrelevant what day it was born on, or how far it shot out.
Without detailed genetic information and family history data pushed through a mathputer, the probability of one child being a boy is 50%, and the probability of the other child being a boy is also 50%. posted by turgid dahlia at 4:21 PM on May 25, 2010 [1 favorite] The reasoning in the article is sound, but they overlooked the 0.2 percent chance that the boy born on tuesday is one of two identical twins. So, it should be slightly higher than 13/27. posted by esprit de l'escalier at 4:21 PM on May 25, 2010 [2 favorites] Birth order is a red herring. The key point is that they're two distinct independent events. You can't mix them up. Nothing: Are you saying 100 coin tosses doesn't give 25 each of HH, TH, HT, and TT? You might want to rethink that. posted by kmz at 4:23 PM on May 25, 2010 [1 favorite] The reasoning in the article is sound, but they overlooked the 0.2 percent chance that the boy born on tuesday is one of two identical twins. So, it should be slightly higher than 13/27. Eponysterical. See above :) posted by VikingSword at 4:24 PM on May 25, 2010 The coins are clearer. #1. If you have two coins, toss one of them and it comes up heads, then the chance of the second one also being heads is 1/2. #2. If you know a randomly selected pair of coins have been tossed, and that one of them, randomly selected, came up heads, then the chance of the other one being heads as well is 1/3. He's worded a #2 instance to look like a #1 instance to make himself look smarter than you. posted by imperium at 4:26 PM on May 25, 2010 [19 favorites] *sigh* Note to self: don't try to argue about conditional probability on the Internet. It's not worth the trouble. (Probably.) posted by kmz at 4:27 PM on May 25, 2010 [4 favorites] You live in China so the probability is zero. HA! What if you grew up in Hong Kong? posted by polymodus at 4:30 PM on May 25, 2010 And twins don't matter a whit to the calculations. Ok, now I'm really done. 
posted by kmz at 4:32 PM on May 25, 2010 [1 favorite] But if Alice then says "Oh, and that boy was born on a Tuesday," this new information isn't relevant, because (unlike in the Monty Hall problem) it doesn't touch on any of the factors involved in the first calculation (probability of any one birth being a boy, number of children in total). So it's still 1/3. If they had two boys born on a tuesday, then it could be either one they're talking about, so we only need to count it once in the sample space, which results in one less than 28. posted by Brian B. at 4:35 PM on May 25, 2010 it's completely irrelevant that one of the children is a boy No, it isn't. That's exactly the first point that changes the result. In your example, the options are binary: boy or girl. Therefore, 50%. In the original example (minus Tuesday), the options include: 2 boys, 1 boy + 1 girl, 2 girls, for 3 possible combinations, and therefore any given child having a 1 in 3 chance of being a boy. For each additional factor, the odds change. This is what odds are all about. While it remains true that (all other factors being equal) the odds of any given result of two possible combinations is 50%, it's also true that adding factors changes the possible combinations, which changes the odds. In other words, your different answers are talking about different things. You're kind of all right (if you ignore strict definitions). Or you're kind of all wrong. Or exactly one of you is right. posted by It's Raining Florence Henderson at 4:36 PM on May 25, 2010 [2 favorites] Yeah, it's completely irrelevant that one of the children is a boy, and it's certainly irrelevant what day it was born on, or how far it shot out. Without detailed genetic information and family history data pushed through a mathputer, the probability of one child being a boy is 50%, and the probability of the other child being a boy is also 50%. Do you have any reasoning to refute the math in the other answer?
posted by OmieWise at 4:37 PM on May 25, 2010 So, I’m aware of the reasoning for why the Tuesday-free version of the question has the answer 1/3, and I do follow why the answer for this version is 13/27. But the problem I have is that questions worded like this are exceptionally hard to read in the intended way — which in this case is something along these lines: “Assuming that the independent probabilities of a child being a boy and being born on any particular day of the week are exactly 1/2 and 1/7 respectively, what is the probability that, in families with exactly two (child × day-of-birth) pairs of which at least one is (boy × Tuesday), the other is (boy × D) for any weekday D?” posted by Tetch at 4:38 PM on May 25, 2010 [1 favorite] GB and BG both describe the same situation. Perhaps they do but this "same situation" is twice as likely as 2 boys. posted by Obscure Reference at 4:42 PM on May 25, 2010 Isn't the problem here that there are two interpretations of the original statement? English uses "one" as both a pronoun and a number: One [of my children, whom I am specifically referring to] is a boy born on a Tuesday. One [of my children, or maybe more, but definitely at least one] is a boy born on a Tuesday. These are two completely different questions. It's easy to see how people can be confused by the non-50% answers since they're answering a different question. Grade-school word trickery masquerading as collegiate-level probability. posted by 0xFCAF at 4:42 PM on May 25, 2010 [2 favorites] Having discovered kmz's rule long ago myself, I will content myself just to muddy the waters further with this curious paradox [pdf] of Smullyan. posted by Wolfdog at 4:46 PM on May 25, 2010 [1 favorite] I'm having trouble parsing your comment, 0xFCAF. The explanation in the article does account for the possibility that both children are boys born on Tuesday. No word-trickery involved.
posted by roll truck roll at 4:51 PM on May 25, 2010 I normally love riddles, but this is seriously confusing and aggravating me. I don't "get" it. Is the 'trick' that by implying one boy is born on a Tuesday, the other child is not? Is that what this is about? In the original example (minus Tuesday), the options include: 2 boys, 1 boy + 1 girl, 2 girls, for 3 possible combinations, and therefore any given child having a 1 in 3 chance of being a boy. But the question already mentions that one is a boy, so isn't GG ruled out...? I still think the probability is 50%! posted by Gordafarin at 4:51 PM on May 25, 2010 Wow, the amazing thing, and I swear I'm not just making this up to make myself seem all smart, but my first answer to the Tuesday's Boy question was 1/2. In all probability, I was simply calculating something the wrong way, and incorrectly at that, and it's just a coincidence. posted by Civil_Disobedient at 4:56 PM on May 25, 2010 Okay, after reading the blog post I think I understand it marginally better. So thanks for that. However, my brain is still telling me that any number but 50% makes no sense. posted by Gordafarin at 4:58 PM on May 25, 2010 To properly analyze the problem, don't we need to know the odds of him saying he has a boy over saying he has a girl? For example, if the man is equally likely to tell us about a boy as a girl and ignoring the Tuesday bit, we have 2 girls at a 25% chance with a 100% chance of being told about a girl, a boy and a girl at a 50% chance with a 50% chance of being told he has a boy and a 50% chance of telling us he has a girl, and a 25% chance of having two boys with a 100% chance of telling us about a boy. So in the end, weeding out the cases where we are informed he has a girl, as that did not happen, we have a .25*1 out of .5*.5+.25*1 probability of the second being a boy, or 50/50 as one would naively expect.
Though if the man did value boys more highly than girls, the probability the other child would be a boy would be lower, as he would have been more likely to brag about having a boy when only one child was male. posted by Zalzidrax at 4:59 PM on May 25, 2010 Agh! Martin Gardner died 2 days ago?! I literally just finished Calculus Made Easy which shockingly did actually make calculus quite easy. posted by Damienmce at 5:02 PM on May 25, 2010 "Perhaps they do but this "same situation" is twice as likely as 2 boys." Ask yourself why do they introduce both GB and BG. I don't know other than to confuse the situation because there is no difference between them. One is a G and the other is a B is not different from one is a B and the other a G. I first read a version of this problem in Marilyn vos Savant's column in the late 90's and it has been confusing people for a long time now. In any case the reality is that the split for the gender* of any other child is in fact 50-50 and if the reality doesn't match then the phrasing must be in error. *Used here of course in an oversimplified sense. posted by vapidave at 5:06 PM on May 25, 2010 [1 favorite] This is easy easy easy. "I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?" The probability that you have two boys is either exactly zero or exactly one, because the outcome has already taken place and the only possible probabilities for realized events are zero and one. posted by ROU_Xenophobe at 5:07 PM on May 25, 2010 [9 favorites] I think I get it (I didn't at first, even after reading the FA). The probability of ONE child (either child) being a boy is 50%. And would continue to be so, no matter how many kids you have, or what you know about them, if you consider them individually, in isolation. But the question is about the SET of BOTH children. Of all the possible gender combinations available in a set of two, what are the odds for the two-boys combination?
And further, what are the odds of the SET being BB, knowing one is B? And further further, what are the odds of the set being BB, knowing one of them is a boy born on a Tuesday (here, we get into counting handshakes, and the odds of two people having the same birthday, and all that other stuff that I'm really bad at thinking about)? posted by Mister Moofoo at 5:16 PM on May 25, 2010 See, this is why I did so badly in statistics. Too often my answer to a proof is, "I don't believe you." This is from someone who did well in differential equations in high school. I mean, I can't deny hundreds of years of thought about this, but I can't shake the feeling that it's just a math game that doesn't cross over into reality either. I wish I could be educated away from under this burden. posted by cmoj at 5:18 PM on May 25, 2010 [1 favorite] I'm not getting the intuition on why it works like that. Probability doesn't lend itself well to intuition. Which is why casinos, insurance companies, etc. are so profitable. posted by Blazecock Pileon at 5:19 PM on May 25, 2010 [5 favorites] Wow, the article on the Gathering includes a link to the blog of the 'youngest attendee', Nicholas Bickford. He sounds like an awesome 12 year old! posted by jacalata at 5:30 PM on May 25, 2010 [1 favorite] Do you have any reasoning to refute the math in the other answer? Yes. It's called Math Of The Gut, or Stomach-Reasoning, learned only at the School Of Hard Knocks, and not some fancy ivy league institute. posted by turgid dahlia at 5:32 PM on May 25, 2010 [2 favorites] The probability that you have two boys is either exactly zero or exactly one, because the outcome has already taken place and the only possible probabilities for realized events are zero and one. Are you just throwing out Schrodinger's Box, cat unseen? posted by Lemurrhea at 5:35 PM on May 25, 2010 "Nothing" had it correct above. Let's take the first instance: you have two kids and one is a boy - what is the chance that both are boys?
The probability is 1/3: GB, BG, BB -- I think everyone agrees with that. Now, when you add that one is born on tuesday watch how birth order sneaks into the probabilities. Now you have: GBTu, BTuG, BTuB, BBTu Now the probability is 1/2 as everyone thinks it should be intuitively. The calculation in the article is wrong, and is playing a really sneaky trick. The final tabulation should be 14/28. If the boy we don't know about is also born on Tuesday, there are in fact two ways that could happen. He could have been born first, or born after the boy we already know about. Therefore you can't drop that from the tabulation. posted by spaceviking at 5:35 PM on May 25, 2010 [2 favorites] It's also interesting that if no information is given about the birthday, it goes back to 1/3 chance of two boys: Oh, poppycock. The child had to have been born on a day, if not a Tuesday, so nothing changes. posted by Sys Rq at 5:41 PM on May 25, 2010 [1 favorite] Oh, well, here's a figure that might be of interest. If you want my own explanation, there it is: the chart shows 2*7*2*7 possibilities for the sex and birthday of someone's first- and second-born children. If all we know is that someone has two children, any one of those squares is equally likely. (If you don't agree to that, you're thinking about a different or more complicated problem than the one that was intended, and you can work on that problem with whatever assumptions you like on your own.) When the speaker says "I have a boy born on tuesday", we know his case belongs to one of the - count them - 27 highlighted green boxes which include a Boy born on Tuesday. Any one of those 27 is still equally likely. Of the 27, 13 have two boys (bolded). This is pretty much a direct translation of what Tetch has written above. (If you look carefully at the diagram, you can probably refine your sense of "what's really going on here."
A temptation in thinking this through casually is to think that there are 4*7 = 28 squares which include "B,Tue" -- the thinking might be, "2 choices for which position has B,Tue; 2 choices for the other sex; 7 choices for the other day". But that's a slight overcount as it counts the {{B,Tue},{B,Tue}} pair twice, incorrectly. Spaceviking has just demonstrated precisely the temptation to double-count that square.) Ask yourself why do they introduce both GB and BG You have two options in dealing with a problem like this: you can work with three cases ("2B, 2G, and 1B1G") which have different probabilities of occurring (1B1G is twice as likely as either of the other two cases, as has already been explained to death); or you can temporarily introduce an element of order - which is unnecessary but convenient, like an extra construction line in a geometry problem - and work with four cases (BB, BG, GB, GG) which are all equally likely. Both approaches, if worked through carefully, give the same answer. The advantage to the latter is that it often reduces the calculation to simply counting boxes in a grid. posted by Wolfdog at 5:41 PM on May 25, 2010 [3 favorites] The question doesn't address birth order. It simply asks what are the odds my two children are both boys. If we know nothing, then the possible, and equally likely, scenarios are that I have two boys, that I have two girls, or that I have one of each. If we know one child is a boy, we've eliminated the two girls scenario, leaving two equally likely possibilities. posted by Naberius at 5:48 PM on May 25, 2010 Aren't we ignoring the sex ratio? We wouldn't want to get the math wrong after all that work. posted by ecurtz at 5:52 PM on May 25, 2010 Naberius: "The question doesn't address birth order. It simply asks what are the odds my two children are both boys." Reread Wolfdog's comment.
You're right that birth order doesn't matter, but it's a way of helping yourself understand that 1b1g and 2b don't have the same probability. If you don't want to call it birth order, then call it alphabetical order, or call it "which one sits on the left in the family photo." posted by roll truck roll at 5:52 PM on May 25, 2010 Ask yourself why do they introduce both GB and BG. I don't know other than to confuse the situation because there is no difference between them. No. Take an analogous situation like coin flips. The probability of getting two tails is not the same as getting only one tail for two flips. In the set of possible outcomes, {TT, HT, TH, HH} there is a very real difference between them. posted by Avelwood at 5:56 PM on May 25, 2010 Reality DOES work like this! It is not some word puzzle. We can illustrate it with coins. Let's play a game. This is something you can try at home: You and I and a friend. The friend has two coins. 1. He will secretly flip both coins, under a hat or behind a screen, and take a look at them. 2. When at least one of the coins is heads he will say "OK. AT LEAST ONE OF THE COINS is HEADS!" 3. Now, he will grab one of the coins with heads up, show it to us, and HAND IT TO US. Now, there is one coin sitting there behind the screen. It is either HEADS or TAILS. That is, my OTHER COIN/CHILD is either BOY or GIRL. Both are equally probable, right? RIGHT? If you try this at home and repeat this 100 times, I guarantee that 1/3 OF THE TIME THAT OTHER COIN is HEADS. NOT 1/2. There is no word game here. In fact, it is probably easy to construct a casino game based on the above. Then people can continue to argue that it is semantics - that that other child is equally likely to be a boy or girl - as they lose the shirt off their back. posted by vacapinta at 5:56 PM on May 25, 2010 [3 favorites] *gropes for calculator* 1 Boy 1 X? Carry the Tuesday... Total: Ghostbusters 2 I'd better get back to you...
posted by zarq at 5:59 PM on May 25, 2010 [7 favorites] What is the probability I have two boys? So the question boils down to... "What is the probability that the one child whose sex I have not disclosed is a boy?" A: 1/2. posted by ZenMasterThis at 6:02 PM on May 25, 2010 [2 favorites] Ok the problem here is that in the article they double count the BB combination giving them equal weight with the BG combos. It should just be: BtuG, GBtu, BBtu (BtuB) The order in which the two boys are born shouldn't matter and shouldn't create another set of probabilities. As vacapinta so clearly described. So there are 3 combos with 7 possibilities each with only 7 satisfying the two boy requirement, so 7/21 = 1/3 posted by spaceviking at 6:05 PM on May 25, 2010 "I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?" My stats prof would likely tell me that the probability is either 0 or 1, for while we do not know if the other child is a boy, the child's sex/gender is already set, thus no chance. How rare it is could be gauged by probability, IMHO. If the question was something like "I have a child, soon to have another one which is not yet conceived. What is the probability that this one will be a male?" then we would have something, for nothing is set in stone yet. Sorry if this has already been said upthread, I skimmed and really did not see this solution popping out. posted by JoeXIII007 at 6:05 PM on May 25, 2010 There is no word game here. Actually, you point out the problem here right in your step two. 2. When at least one of the coins is heads he will say "OK. AT LEAST ONE OF THE COINS is HEADS!" In the boy on Tuesday puzzle, this is only going to be similar if the guy has some reason to talk about the boy born on Tuesday over the girl born on Tuesday or the boy born on Wednesday.
A proper formulation where all this tricky probability work would actually apply would go something like this: "I was talking to my friends recently who just had a boy--he was born on Tuesday--which is quite the coincidence since I also have a boy who was born on Tuesday. Knowing that, what is the probability that the other of my two children is a boy?" posted by Zalzidrax at 6:12 PM on May 25, 2010 Well, according to this genealogy site, Gary Foshee has FOUR children, and only one of them is a boy. So, I say the probability is 0%, and I SAY HE IS A LIAR. Seriously, though, I was taught that when you work on probabilities, you make charts to help, and my chart would look like this:

B G
2 0
1 1
0 2

And then I would cross out the 0, 2 solution, since I already know that one of the children is a boy. So I am left with the possibility of one boy and one girl, or two boys. So I would have said the answer is 50%, and all the math people would have laughed and pointed their fingers and I'd have been shamed in front of the world. posted by misha at 6:17 PM on May 25, 2010 vacapinta, the point is that a reasonable interpretation of "One is a boy" is more like a coin game where your friend flips a nickel and a quarter and says "The nickel was heads. What are the odds the quarter was heads?". In code:

Interpretation 1: "One" means "a specific one of them":

var random = new Random();
int boys = 0, girls = 0;
for (int i = 0; i < 1000; i++) {
    int gender1 = random.Next(2), gender2 = random.Next(2);
    if (gender1 == 0) {
        if (gender2 == 0) {
            boys++;
        } else {
            girls++;
        }
    }
}
Console.WriteLine("{0} / {1}", boys, girls);

Output: 255 / 256, i.e. 50/50

Interpretation 2: "One" means "one or more of them":

var random = new Random();
int boys = 0, girls = 0;
for (int i = 0; i < 1000; i++) {
    int gender1 = random.Next(2), gender2 = random.Next(2);
    if (gender1 == 0 || gender2 == 0) {
        if (gender1 == 0 && gender2 == 0) {
            boys++;
        } else {
            girls++;
        }
    }
}
Console.WriteLine("{0} / {1}", boys, girls);

Output: 209 / 517, i.e.
1/3 posted by 0xFCAF at 6:24 PM on May 25, 2010 [3 favorites] For those who want to see it more intuitively, here's a pretty clear illustration of the sample space (which I now see somewhat resembles Wolfdog's). Just count up the area in the BB section and divide it by the rest -- either yellow for the BB probability, or the orange for the BB conditional on one boy having been born on a Tuesday. You can see that, as the width of the orange stripes shrinks (eg, if you were specifying the day of the year, rather than the day of the week), the ratio of the stripe inside the BB square to the stripe outside the BB square approaches 1/2 (ie, the intersection at the center of the cross goes to 0). So I'd say I understand it mathematically, and with the picture even understand it intuitively -- but I admit, something in me still balks. Not at the logic or truth of the result, but at some ill-defined conflict between my intuitions and the result. I do think it's worth thinking about why, for those of us who are perfectly fine with the 1/3 result, the Tuesday 13/27 bit is still unsettling. Variants show this even more clearly: I have two dice, which may be either black or red; one of them is red. What color is the other? Oh, and I just rolled a 3 with that red one: now what color is the other one? Weird. (Or, "oh, and that red one was in Reagan's pocket when he was shot. Now what color is the other one?") posted by chortly at 6:24 PM on May 25, 2010 [6 favorites] That's a lovely, well-organized illustration, chortly. Now what happens next is someone will look at it briefly and say "But the other child could be a boy or a girl! So it's 50%, obviously!" and walk away shaking their head at how gosh-darned silly and complicated you're making it. posted by Wolfdog at 6:34 PM on May 25, 2010 Dammit, and I was so proud that I got all those arithmetic questions right. I understand why the answer is not 1/2 even though my instinct wants to bludgeon anyone who says otherwise.
So now I have a better idea of how mathematicians can see the world... and it seems so strange. Magic? Having your hallways tiled in sequences? How can I get my brain operating on that level so I can get in on that action? posted by zix at 6:35 PM on May 25, 2010 Ok fuck, I'm wrong. 13/27 is correct. Damnnit. Ok I'm trying to make this make sense and I think this is the closest I've come. The question should be worded: What are the chances that I have two boys and at least one is a boy born on tuesday? You have 2 chances with 14 different possibilities for each chance (7 days boy + 7 days girl). Giving 196 total possible outcomes. Your chances of having at least one boy born on tuesday: 27/196 = 13.8%. Your chances of having two boys: 25%. Your chances of having two boys and at least one born on tues: 13/196 = 6.6%. This only helps me a little... time to drink posted by spaceviking at 6:37 PM on May 25, 2010 [2 favorites] For those of you that still don't buy the 1/3 explanation, consider it like this: In the year 2000 the parents have their first child which is either a boy or a girl. That's two possible outcomes. Then later in 2004 they have a second child, which also has equal chances of being a boy or a girl. That's another two possible outcomes. Those two events were not related in any way, which means in 2010 there are FOUR possible configurations: 1. a ten year old boy and a six year old girl 2. a ten year old boy and a six year old boy 3. a ten year old girl and a six year old girl 4. a ten year old girl and a six year old boy Now, can you see that #4 and #1 are completely different scenarios? They both involve having one of each gender, but there were two ways of getting to that result which means it was twice as likely. If you know that one of the kids is a boy you know that #3 was not possible, leaving three possible equally likely scenarios, each with a 1/3 chance.
Or another way of stating this: when the problem says "I have two kids, one of which is a boy, what is the probability that the other is a boy", it is not the same thing as asking "my younger child is a boy, what are the chances that my older child is a boy?" posted by Rhomboid at 7:00 PM on May 25, 2010 [4 favorites] Ok fuck, I'm wrong. 13/27 is correct. Damnnit. Try to think of it this way: The reason you are not double-counting the "two boys, both Tuesday" case is that there is no other ordering. For another case, let's say "boy-Tuesday, girl-Friday", you are also counting "girl-Friday, boy-Tuesday". You can't do that when they are both boys born on the same day. The "boy-Tuesday, boy-Tuesday" case is the same no matter how you order it. That overlap seems to be confusing the probability issue. posted by Avelwood at 7:03 PM on May 25, 2010 Whoops -- the math is fine and the picture is solid, but my words were a bit sloppy: I didn't mean "divide it by the rest" but "divide it by the total". Similarly, the ratio of orange inside the BB square to orange goes to 1/2 as the stripe's width shrinks. posted by chortly at 7:05 PM on May 25, 2010 Let's change the question a bit. I have 2 children, one of them was born on a particular day of the week and is a boy. What are the odds that the other child is a boy? To my mind, this is the same question, no? The days of the week are arbitrarily named. So let's call the day of the week the child was born on A, the next day B and so on. Fill in the chart the same way. The length of the week is arbitrary, too. So let's say "I have a child who was born on a particular day in a temporal cycle of length x" What's the probability that the other child is a boy as x approaches infinity? 1/2 But this is not information that is any different from knowing that one child is a boy to begin with. Every child is born on a particular day.
So either A) the day the child is born is irrelevant and shouldn't be taken into account or B) knowing that one child is a boy, the odds the other child is a boy is 50/50. I'm leaning towards A, personally. posted by empath at 7:05 PM on May 25, 2010 It makes me sad that mathematicians are still missing the gap in logic here. Possible options: (1) bb / (2) bg / (3) gg / (4) gb If he reveals that the first child is a boy, that eliminates TWO options (3 and 4), not one. This leaves two options (1 and 2). We thus have a 50% chance that he has two boys. If he reveals that the second child is a boy, that eliminates TWO options (2 and 3). This leaves two options (1 and 4). We again have a 50% chance that he has two boys. The flaw in all these puzzles is that we assume that the reveal only eliminates one option rather than two. The flaw is easier to visualize in the similar Monty Hall puzzle, and rather obvious if you sketch it out on a chalkboard. posted by kanewai at 7:17 PM on May 25, 2010 [2 favorites] Ok I think I understand this intuitively now. Pose the question a bit different: What percentage of families with two children have a boy born on tuesday? It makes sense that families with two boys will be more likely to have boys born on a tuesday than other families. Now, of those families how many have two boys? It makes sense that a much greater proportion of families with two boys will have boys born on a tuesday. In fact, if you think about it, the total number of boys in families with two boys is the same as the total number of boys in families with a boy and a girl. Therefore if you take some selection factor on the boys, it will be half as strong on the families with two boys as on the families with a boy and girl, because the families with two boys have two chances at it. posted by spaceviking at 7:19 PM on May 25, 2010 [6 favorites] Suppose A knows I have a son B, and A also knows I have two children, but A does not know the sex of my other child C.
Suppose I do not want A to know the sex of C. Should I then avoid telling A the day of the week that B was born? Because then A will be able to guess C's sex easier? Is that what the math is saying? posted by gubo at 7:20 PM on May 25, 2010 [1 favorite] The flaw in all these puzzles is that we assume that the reveal only eliminates one option rather than two. The flaw is easier to visualize in the similar Monty Hall puzzle, and rather obvious if you sketch it out on a chalkboard. He doesn't say which child is a boy, only that one of them is. posted by empath at 7:22 PM on May 25, 2010 [1 favorite] spaceviking, you know that actually makes sense to me. I retract my previous comment. posted by empath at 7:23 PM on May 25, 2010 Pose the question a bit different: What percentage of families with two children have a boy born on tuesday? It makes sense that families with two boys will be more likely to have boys born on a tuesday than other families. DUDE. You rock. I can fricking sleep tonight without grinding my teeth over this. posted by Durn Bronzefist at 7:26 PM on May 25, 2010 He doesn't say which child is a boy, only that one of them is. But that's ok ... as there is a 50% chance that he selects either path A or path B, so the odds remain the same. posted by kanewai at 7:28 PM on May 25, 2010 I don't even understand the introduction. "To answer the question you need to first look at all the equally likely combinations of two children it is possible to have: BG, GB, BB or GG. The question states that one child is a boy. So we can eliminate the GG, leaving us with just three options: BG, GB and BB. One out of these three scenarios is BB, so the probability of the two boys is 1/3." Yes, GB is the same as BG, it is the same outcome. Hence it is twice as likely to have it compared to GG or BB. How can it be 1/3? Why is BB not 1/4? How can you "eliminate" this? posted by yoyo_nyc at 7:35 PM on May 25, 2010 Reality really honestly does work like this. 
Consider this: If I have two children, one is a boy, and the other is of unknown gender. If as many people have stated, the gender of the second child is a 50/50 split, then of the people who have one male child, half should have two male children, and half should have one son and one daughter. This is due to people making the assumption that the having of a son is a confirmed (past) event, and the other event is an unknown event. It certainly makes sense for intuition to tell us this, because that's the situation that will be encountered in reality. In this situation the possibilities:
BB - Still Possible
GG - Impossible, (all will agree)
GB - Impossible to those making the mistake, but in the questioner's opinion, STILL POSSIBLE
BG - Still Possible.
There used to be a game show in which you had to pick one of three doors. One door had a good prize, the others had nonsense. So if you picked truly "at random" you had a 1/3 shot at picking the right door. After you had chosen, one of the remaining "bad" doors was eliminated, and you were left with the option to remain with your chosen door, or switch to the remaining mystery door. Although it is counterintuitive, switching is always right. You picked your door with a 1/3 chance of success. However when the "bad" door was eliminated, the probability that the unchosen door is correct grew. This is because the "correct" door can never be eliminated, and your door can never be eliminated, correct or not. (Shamefully, I've forgotten how to calculate exactly how much better it is). The situations you can eliminate or still have to consider matter a huge amount in these problems, and thinking chronologically is a big disadvantage. I know just enough about probability to know how horribly likely it is that I'm even making a mistake now, but this is what I understand at the moment. posted by SomeOneElse at 7:36 PM on May 25, 2010 [1 favorite] Kanewai, think of it in terms of information.
You can't just say that he's telling you path A or B with 50% probability. What information is he giving you? Technically, he's just saying (A OR B). Or, equivalently, NOT ((NOT A) AND (NOT B)) - this is the only choice eliminated, which corresponds to option 3, gg. posted by Lemurrhea at 7:42 PM on May 25, 2010 I think the problem most of us are having with this isn't the math so much as the fact that the reveal isn't that mindblowing: "What are the chances I have two boys?" Oh, I love these! Well, let me see... you would think that it would be 50%, but I bet there's an awesome non-intuitive answer! "[does math] Well, as you can see, one logical answer is 1/3..." That's what I'm talking about! "[does more math] ...but the real answer is 13/27, or 48.1%!" Oh. Okay. Cool trick, bro. posted by Ian A.T. at 7:46 PM on May 25, 2010 Yes. Sorry, I misread it. I know about selective choice in mathematics. I thought he eliminated GB instead of GG. posted by yoyo_nyc at 7:53 PM on May 25, 2010 Alright, to restate what I said above more clearly. In all families with 2 children, the total number of boys in families with a boy and a girl is equal to the total number of boys in families with two boys. Let's say you had a total of 56 boys. 28 of the boys will be in families with two boys, and 28 of them will be in families with a boy and a girl. Let's say you pick all the boys who are born on a Tuesday. So it stands to reason that (if you ignore small sample size) 4 of the boys in two boy families will be born on tuesday and 4 of the boys in boy girl families will be born on tuesday. So if you are a boy and you are born on Tuesday you have about a 50% chance to be in a two boy family. It's actually less because you have to ignore the case where both are born on tuesday as I was corrected somewhere up there. posted by spaceviking at 8:01 PM on May 25, 2010 [1 favorite] I have to work this out as a thought problem: 1. I have two coins, coin A and coin B.
I toss coin A, it comes up heads. Now I intend to toss coin B. Given the fact that coin A came up heads, what is the probability that coin B will now come up heads? Answer: 1/2. The outcome of tossing coin A doesn't affect the outcome of tossing coin B. 2. I have two coins, coin A and coin B. I toss coin A, it comes up heads. Also, I find out that coin A was minted in 1993. Now I intend to toss coin B. Given the fact that coin A came up heads and was minted in 1993, what is the probability that coin B will now come up heads? Answer: 1/2. The outcome of tossing coin A doesn't affect the outcome of tossing coin B, nor does the year it was minted. posted by gubo at 8:03 PM on May 25, 2010 It kind of seems like the 1/3 answer hinges on not "using" your knowledge that one child is a boy, that is, counting the G/G option when calculating the probability. And the 13/27 answer does use that knowledge, by not including any G/G options. I still feel like the option where both boys are born on a Tuesday should be double counted because they are still different boys, the one we know about and the one we don't, if birth order is going to matter, but I realise I am probably very wrong. This makes my head hurt. I need coffee. posted by lwb at 8:06 PM on May 25, 2010 So it stands to reason that (if you ignore small sample size) 4 of the boys in two boy families will be born on tuesday and 4 of the boys in boy girl families will be born on tuesday. You've made your argument worse here, because you've changed the question. If you're a boy in a 2 child family, you always have a 50/50 chance of having a sister or brother. The question was if one of 2 children is a boy, what are the odds that both are boys, which is a 1/3rd chance. The debate is over whether knowing the boy was born on tuesday makes a difference. posted by empath at 8:07 PM on May 25, 2010 I am totally ignoring the Tuesday thing atm as well. Another thing to consider.
If you already have one male child, and are expecting a second, there is a 50/50 chance that second child would be a boy or a girl. Let's say instead you are out and about and (this is why it is hard for us ppl who think) let's say a gypsy tells you: "You will have two children, and one of them will be a boy". Now, you have your first child. Let's say you have a 50/50 shot on this one. If it is a girl, then at this point you have reduced the problem to a 100% chance, if the gypsy is right, (and this is one irrationally reliable gypsy). If it is a boy, then when your second child comes, you have another 50/50 shot at boy or girl. This is not the same as assuming that the first child HAS to be a boy. Now let's consider the insane confluence of events scenario. So let's say the gypsy tells you you will have two children, and one will be a boy who will have three eyes. (Let's consider this to be ridiculously rare). Being a male child almost becomes irrelevant except for its attachment to the hyperirregularity. So my options become (and because it CAN happen): your first child: 50% chance boy, 50% chance girl, .00~1% chance 3 eyes. So essentially, 50/50 girl boy. If it isn't a 3-eyed boy, your next child WILL be a boy with 3 eyes because of that darned gypsy. If it IS a 3 eyed boy, your next child has a 50/50 shot at being boy/girl, and ANOTHER vanishingly small shot at having 3 eyes. Because most all of us think in terms of Already Having The X in the past, and Having a Chance at the Y in the future, we look at Boy Child or Boy Child on Tuesday or Boy Child with Three eyes as a confirmed past event and think that it doesn't affect the future event. But that isn't what these problems are asking! And I consider this stuff so challenging I want to point out that I am still probably wrong! Basically, my advice is: 1. If a person like this asks you a question like this, the answer isn't the obvious answer! (mom's got a probability doctorate and siblings are math students.
Really the correct answer is to glare and talk about online videogames in a droning monotonous voice.) 2. Think of these problems as Ides of March style predictions unless order is explicitly given. posted by SomeOneElse at 8:08 PM on May 25, 2010 [1 favorite] 1. I have two coins, coin A and coin B. I toss coin A, it comes up heads. Now I intend to toss coin B. Given the fact that coin A came up heads, what is the probability that coin B will now come up heads? Answer: 1/2. The outcome of tossing coin A doesn't affect the outcome of tossing coin B. You've changed the question. Flip both coins, but don't look at either. Someone looks at the coins for you and tells you that at least one coin is heads. What are the odds the other is also heads? 1/3. posted by empath at 8:09 PM on May 25, 2010 [2 favorites] Actually I just realised 1/3 is after eliminating G/G (I really do need that coffee) so I retract my comment. But it still seems wrong somehow. *glares at statistics in general* posted by lwb at 8:13 PM on May 25, 2010 Someone looks at the coins for you and tells you that at least one coin is heads. What are the odds the other is also heads? 1/3. Okay I get this scenario. Now what if someone looks at the coins and says one coin is heads, and that very same coin was minted in 1993? Is the probability that the other coin is heads still 1/3? posted by gubo at 8:15 PM on May 25, 2010 [2 favorites] Wait, but why is G/G in the original set of options but no G/G options in the second? I retract my retraction, I'm still confused, somebody please explain to me. posted by lwb at 8:15 PM on May 25, 2010 Someone looks at the coins for you and tells you that at least one coin is heads. What are the odds the other is also heads? 1/3. Okay I get this scenario. Now what if someone looks at the coins and says one coin is heads, and that very same coin was minted in 1993? Is the probability that the other coin is heads still 1/3? Excellent fucking question. IMO, it's still 1/3rd. 
This guy would say 1/2, probably. I don't see how it's any different from the question in the fpp. posted by empath at 8:20 PM on May 25, 2010 Actually, gubo has 100% convinced me that the answer in the article is wrong wrong wrong. posted by empath at 8:21 PM on May 25, 2010 The problem is "What is the probability that I have two boys" is a bad question. If you interpret it as meaning "what is the chance that my other child is a boy", then it's 50%. But if you interpret it as "of the people who have two children and at least one is a boy, what percentage have two boys", it's 33.3%. posted by 445supermag at 8:24 PM on May 25, 2010 445, it's the same question. You are reading the question as: "this child is a boy, what are the odds that child is a boy", which is 50%. But he is not specifying this or that child. posted by empath at 8:26 PM on May 25, 2010 If you are one of 56 boys from two kid families, you have a 33% chance of having a brother and a 66% chance of having a sister. Of you and your friends, 28 of them will be in 2 boy families and 28 of them will be in Boy-girl families. However there will be 14 two boy families and 28 boy-girl families. 1/7 of the boys will be born on tuesday, leaving 8. 4 of the two boy families will have boys born on tuesday. 4 of the boy-girl families will have boys born on tuesday. So if you are a boy born on tuesday, you will have about a 50% chance of being in a two boy family. posted by spaceviking at 8:32 PM on May 25, 2010 I would like one of the 13/27th'ers to answer Gubo's question and explain what the difference is. To restate: I have flipped two coins and placed them under a box in front of me. I look in the box, and tell you that one of the coins is heads and it was minted in 1993. What are the odds that both are heads? And actually, I'll add another one. I have flipped two coins and hid them under a box. I look under the box and tell you that one of the coins is heads and was minted in a year ending in 3.
What are the odds that both coins are heads? posted by empath at 8:34 PM on May 25, 2010 [1 favorite] I know that the simple case is 1/3. spaceviking, here is why I am not convinced by your appealing logic: you say: 4 of the two boy families will have boys born on tuesday. 4 of the boy-girl families will have boys born on tuesday. and conclude: So if you are a boy born on tuesday, you will have about a 50% chance of being in a two boy family. yet you also say: 28 of them will be in 2 boy families and 28 of them will be in Boy-girl families. but fail to conclude from here that if you are a boy, you will have about a 50% chance of being in a two boy family. (no day-of-week mention). In our problem, we'd like an answer of 1/3 here. So, your chain of reasoning doesn't apply. posted by milestogo at 8:44 PM on May 25, 2010 Let's say you flip 112 different coins in pairs so that there are 56 pairs of two coins. 14 times you will have two heads, 14 times you will have two tails, 28 times you will have one head and one tail. Let's say that 1/7 of coins are minted in Denver (to make it more plausible). That means that of all of the heads that were flipped (56 in total) 8 of them were minted in Denver. If you flipped two heads you have 28 total heads with 4 of them minted in Denver. If you flipped a head and a tail you have 28 total heads with 4 of them minted in Denver. However, it is much more likely to have flipped two heads AND had one of them minted in Denver because there were 4 minted in Denver out of 14 pairs. Likewise you will have 4 heads minted in Denver out of 28 heads/tails pairs. posted by spaceviking at 8:48 PM on May 25, 2010 Yeah it seems to come down to the wording. If you say, "Take all sets of two coins where one coin is heads-up and also has quality X (where the general distribution of X is known). What fraction of these sets have two heads up?" then you move away from one-third territory, no? But the way empath and I have framed it, one-third seems to be the answer.
posted by gubo at 8:53 PM on May 25, 2010 I'm skipping the discussion to put in my NO MATH answer and let the hive mind check my work ;) Please PM me if I don't respond to the thread. So I went with 1/2 at first too, because I was thinking of the "Gambler's Fallacy" problem: all the previous information (order of kids, day of week) is irrelevant. But there are two scenarios being worked with here: S1: "my first child is a boy, what is the probability that my second child is also a boy?" (A: 1/2) S2: "one of my children is a boy, what is the probability that they both are boys" (A: 1/3) The difference between S1 and S2 seems nit-picky or semantics until you realize the situations in which you would say S1 vs S2. Let's say my first child was a boy... and my wife is pregnant but I don't know what my second child will be yet. I would HAVE to say something like S1 and get the answer of 50% that my future child will be a boy. The only tricky bit is the first clause of the sentence ("my first child is a ____") is actually irrelevant; all I am really saying is "what is the probability that my NEXT child will be a boy?" which (if gender assignment is like coin flips*) is always 50%. Even if I have had 100 previous children that were all girls, the chance that my next child might be a boy is still 50%. The tricky part is you could still ask S1 while knowing the outcome and the true state of affairs (we painted the room blue already, let's say) but the answer is still 50% to someone who has not seen the room or the baby and is thus answering a general probability question; say I asked them to guess what the sex of my new baby was. In their mind they would have a 50% chance of being right; the fact of my first child being a boy doesn't matter. But to get back to the question... let's look at S2: It still may seem identical to S1 so let's extend it to show how it tells us more.
Let's take a species of S2, morphed a bit, and call it S2b: "9 of my children are boys, what is the probability that all 10 are boys?" Same question as S2, different information. In order for this to make any sense you must assume that the person saying S2b (or S2 for that matter) is not trying to refer to a future event, i.e. not sarcastically saying, while his wife is still in labor, "I've had 9 boys, what's the chance that I might get a girl?" That is really just S1 all over again (the gambler's dilemma). NO NO, now the 9 and the 1 unknown are linked information. Hypothetically we can go to the census bureau (of our fantasy world where people routinely have 10 kids... or it may be better to think about this as coin flips at this point) and look at the breakdown and see "of all the people who had 10 kids of which 9 are boys, it is more likely that at least one of them was a girl than that all were boys" (10x more likely; or that case is 1 in 11). You could do a combination tree to see this for yourself. Incidentally the logic tree is what takes the place of our census bureau. But for a logic tree the order matters because (taking the order of children 1st - 10th)
b b b b b b b b b b
is only one outcome where the other child is a boy, whereas
G b b b b b b b b b
b G b b b b b b b b
b b G b b b b b b b
b b b G b b b b b b
b b b b G b b b b b
b b b b b G b b b b
b b b b b b G b b b
b b b b b b b G b b
b b b b b b b b G b
b b b b b b b b b G
is 10 outcomes where at least one is a girl. (All above is from the much greater list of possible outcomes that includes G G G G G G G G G G, b G b b b b b G b b, and every other iteration, which I won't list here because they do not fit the "have 9 boys" criteria.) But you see, that order matters. Otherwise you are asking S1b: "my first 9 children are boys, what is the probability that my 10th child is also a boy?" for which the graph comparison is simple:
b b b b b b b b b ?
(question S1b)
b b b b b b b b b G (outcome 1)
b b b b b b b b b b (outcome 2)
Answer = 50%
So, ya, order matters is what I am driving at, but it doesn't mean anything in relation to the question, just how the logic works out. To give a sort of betting example. You visit your friend at a reunion after years of not seeing him. You never really kept up with his Christmas cards, correspondence etc of all the kids names, who married who, what jobs they took, colleges, or even their sexes and ages, but you remember he had 10 kids (all adults now and so you cannot easily tell age, so you cannot guess who is oldest/youngest etc). He introduces you to his 9 grown sons sitting at a table. Then you note "hey, I thought you had 10 kids, I've only met 9" and he says "AHH, Pat is by the punch-bowl, go say hi". Well you see a man and a woman by the punch bowl. Who do you introduce yourself to? One is Pat, the other is a stranger. And you don't want to ask your "supposedly" good friend that you don't know the sex of one of his children... So you think through the birth announcements you've received over the years (and promptly thrown away without opening them). And you wish you could go through each of those unopened letters to see if one of them was pink or all of them were blue. Now, you think "If I only had those letters now...". You would open them one at a time until you saw either 1 that was pink (AHH beautiful Patricia, problem solved) or ALL of them were blue (ok, Patrick it is, mother must be Irish...). Notice, you would only have to find 1 pink notice before stopping... if the first one was pink you wouldn't even have to look at the other 9... you KNOW they are all guys. But if Pat is Patrick then you will only KNOW that he is so after opening ALL of those unopened letters. AH-HAH! well I might not have the letters with me, but while walking over to the punch bowl this little thought experiment has yielded something!
Just like you wouldn't bet that it is very likely I could flip a coin 10 times and get all heads... I can also wager that it isn't likely I would open all 10 letters and see blue each time... (GO breakdown of the wave function) Because opening those letters all those years ago vs opening them now is the same... to me now. Each is its own 50% chance. So with some confidence you walk over and say hi to the (beautiful by the way) woman... "You must be Patricia" you have ~91% chance of being right (Is my math right? I hesitate with percentages). As for the Days of the week... I'm not going to get into that, but again it is assuming that by saying that 1 was born on a Tuesday that you are similarly cutting down the lists of the combination/tree... but you can see how that would make a difference now in a similar way. *gender assignment is not 50/50, it is like 50/49/1 (intersex) but for purposes of mathemagical fun... let's forget about the hermaphrodites and b vs g disparity :) posted by DetonatedManiac at 8:59 PM on May 25, 2010 [1 favorite] Milestogo said: yet you also say: 28 of them will be in 2 boy families and 28 of them will be in Boy-girl families. but fail to conclude from here that if you are a boy, you will have about a 50% chance of being in a two boy family. (no day-of-week mention). In our problem, we'd like an answer of 1/3 here. So, your chain of reasoning doesn't apply. You are forgetting all of the girls. If you have 56 boys in two kid families you will also have 56 girls. This makes for 56 total families with 14 being two girls, 14 two boys, and 28 being boy girl. So you have a 25% chance of being in a two boy family and a 50% chance of being in a boy-girl family. But WAIT: you have a fucking point. If you are a boy you are more likely to be in a boy-boy family because there are more boys in those types of families. Ok... need to think... dammit, I thought I figured this out. posted by spaceviking at 8:59 PM on May 25, 2010 I was clearly wrong above.
I had it backwards about which probabilities were twice as likely. Sorry about that. The 1/3 answer with no additional information makes perfect sense. By revealing that one child is a boy, the possibility of two girls is removed, leaving three possibilities of which only one is another boy. That's pretty intuitive. Of people with two children who have at least one boy, the other child is a boy in 1/3 of cases. However, by identifying the day the child was born, we move away from talking about the general population of two child families and towards a specific child, and the chances for a specific child to be a boy is 1/2. The more specific (less probable) the additional information is, the closer to a probability of 1/2 we get. That makes sense to me, though it might still be the entirely wrong way to think about it. posted by Nothing at 9:12 PM on May 25, 2010 Okay. I finally got it to make sense in my mind, thanks mainly to chortly's chart, and I think I might be able to explain it in a way that makes it make sense to some of the people having trouble with it. Let's start with the basic problem, without the Tuesday. What's slipping some people up is that "one of them is a boy" can describe both BG and GB. This isn't about birth order, though birth order is an easy way to think about it. For people getting tripped up on birth order, try thinking of the order that they sit in in a family photo. The important thing is that you don't know which child the speaker is talking about. If you did, then you'd be determining the gender of only one child, and it would be 50/50. When you bring the Tuesday in, you're adding a layer of specificity about the kid that the speaker is talking about. It's still not 50/50, because you still don't know for certain that you know which child is being discussed (in other words, it might be two boys both born on Tuesday). Now, let's change the statement to, "One of them is a boy born on Christmas."
Imagine a chart like chortly's in which each quadrant is 365x365. Now there's much more specificity: because there's such a slim chance that both children are boys born on Christmas, the chance that they're both boys is very, very close to 50/50. Someone correct me if I'm wrong, but I think it's 729/1459, or .499657. It's not 50/50 because there's still that 1/1459 chance that both children are boys born on Christmas. The limit is 50/50; it'll never quite reach 50/50 unless you know which child the speaker is talking about. If the speaker says, "The older child is a boy," then you have a 50/50 chance that they're both boys. posted by roll truck roll at 9:13 PM on May 25, 2010 [1 favorite] Ok, I think I got it, and I haven't changed my mind again yet... You have to think in terms of families. If you are a boy there are 42 possible families you can be in, 14 of them are boy-boy and 28 of them are boy-girl. So perhaps the problem here is perspective. If you think in terms of the family then it is one way, but if you think in terms of the kid it's another. For example: I am a family with two kids and I have one boy -- what are the chances that I have two boys? Ans: 1/3 BUT: I am a boy, what are the chances that I will be in a family with two boys? Ans: 1/2 In the second case it is more like saying: I'm a family with two kids and the first kid was a boy -- what is the chance the second will be also a boy? posted by spaceviking at 9:15 PM on May 25, 2010 Now in the case of coins I think this is the way to think about it: You flip two coins and hide them in a box. You ask your friend: "If one of these is heads and is minted in 1983 what is the chance that the other will also be heads?" Ans: 1/2 Because it is a really low probability that it will be heads and minted in 1983 (you don't know before you start of course) it is equally as likely for it to occur with two heads as with heads-tails, even though heads-tails will happen more often.
posted by spaceviking at 9:23 PM on May 25, 2010 At the risk of being an ass... I think I buried the lead in my post. So let me re-post this part of my post above since I think it puts a nice verbal image of what happens (NO MATH): In the example below we are taking the question from the article: S2: "one of my children is a boy, what is the probability that they both are boys" (A: 1/3) and turning it into a more extreme example in order to illustrate the point S2b: "9 of my children are boys, what is the probability that all 10 are boys" (A: 1/11) check this please You visit your friend at a reunion after years of not seeing him. You never really kept up with his Christmas cards, correspondence etc of all the kids names, who married who, what jobs they took, colleges, or even their sexes and ages, but you remember he had 10 kids (all adults now and so you cannot easily tell age, so you cannot guess who is oldest/youngest etc). He introduces you to his 9 grown sons sitting at a table. Then you note: "hey, I thought you had 10 kids, I've only met 9" and he says "Pat is by the punch-bowl, go say hi". Well you see a man and a woman by the punch bowl. Who do you introduce yourself to? One is Pat, the other is a stranger. And you don't want to ask your "supposedly" good friend that you don't know the sex of one of his children... So you think through the birth announcements you've received over the years (and promptly thrown away without opening them). And you wish you could go through each of those unopened letters to see if one of them was pink or all of them were blue. Now, you think "If I only had those letters now...". You would open them one at a time until you saw either 1 that was pink (AHH beautiful Patricia, problem solved) or ALL of them were blue (ok, Patrick it is, mother must be Irish...). Notice, you would only have to find 1 pink notice before stopping... if the first one was pink you wouldn't even have to look at the other 9... you KNOW they are all guys.
But if Pat is Patrick then you will only KNOW that he is so after opening ALL of those unopened letters. AH-HAH! well I might not have the letters with me, but while walking over to the punch bowl this little thought experiment has yielded something! Just like you wouldn't bet that it is very likely I could flip a coin 10 times and get all heads... I can also wager that it isn't likely I would open all 10 letters and see blue each time... (GO breakdown of the wave function) Because opening those letters all those years ago vs opening them now is the same... to me now. Each is its own 50% chance. So with some confidence you walk over and say hi to the (beautiful by the way) woman... "You must be Patricia, your father has written so much about you" you have ~91% chance of being right (Is my math right? I hesitate with percentages). As for the Days of the week... I'm not going to get into that, but again it is assuming that by saying that 1 was born on a Tuesday that you are similarly cutting down the lists of the combination/tree... but you can see how that would make a difference now in a similar way. posted by DetonatedManiac at 9:26 PM on May 25, 2010 How does the probability change if the boy has red hair? And received three sets of shoes as a birth present? posted by vertriebskonzept at 9:32 PM on May 25, 2010 The born on tuesday thing seems to me to be a red herring. His statement did not exclude the other child being born on tuesday, he just mentioned that one was. If he had said one child was born on tuesday, the other not, or something to that effect, you could whittle down the probability tree; otherwise it's a dead end and besides the point. Gubo has made it very clear in his coin example; it would be folly to suggest the date of one coin affects the head or tailness of another. posted by ExitPursuedByBear at 9:35 PM on May 25, 2010 I'm going to go with empath's version of gubo's question, because it's a little easier to work with: "I have flipped two coins and hid them under a box.
I look under the box and tell you that one of the coins is heads and was minted in a year ending in 3. What are the odds that both coins are heads?" I'm going to start from the premise that if we ignore the year part, the answer is 1/3. That's already been hashed out so many times above, and it doesn't seem worth doing again. Assuming that a coin has a 1/10 chance of being made in a year ending in 3, we imagine another chart like this one, but in which each quadrant is 10x10. The chance of both coins being heads is 19/39. The reason why it's not a straight 50/50 is because there's still a 1/39 chance that both coins are showing heads and made in a year ending in 3. Let's change the question. Let's say you look at one of the coins under the box and tell me, "It's heads and oh my gosh it's a 1920 Franklin McKrugerrand, there's only one of these in the world!" Now, I can safely say that the chances of both coins being heads is 50/50. To put it in completely unmathy terms, the more information I have about the coin, the more I know which one you're talking about. posted by roll truck roll at 9:43 PM on May 25, 2010 [3 favorites] Okay, I've been reading the comments above, and as my way of thinking things through, I have a few scenarios here, which may or may not have faulty reasoning: 1. Someone is looking in a room and tells you there are two children in it. They tell you that one of the children is a girl, and then ask you to guess if the other child is a boy or a girl. Suppose this exercise is repeated many times, with new pairs of children each time but always with one of them being a girl. You always answer "girl." How often will you be right? The answer is 1/3, I think. 2. Now a slightly different scenario. Someone is looking in a room and tells you there are two children in it.
They tell you that one of the children is a girl, and they give some other fact about that girl, height or weight or day of birth or whathaveyou, and then ask you to guess if the other child is a boy or girl. Suppose this exercise is repeated many times. New pairs of children are present each time but always with one of them being a girl. Furthermore, the fact given about the selected girl changes each time: sometimes the day of birth is given, sometimes the age, whathaveyou. You always answer that the other child is a girl. How often will you be right? It's still 1/3, right? 3. Now here comes the scenario I think the fpp question is aiming at. Someone is looking in a room and tells you there are two children in it. They tell you that one of the children is a girl, and that the day of birth of that girl is Tuesday, and then ask you to guess if the other child is a boy or girl. Suppose this exercise is repeated many times with different pairs of children, always with one of the children being a girl born on Tuesday. You always answer that the other child is a girl. How often will you be right? Here's where I can visualize the answer being 13/27. Am I getting this right? posted by gubo at 9:48 PM on May 25, 2010 [1 favorite] Okay, I generated a population of 1100 pairs of kids twice with random days of weeks, and after filtering out all the ones with no boys born on tuesday, I got 36% with 2 boys on the first run through and 32.5% on the second run through. I am 100% positive the answer is 1/3, and this guy is completely full of shit. If anyone else wants to program it and run it, I'd be happy to see the numbers. I just did it with an excel spreadsheet. posted by empath at 9:55 PM on May 25, 2010 [1 favorite] To put it in completely unmathy terms, the more information I have about the coin, the more I know which one you're talking about. But you don't have any information about the coin unless the information tells you which coin is which.
For example, if you have 1 quarter and one dime, and the person tells you the dime is heads, then yes, it's 50% that the quarter is heads. Someone else run a simulation, please, that's really the only way to resolve this. posted by empath at 9:57 PM on May 25, 2010 empath, that's exactly the point of my made-up "McKrugerrand." In that case, there's only one in the world, so yeah, 1/2. : "Someone else run a simulation, please, that's really the only way to resolve this." Even I know the difference between simulations and proofs. posted by roll truck roll at 9:59 PM on May 25, 2010 If your theory doesn't match results, your theory is wrong. posted by empath at 10:06 PM on May 25, 2010 Here are the results. That's every possible combination of two children born on two days of the week. What's the difference between your model and this one? posted by roll truck roll at 10:13 PM on May 25, 2010 When I'm confused, I just ask the basic Unix commands: > echo {b,g}{M,T,W,t,F,S,s}{b,g}{M,T,W,t,F,S,s} | fold -w 5 | wc -l There are 196 possible configurations of two genders and seven weekdays for two children, which we assume are equally likely in the general population. > echo {b,g}{M,T,W,t,F,S,s}{b,g}{M,T,W,t,F,S,s} | fold -w 5 | grep bT | wc -l Of those, only 27 have at least one boy born on a Tuesday; we know we're talking to a parent in that group. > echo {b,g}{M,T,W,t,F,S,s}{b,g}{M,T,W,t,F,S,s} | fold -w 5 | grep bT | grep b.b. | wc -l And of those 27, 13 have two boys. posted by nicwolff at 10:26 PM on May 25, 2010 [11 favorites] The problem when the person is giving the information, as opposed to being asked the question, is that when you have a mixed pair the person could tell you about the boy or the girl, which halves the weight of the cases where there is a boy-girl pair and you are being told about the boy.
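That effect of freely volunteered information can be simulated. A quick Python sketch (mine, not from the thread), assuming the parent of a mixed pair mentions either child with equal probability:

```python
import random

# Sketch: parent of two random-gender kids mentions one child at random.
# Conditioning on "parent mentioned a boy" gives about 1/2 two-boy families,
# not the 1/3 you get by conditioning on "has at least one boy".
random.seed(1)
told_boy = both_boys = 0
for _ in range(200_000):
    kids = [random.choice("BG"), random.choice("BG")]
    mentioned = random.choice(kids)   # no bidding: parent just mentions one
    if mentioned == "B":
        told_boy += 1
        if kids == ["B", "B"]:
            both_boys += 1
print(both_boys / told_boy)  # hovers around 0.5
```

The mixed pairs each lose half their weight when the parent can mention either child, which is where the 1/2 comes from.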
There is a big difference between observing whether there is at least one boy, and observing someone telling you there is at least one boy with no bidding to do so. The first case has the four options outlined by Rhomboid earlier; the latter has six:
1. eldest boy, youngest girl, told about a boy
2. eldest boy, youngest girl, told about a girl
3. eldest girl, youngest boy, told about a girl
4. eldest girl, youngest boy, told about a boy
5. eldest girl, youngest girl, told about a girl
6. eldest boy, youngest boy, told about a boy
But they are not all of equal probability... numbers 5 and 6 each have a probability of 1/4, since there is a 1/4 chance of having two girls overall and a 1/4 chance of having two boys overall. Numbers 1 & 2 must add up to a total of 1/4, same with 3 & 4, as having a boy first and then a girl, or a girl first and then a boy, each have a 1/4 probability as well. So if we assume that there is an equal chance of having a boy or girl, and also an equal chance of the parent talking about a boy as a girl when there is a choice, the options and probabilities are:
1. eldest boy, youngest girl, told about a boy (1/8)
4. eldest girl, youngest boy, told about a boy (1/8)
6. eldest boy, youngest boy, told about a boy (1/4)
So the boy/boy pair has half the share of the remaining probabilities. That means a 50% chance that the second child is also a boy. If the parent had a psychology that strongly favored talking about a boy, the probability would indeed be 1/3 that the second child was a boy. However, if the parent was inclined to talk about a girl over a boy, the probability of the second child being a boy would be 100%--if that parent had any girls at all, they would have talked about one. posted by Zalzidrax at 10:28 PM on May 25, 2010 A crazy uncle of mine used to pose the following brainteaser to me when I was 7 or 8: "My Aunt Sue, she died last night, she died last night. Did she die?"
My answer was almost always wrong, no matter which one I picked, and he took great pleasure in never telling me. While he was also the one who taught me the classic one-sentence mysteries, such as George and Martha lying dead on the floor with broken glass and water around them, or the man who never would have died if he'd seen the sawdust, he never let me get anywhere near the secret behind his Aunt Sue one. So I came to the conclusion that he was merely messing with my head just to see what answers I could come up with and the explanations I could provide. I feel this brainteaser is presented as kind of the same thing, only there's no Aunt Sue. posted by Spatch at 10:46 PM on May 25, 2010 This problem blew my mind. It is so counter-intuitive. posted by aesacus at 10:47 PM on May 25, 2010 Sorry, but I have to stick my oar in, as there are far too many people accepting that the answer in the simple case is 1/3. In 100 cases of 2-child families there will be
BB 25
BG 25
GB 25
GG 25
But the number of boys in each family would be
BB 50
BG 25
GB 25
GG 0
So there are 50 boys with brothers and 50 with sisters, hence it's 50-50. posted by jamespake at 11:04 PM on May 25, 2010 So there are 50 boys with brothers and 50 with sisters hence it's 50-50. But the boys are not evenly distributed over the four possible family configurations, and the question is about family configurations (i.e. families as a whole), not about individual boys. The answer really is 1/3. posted by Rhomboid at 11:22 PM on May 25, 2010 Okay, I generated a population of 1100 pairs of kids twice with random days of weeks, and after filtering out all the ones with no boys born on tuesday, I got 36% with 2 boys on the first run through and 32.5% on the second run through. I am 100% positive the answer is 1/3, and this guy is completely full of shit. If anyone else wants to program it and run it, I'd be happy to see the numbers. I just did it with an excel spreadsheet. (OK, I can't resist a programming task.) 1100 is way too small of a sample size for this. From an initial sample size of 10000 pairs of kids, I'm getting values hovering around 48%. (13/27 = 0.481481481) I did this in PHP, source is . (I threw it together very quickly, forgive the bad style. Hopefully the variables and logic are clear.) On preview: jamespake, how many of those families satisfy the condition "one or both children are boys?" Of that subset, how many are both boys? posted by kmz at 11:22 PM on May 25, 2010 [1 favorite] Empath, never use Excel for anything when Perl is available:
> cat | perl -l
for (1..100000000) {
    $boy1 = rand() < 1/2; $tues1 = rand() < 1/7;
    $boy2 = rand() < 1/2; $tues2 = rand() < 1/7;
    if ( $boy1 && $tues1 or $boy2 && $tues2 ) {
        $boy_tues++;
        if ( $boy1 && $boy2 ) { $boy_boy++ }
    }
}
print $boy_boy / $boy_tues;
So, how often, among those of one hundred million families that have a boy born on Tuesday, does he have a brother? 13/27 = .481481481… and we're close enough to that.
(In one minute and two seconds, using 1 core of a new 2.66 GHz MacBook Pro, if anyone cares.) posted by nicwolff at 11:32 PM on May 25, 2010 [1 favorite] jamespake, you're sampling the boys; we're sampling the families. Of course ½ of boys with one sibling have a brother. But ⅓ of families with two kids and at least one boy have two boys. For that matter, ½ of boys born on Tuesday with one sibling have a brother. But in families with two kids, one a boy born on Tuesday, 13/27 have two boys. posted by nicwolff at 11:46 PM on May 25, 2010 [1 favorite] OK, that's why I shouldn't code at 1AM. My code is crap compared to yours, nicwolff. posted by kmz at 11:49 PM on May 25, 2010 As long as you have 47 GB of free RAM to hold 7 × 10⁸ PHP hash entries at 68 bytes each, your approach would work too, kmz ;) posted by nicwolff at 12:32 AM on May 26, 2010 The question is ambiguous as to how the boy has been sampled. If we have a list of all the two-child families that contain at least one boy, then we can either pick a family at random, in which case the odds of getting 2 boys are 1 in 3, or we can pick a boy at random, in which case the odds are 1 in 2. This is because although there are half as many 2-boy configurations, they contain exactly twice as many boys. I'd say the 50-50 case is more natural - for example, if you meet a boy at random who tells you that he has one sibling, then you would be wrong to assume that there is a 2/3 chance of it being a sister. posted by jamespake at 12:42 AM on May 26, 2010 Yeah, I just rewrote it in python doing 10000 at a time and I'm steadily getting 33% without knowing about tuesday and 48% with. posted by empath at 1:15 AM on May 26, 2010 I still don't understand it but the answer is right.
posted by empath at 1:18 AM on May 26, 2010 Yeah, these are the numbers I'm getting, I did it with arrays, too, because I'm a terrible programmer:
families with at least one boy: 7467
families with at least one boy born on tuesday: 1361
families with two boys: 2481
families with two boys, at least one of them born on tuesday: 640
percentage of 2 boy families if at least one boy: 33.226195259140219
percentage of 2 boy families if at least one boy born on tuesday: 47.024246877296108
posted by empath at 1:24 AM on May 26, 2010 I'm still hung up on the syntax. Sorry for returning so late to the thread. I'm against trying to recontextualize the question. I can see how the riddle works, but it is phrased too ambiguously for me to parse it as such. Take each sentence in order to create the formula. "I have 2 children." This defines the given set as 2 individuals, each with a possibility of being either a boy or a girl. "One is a boy born on a Tuesday." Ok, so that eliminates the variable of one of the children being a boy or a girl. "What is the probability I have two boys?" Ok, here's where most of us lose it, since we're not mathematicians. Well, since we know you have one boy, then the only variable we are left to concern ourselves with is the gender of the other child. Since the gender of one of the children is known, we begin calculating it as a simple equation. The unknown child's gender is defined already as a 50/50 probability, since it can only be a boy or a girl. Thus, the reasoning is that since you already know that one is a boy, it is removed from the set, and the only variable that needs to be defined is the gender of one child, not two. Or we can be really silly and say that the probability is really strange because of the variable of hermaphrodites, though the rarity of this mutation in the population reduces the variable to within .000000000000e1 of 50/50.
Which is still a pretty neat demonstration of the principle that the original solution provided by the article tries to make. And it still completely ignores the Tuesday part of the riddle. I will say, I understand what most people are trying to say about the solution involving creating a set of 2, with 4 possible combinations (BB, BG, GB, GG) and then the added thing with the Tuesday variable. This also explains why trying to "get" this type of math riddle is hard for many people, since the interpretation is ambiguous. If someone can try and explain why you can't eliminate the child whose gender has been revealed as a variable, I'd be happy to hear it. posted by daq at 1:30 AM on May 26, 2010 I'm supposed to be working right now, so sorry if this has been said before but... I disagree with the answer 1/3, because I disagree with the question. The question states "I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?" So, the one child is 100%. We are talking about the probability of the second child being a boy. In which case, the chance is >50% (although not by much), because the chance of XX/XY on a lone or non-identical child is 50:50, to which we need to add the chance of an identical twin. The 1/3 makes no sense at all, given the statement that there is already one child of known gender. It's like those times someone flips a coin ten times and gets heads every time - the chance of the next flip being heads is 50%... posted by sodium lights the horizon at 1:49 AM on May 26, 2010 Prepare to have your mind blown: Consider a game where I flip (in secret) two coins. Your objective is to guess how many heads I have flipped. If you know that I have at least one head, then you've narrowed down my possibilities to HH, HT, and TH, so if you guess 1H1T you'll be right 2/3 of the time. Likewise, if you know that I have at least one tail, by the same argument you'll be right 2/3 of the time if you guess 1H1T.
Now, I flip the coins, pick one (at random) and announce that coin. How frequently can you correctly guess the composition of my flip? According to the above, if I announce Heads (so you know I have at least one head), you should pick 1H1T, and succeed 2/3 of the time. Similarly, if I announce Tails, you know I have at least one Tail, and so you should (again!) pick 1H1T, and succeed 2/3 of the time. This leads to the conclusion that regardless of what I announce, you should guess 1H1T, and that you'll be right 2/3 of the time. But this is clearly a contradiction of the assumption that I am flipping independent fair coins, where I should only get 1H1T 1/2 of the time, not 2/3. posted by Pyry at 1:50 AM on May 26, 2010 [2 favorites] daq, because they haven't mentioned a specific child. You know one is a boy. Either of the two (the younger or the older or both) could be a boy, so you have 3 possibilities left: they are both boys, the younger is a girl, or the older is a girl. Let's say you don't know that there is at least one boy. You then find out the older child is a girl. The younger child still has a 50/50 chance of being a boy. Okay, start again now: you are first told that at least one child is a boy. Then let's say you find out that the older child is a girl. What are the odds then that the younger child is a boy? Now do you see why there is a difference? posted by empath at 1:52 AM on May 26, 2010 Man, I used to be so much better at this sort of stuff when I taught the LSAT. Anyway, it seems to me that this is all about how you organize information. posted by Saxon Kane at 2:01 AM on May 26, 2010 This leads to the conclusion that regardless of what I announce, you should guess 1H1T, and that you'll be right 2/3 of the time. If your options are 1H1T, 2H or 2T, then this is obviously correct. 1H1T is twice as likely as the other two options. posted by empath at 2:23 AM on May 26, 2010 Here's my attempt at explaining.
I'll start by answering the question about the coin that comes down "Heads, minted in 1993" or what have you, and progress from there. That *does* change the odds, as you will see... Two coins are flipped. The first one comes up heads. What are the chances that the other is heads? There are two cases: H/H, H/T. In only one of the two cases is the second coin heads. 1 in 2. You’ve specified the first one, and the second one is just random chance. The same two coins are flipped. Now all you know is that one of them is heads – you don’t know which. What are the chances that the other is heads? There are three possible cases: H/H, H/T, T/H. In only one of those cases are they both heads. 1 in 3. You haven’t specified which is which, so the odds changed. This is not a verbal trick. If you flipped the coins a million times, and separated out all the instances where the first coin was heads, you would find that both coins were heads half the time. If you flipped the coins a million times, and separated out all the instances where at least one coin, any coin, was heads, you would find both coins were heads a third of the time. I'll explain why this is the case in a bit. But first. OK. Now. Let’s see what happens when we add a bit of interesting information. The two coins we’re flipping were both minted in the 1990s. One of them comes up heads/1993 coin. What are the chances that the other is heads?
Stay with me, this isn't as complex as it looks:
- There are 10 cases where H1993 is the first coin and the second is tails: (You don’t need to wade through it, but it’s: H1993 T1990, H1993 T1991, H1993 T1992, H1993 T1993, H1993 T1994, H1993 T1995, H1993 T1996, H1993 T1997, H1993 T1998, H1993 T1999)
- There are 10 cases where H1993 is the second coin and the first one is tails: (T1990 H1993, T1991 H1993, T1992 H1993, T1993 H1993, T1994 H1993, T1995 H1993, T1996 H1993, T1997 H1993, T1998 H1993, T1999 H1993)
- There are 10 cases where H1993 is the first coin and the second one is heads: (H1993 H1990, H1993 H1991, H1993 H1992, H1993 H1993, H1993 H1994, H1993 H1995, H1993 H1996, H1993 H1997, H1993 H1998, H1993 H1999)
- There are only 9 more cases where H1993 is the second coin and the first one is heads (since we already counted the case where both coins were H1993): (H1990 H1993, H1991 H1993, H1992 H1993, H1994 H1993, H1995 H1993, H1996 H1993, H1997 H1993, H1998 H1993, H1999 H1993)
The last 19 cases are all H/H situations. There are 39 total cases. So the answer is 19 in 39 -- MUCH closer to 1 in 2 than the case where you only know that one is heads but don't know the date! Again, this is NOT a verbal trick – IF YOU FLIP RANDOM COINS MINTED IN THE 1990’S A MILLION TIMES AND SEPARATE OUT ALL CASES WHERE ONE IS HEADS/1993, THEN 19 OUT OF 39 TIMES, BOTH WILL BE HEADS. Why is this? You’ve come very close to, but not quite reached, a “the first coin is heads, what’s the second coin?” scenario. One coin is close to being specified. It’s “the 1993” coin. That’s close to being “the first” coin. If you knew only one of the coins was minted in 1993, it might as well BE “the first” coin. Then you’d be saying “the 1993 coin comes up heads, what’s the other?” 1 in 2 chance. The only reason it isn’t *quite* this is because THAT OTHER COIN MIGHT BE A 1993 COIN, TOO. Remember when we shaved that last set of numbers from 10 to 9? That’s what happened there.
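The 19-in-39 count above can be double-checked by brute force. A short Python sketch (mine, not from the thread) enumerating every ordered pair of 1990s coins:

```python
from itertools import product

# Sketch: every ordered pair of (face, mint year) coins, years 1990-1999.
coins = list(product("HT", range(1990, 2000)))
pairs = list(product(coins, coins))

# Condition: at least one coin is a heads-up 1993 coin.
cond = [p for p in pairs if ("H", 1993) in p]
both_heads = [p for p in cond if p[0][0] == "H" and p[1][0] == "H"]
print(len(both_heads), "in", len(cond))  # 19 in 39
```

Same four groups as the hand count: 10 + 10 + 10 + 9, with the double-H1993 pair counted once.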
A tiny bit of uncertainty kept one coin from being “the 1993” coin. Either coin *could* be 1993, after all, so you can’t be absolutely sure that one is “the 1993” coin. Now, in most of the cases, it is in fact “the 1993” coin. But there are a few cases where it isn’t, so instead of being 1 in 2, it’s 19 in 39 – close but not quite. Now, let’s take the case that was actually asked about. You flip two coins, not necessarily minted in the 1990s. One – you don’t know which – turns out to be a 1993 heads. What are the chances the other is heads? Really, really close to 1 in 2. If there are, say, a hundred years where coins were minted, then the odds of both coins being 1993 are really small. You’re really close to one being “the 1993” coin. This same logic applies to families. The “Tuesday” boy isn’t as specific as the “firstborn” boy, but it changes the odds. The “December 28” boy would change the odds more. The “December 28, 1987” boy would change the odds even more. The explanation I promised. Here’s a way of thinking about it which might not hurt your head. Take all families with two children. How many have two boys? About 1 in 4. Now just take all families with two children and at least one boy. How many have two boys? About 1 in 3. Why? While keeping the end you’re looking for the same (2 boys) you’ve reduced your set. You’ve shaved off all the families with two girls. Within the set that remains, the number of families with two boys grows proportionally. Now. Take all families with two children and at least one boy born on December 28, 1987. How many have two boys? You’re still looking for two boys, but now you’re looking at ONLY those families with one boy born on December 28, 1987. You’ve gotten rid of most of your families. Of the ones that remain, the odds that the one who wasn’t born on December 28, 1987 is a boy are pretty close to one in two. But not quite. Here's why: Take all families with two children and at least one boy born on a Tuesday. How many have two boys?
You’ve still shaved away a lot of families. You’re only looking at about 1/7 of your original number of families. You’ve changed the odds a lot. But how have you changed the odds? Well, for most of those families, the “Tuesday” child is only going to be one of the children. In that case, if the “Tuesday” child is male, the odds of the second one being male are 1 in 2. But ... in some of those families, both children were born on Tuesday. There is no “Tuesday” child, so the distinction has no meaning. For those families, the chances of both children being male are still 1 in 3! But that’s not a lot of families, so when you tally all the families together, it still gets pretty close to 1 in 2. 13 in 27, as it happens. posted by kyrademon at 2:25 AM on May 26, 2010 [3 favorites] Pyry - it's wrong to treat HH as one option. There are four options: HT, TH, the first H in HH, or the second H in HH. It's the same problem with the two brothers - the boy in the question could be either of the two. posted by jamespake at 2:39 AM on May 26, 2010 There is a difference between "what proportion of families with at least one boy have two boys" and "a family has announced that it has at least one boy, what is the probability it has two". The difference depends on what choices the announcing family had in making the announcement. For example, suppose a family has to announce that it either has at least one boy, or that it has at least one girl. If the family has one boy and one girl, it can announce either (assume that they pick one uniformly at random). Then the question, "what is the probability that a family that has announced it has at least one boy has two boys?" has an answer different from 1/3.
That is, the probability tree looks like this:
(1/4) BB -> announce Boy (1) -> BBaB (1/4)
(1/4) GG -> announce Girl (1) -> GGaG (1/4)
(1/4) BG -> announce Boy (1/2) -> BGaB (1/8)
(1/4) BG -> announce Girl (1/2) -> BGaG (1/8)
(1/4) GB -> announce Boy (1/2) -> GBaB (1/8)
(1/4) GB -> announce Girl (1/2) -> GBaG (1/8)
Then it's clear that P(BB | announce B) = (1/4) / (1/4 + 1/8 + 1/8) = 1/2, and not 1/3. If the family has to announce that it either has at least one boy, or that it has no boys, then the result is the expected 1/3:
(1/4) BB -> announce Boy (1) -> BBaB (1/4)
(1/4) GG -> announce -Boy (1) -> GGa-B (1/4)
(1/4) BG -> announce Boy (1) -> BGaB (1/4)
(1/4) GB -> announce Boy (1) -> GBaB (1/4)
P(BB | announce B) = (1/4) / (1/4 + 1/4 + 1/4) = 1/3. posted by Pyry at 2:48 AM on May 26, 2010 [1 favorite] Okay, here's another way of looking at it - if you meet someone who just tells you that they have two children, then the odds of them having two boys is 25% and the odds of them having one boy are 50%. But if they say "Their names are Peter and..." then the chance that the other is a boy is 50% - Peter could have a younger brother, an older brother, a younger sister or an older sister, i.e. there are 4 options, not 3 - Pb bP Pg gP. posted by jamespake at 2:54 AM on May 26, 2010 This puzzle always pissed me off. It's not really a math puzzle at all; the actual problem--"Given that at least one of two children is a boy, what is the probability that both children are boys?"--isn't very interesting if stated clearly. The only real puzzle here is figuring out how to convince people that the deliberately coy problem statement is unambiguous enough that they should've been able to figure out what the hell you were asking them. I mean, terse problem statements are nice, but if unpacking the statement is the whole goddamn puzzle, that's pretty sad. The Monty Hall problem is a much better puzzle than the two-boy one, despite having almost the same crux.
posted by equalpants at 3:18 AM on May 26, 2010 [5 favorites] Just to be clear, the answer to this riddle hinges on what you think the possible knowledge states are: If the three possible states are "we have no male children", "we have at least one male child, but none born on a Tuesday" and "we have at least one male child born on a Tuesday", then the answer is the given 13/27. If the states are "we have no male children", "we have a male child born on a Monday", "we have a male child born on Tuesday", "we have... Wednesday", and so on, then the answer is 1/3. If the states are "we have a [gender] born on [weekday]" then the answer is 1/2. posted by Pyry at 3:49 AM on May 26, 2010 [1 favorite] There's a little confusion going on around here about something, I think. Every time a child is born, there is a (roughly speaking) 50% chance that it will be a boy or a girl. So, naturally, the tendency is to assume that, no matter what other outside information is given, the chance of any child being a boy or a girl is 50%. But the thing is, precisely *because* the chance of any individual child being born a boy or a girl is 50%, that number changes when you look at the probability across different, limited sets of families, or talk about how many families overall end up having four boys in a row, or what have you. This is pretty obvious, when you think about it. Sure, there's a 50% chance of having a boy each time, and THAT MEANS that your chance of having four boys in a row is less than 50%. Likewise, when you disregard all families that have only girls, the percentages of the remaining families shift around. But it means you can cut up the sets in tricky, nonobvious ways. For example, you'd probably think that the distribution of 2-child families with at least one boy born on a Tuesday looks about the same as 2-child families with at least one boy. It's not, quite. Out of 27 million 2-child families with at least one boy, about 9 million will have two boys.
Out of 27 million 2-child families with at least one boy born on a Tuesday, about 13 million will have two boys. That seem weird? Think about it this way. Start with 147 million 2-child families with at least one boy. About a third, 49 million, will have two boys. Now get rid of all the Sunday-Sunday families. That's about 3 million families gone, a third of which, 1 million, will have had two boys. Do the same for all the Sunday-Mondays, Monday-Wednesdays, and so on. Everything that doesn't have a Tuesday in it. Now we've gotten rid of a good chunk; we're down to 39 million families, each with at least one child, boy or girl, born on a Tuesday. About a third, 13 million, will be 2-boy families. Here's where it gets interesting. Get rid of the families with a girl born on Tuesday and a boy born on Sunday. We're not interested in them. We specified the boy was born on Tuesday. 2 million families off the list -- none of them 2-boy families. Keep going. Girl Tuesday and Boy Monday. Girl Tuesday and Boy Wednesday. And so on. That's 10 million more families gone, 12 million in all. Not a single one was a two-boy family. So suddenly, we're down to ... 27 million families, 13 million of which are 2-boy families! Basically, by specifying that at least one boy was born on a Tuesday, we took the case of any-child-born-on-a-Tuesday (39M families, 13M 2-boy families), and got rid of a whole bunch of boy-girl families! No wonder the odds look different. Sure, it's a little tricksy, but it's essentially the same as looking at the difference between the odds of having two boys in a row if you look at all families (25%), only families with at least one boy (33%), or only families where the first child was a boy (50%). This is *because* the chance of any individual child being born a boy or girl is 50%, not in spite of it. It's moving the statistical frame around. posted by kyrademon at 4:08 AM on May 26, 2010 [2 favorites] So, this is how mathematicians masturbate?
I much prefer the old-fashioned method... posted by Thorzdad at 4:40 AM on May 26, 2010 "Oh hello there, math -- a.k.a. why I went to law school. posted by the littlest brussels sprout at 5:21 AM on May 26" My brother! (or sister) - I calculate that at least 50% of us were in law school because we read the requirements, math wasn't one, so we sighed with relief and applied to take the LSAT. Like others who have commented, I see how the numbers work, but it is definitely counter-intuitive. To me, a 50-50 chance looks like the only reality. But I guess that leaves out hermaphrodites, and I understand that they occur more frequently than one might think. ;) posted by Tena at 6:25 AM on May 26, 2010 I'm curious whether anybody here would have assumed the odds are 1/2 (rather than 1/3) if the detail about Tuesday was not provided. Because if the day of the week was unstated, the odds are 1/3. The funny thing is that the people insisting the odds are 1/2 (rather than 1/3) are actually closer to the correct 13/27 answer, although their reasoning is entirely incorrect. Incidentally, three different commenters have provided graphic illustrations of all possible boy, girl, day combinations - you can work it out yourself by counting up the colored squares. If you continue to insist that the odds are 1/2, you may be interested in Gene Ray's theories as well. posted by ardgedee at 6:35 AM on May 26, 2010 Sooooo......... 50%, then? posted by grubi at 6:57 AM on May 26, 2010 Tena, if you do in fact find that the only reality is "whatever my intuition indicates, despite seeing how the numbers work", then a really excellent career in US politics is awaiting you if you should tire of whatever you're doing now. posted by Wolfdog at 7:14 AM on May 26, 2010 The problem is: how does Tuesday change the odds when it is arbitrary? As someone pointed out above, every child is born on some day of the week.
So why can't we say the odds are always 13/27 without the knowledge of which day of the week? I.e., someone poses the question without telling us the day of the week, we do the calculation the tuesday way and get the 13/27 answer and then we ask the day of the week the child was born: if it's not tuesday, we don't have to go back and change the calculation, since the numbers are the same for any day of the week. Or what if the questioner says at the end "I lied, the boy was actually born on Friday", again, no change in the calculation. The answer is in what question you are really asking; the 1/2, 1/3 and the 13/27 are all true answers to only very subtly different questions. 13/27 doesn't seem to make sense, unless you pose the question "what is the fraction of 2 children families, who have at least one boy born on a tuesday" We are suspicious of an answer that doesn't have a convenient way of adding up to 100%, but, even though there are 7 days, there are overlaps in the sets for each day. posted by 445supermag at 7:16 AM on May 26, 2010 [3 favorites] In fact, it is probably easy to construct a casino game based on the above. Then people can continue to argue that it is semantics - that that other child is equally likely to be a boy or girl - as they lose the shirt off their back. I won ten bucks at a party once with exactly this trick. There was a small bet on each pair of flips, and the guy just wouldn't give it up. In fact, this would be a great game for a casino. You break even on people who know the trick, and make a pile on the people who don't. You could even push the payout a bit to make it look really attractive to the common rube... posted by kaibutsu at 7:23 AM on May 26, 2010 [1 favorite] you have two kids and one is a boy - what is the chance that both are boys? The probability is 1/3: GB, BG, BB -- I think everyone agrees with that.
So here's my issue -- which always confuses me and which has a simple yes or no answer that I'm sure a smart MeFite can provide. It's a semantic question, not a mathematical one: Is there a difference between the following two questions: 1. You have two kids and one is a boy -- what is the chance that both are boys? 2. You have two kids and one is a boy -- what is the chance that the other is a boy? I read those two questions differently. The first (to me) is obviously 1/3, the second is obviously 1/2, because they seem to be asking for different things. I freely admit that my mind doesn't work well on these problems, so maybe I'm missing something. posted by The Bellman at 7:36 AM on May 26, 2010 Oh hello there, math -- a.k.a. why I went to law school. Law school is why I did an undergrad in math. I kid because I love. posted by Lemurrhea at 7:37 AM on May 26, 2010 Bellman: The difference is with that word "other". It is ill-defined. Clearly, in a boy-girl family, the girl is the "other" child. However, in a family with two boys, John and Jack, which one is the "other"? This ambiguity is deeply tied to the reasons why it is 1/3 and not 1/2. The more precise way to ask the question is the first. posted by milestogo at 7:41 AM on May 26, 2010 Bellman, I would read the two questions the same. There's no difference in information requested or given. You know that one is a boy, and you want to know if the other is a boy. If the other is a boy, it's necessarily true that both are boys. posted by Lemurrhea at 7:41 AM on May 26, 2010 Why are the days of the week equally weighted? In North America (if not most of the first world) the weekend days see much fewer births.
Here are some stats I found for the US in 2005 Sunday 7,374 Monday 11,704 Tuesday 13,169 Wednesday 13,038 Thursday 13,013 Friday 12,664 Saturday 8,459 posted by ODiV at 8:15 AM on May 26, 2010 Oh, those are average births per day of the week. posted by ODiV at 8:17 AM on May 26, 2010 The boy/girl distribution ain't exactly 50% either. Oh, well. posted by kyrademon at 8:20 AM on May 26, 2010 Bellman, I would read the two questions the same. There's no difference in information requested or given. You know that one is a boy, and you want to know if the other is a boy. If the other is a boy, it's necessarily true that both are boys. I understand that reading, but I read it differently. I read the second as asking for the separate, (non-conditional?) probability that the "other" child is a boy, which is (approximately) 50% -- the probability that any given child is any given gender. So that's where I always get confused -- semantics not math because of course the math is absolutely clear once I understand what's being asked. I think milestogo answered it for me -- if you're seeking the 1/3 answer, the more precise way to ask it is the first and the "riddle" obscures that by using the second. posted by The Bellman at 9:08 AM on May 26, 2010 Why are the days of the week equally weighted? In North America (if not most of the first world) the weekend days see much fewer births. Because the question doesn't really have anything at all to do with the probabilities of particular gender and births. I'm fairly certain that if your first child is one sex you are more likely to have another child of that sex. I know if you have two children of one sex you are much more likely (I've seen 80%) to have a third child of that same sex. Biology/psychology is messy, but not what this question is about. posted by OmieWise at 9:25 AM on May 26, 2010 You live in China so the probability is zero. HA! 
posted by GuyZero at 3:56 PM on May 25 posted by phatkitten at 9:31 AM on May 26, 2010 Biology/psychology is messy, but not what this question is about. Yeah, we're assuming perfectly spherical children in a vacuum... posted by kmz at 9:41 AM on May 26, 2010 I have two children. One is a boy. What're the odds on the other one? Fifty-fucking-fifty. There's only one answer, not math gymnastics. Isn't it the case that both Einstein and Paul Erdos backed the wrong answer to the Monty Hall problem? posted by StickyCarpet at 10:02 AM on May 26, 2010 Why is it assumed that the only allowable sexes are male and female? You could be born with both sexual characteristics or neither. Now what are the odds? posted by Kilovolt at 10:14 AM on May 26, 2010 Now what are the odds? The decision matrix that Upton O'Good laid out should really be a tree once you have more than two options. The simple tree assumes that each option has equal probability. If you want to have multiple options per decision node then you just annotate each edge with the probability and when you go to sum up the odds for each leaf in the tree you do it with the product of the probabilities of all the edges between the leaf and the root. I doubt it changes the overall odds much given that such births are extremely rare. posted by GuyZero at 10:19 AM on May 26, 2010 > Isn't it the case that both Einstein and Paul Erdos backed the wrong answer to the Monty Hall problem? God doesn't play dice with the menfolk of the universe on Tuesdays. posted by ardgedee at 10:26 AM on May 26, 2010 Isn't it the case that both Einstein and Paul Erdos backed the wrong answer to the Monty Hall problem? I dunno, but it is entirely possible. The thing that makes the Monty Hall problem unintuitive, and the reason people complain about the phrasing of this problem, is that in each there is a person in the equation who is actively trying to mess with you, something not encountered in physics (we hope). 
In the Monty Hall problem, the person is only ever showing you an empty door. In this case, apparently this guy would only ever tell you about a boy if he had at least one. Both of these alter the probability counter to what our intuition would tell us. Of course with the Monty Hall problem the door switching strategy has no downside, statistically, since even if the guy showing the door was doing it fairly and there were a possibility of showing the prize door, learning that one room is empty reduces it to a 50/50 split between the room you picked and the remaining room you didn't. posted by Zalzidrax at 10:51 AM on May 26, 2010 Because the question doesn't really have anything at all to do with the probabilities of particular gender and births. Well the 50-50 split for boy and girl is understandable because it's close enough. What I don't get is how people see "Tuesday" and then start automatically working out the probabilities using seven equally weighted days. That's like hearing "I was shot once in either my head, my leg, my torso, my foot, or my hand. What is the probability I was shot in my hand?" and then deciding it's 1/7. posted by ODiV at 10:52 AM on May 26, 2010 It is a semantics problem, not a mathematics problem. All similar problems are as well, because if they were sufficiently specified, then the maths are straightforward and obvious. Kyrademon's explanation is wonderful. However, I think a case can be made that the answer is 14/28 not 13/27 because you SHOULD double count Tuesday/Tuesday. Chortly's graphic grid was also wonderful, but it could be said that it is hiding information, because if you graph the segments independently and tally, you get a different answer. posted by discountfortunecookie at 10:57 AM on May 26, 2010 It is a semantics problem, not a mathematics problem. Says who? If you work it out as above you get a number that has no relevance to anything the person was asking about.
Are we looking for the probability that this guy has two male children or not? posted by ODiV at 11:02 AM on May 26, 2010 Tuesday is mentioned for a reason, the way I see it, and it has nothing to do with mathematics. He says, I have one boy, born on a tuesday to differentiate from his boy, who was born on a different day. If his other child was a girl, he would have no need to mention the day of the boy's birth. I'm sticking by this no matter what numbers (which only confuse me) are thrown at me. posted by Hobgoblin at 11:42 AM on May 26, 2010 "Chortly's graphic grid was also wonderful, but it could be said that it is hiding information, because if you graph the segments independently and tally, you get a different answer." What does this mean? posted by roll truck roll at 11:51 AM on May 26, 2010 This thread is funny, given that based on some Mefi comment I bought and just read The Drunkard's Walk, which uses this example to show how the human mind is not optimized for probability and statistics and why most of the time our gut feelings are wrong. Most people in this thread who don't get it are either going completely with their gut or having a conflict. If probability and statistics came as naturally to us as recognizing faces, people would not need to go to school for years and years to get a degree in the subject. BTW, the book may be interesting for those who did not love their probability and statistics classes in college. I loved mine and already was familiar with most of the book, but it gave me an excuse to freshen up my Bayes. posted by dirty lies at 12:01 PM on May 26, 2010 Brilliant! He combined the monty hall trick with the shared birthday trick. I love it! posted by furtive at 12:08 PM on May 26, 2010 If anyone is still confused or for some ridiculous reason actually thinks the answer to this riddle isn't 13/27, I have proved it with brute force for your edification.
posted by turaho at 1:39 PM on May 26, 2010 [3 favorites] But if they say "Their names are Peter and..." then the chance that the other is a boy is 50% - Peter could have a younger brother, an older brother, a younger sister or an older sister i.e. There are 4 options not 3 - Pb bP Pg gP. Sigh. That's not the same question though. The whole point of this entire thread is that the probabilities change depending on how much specific information is used to refer to the child. If you just call it "one of them is a boy" that is the most vague as it could mean either the older child or the younger child, and in that case the probability is 1/3. When you specify he was born on a Tuesday, that's another bit of specificity that raises the probability closer to 1/2. When you actually refer to him by name, that removes all doubt about which one you are referring to, as Peter can only be a given one of the two children, so the probability is 50%. Go back and read what I wrote before: Or another way of stating this: when the problem says "I have two kids, one of which is a boy, what is the probability that the other is a boy", it is not the same thing as asking "my younger child is a boy, what are the chances that my older child is a boy?" posted by Rhomboid at 2:14 PM on May 26, 2010 If anyone is still confused or for some ridiculous reason actually thinks the answer to this riddle isn't 13/27 Is the weekend drop in birthrate not applicable for some reason? posted by ODiV at 2:42 PM on May 26, 2010 Is the weekend drop in birthrate not applicable for some reason? It is not applicable because that's not what the riddle is about. It's usually understood that riddles are self-contained and have all the information stated in them required to solve them. For example, a real penny thrown by real humans is only fair to two digits but not more, but when a problem says flip a coin, everyone knows that means an exact 50/50 coin, not a .505 coin or a .498 coin.
posted by Rhomboid at 2:59 PM on May 26, 2010 Here's where I'm confused. A coin turning up heads and a child being a boy are assumed to be 50% likely because that's reasonably close to what it is in reality. Why, when presented with a number of possibilities that you know nothing about, would you assume that they're equally likely? "One of my sons found a penny on the ground. My other son also found a coin on the ground. What are the chances he found a nickel?" posted by ODiV at 3:07 PM on May 26, 2010 Is the weekend drop in birthrate not applicable for some reason? For the same reason physics problems all seem to exist in a frictionless environment. posted by turaho at 3:13 PM on May 26, 2010 I still don't get how you can just take a number of possibilities and just assume that they're equally weighted because it's an easy way to categorize something. It's like trying to figure out during what hour I am most likely to eat breakfast and then concluding that there's a 1 in 24 chance I will eat breakfast in any given hour. posted by ODiV at 3:28 PM on May 26, 2010 Well ... for one thing, ODiV, the birth rates worldwide may very well be closer to an even distribution across 7 days than the stats you posted. The U.S. rates appear to most likely be an anomaly caused largely by C-sections and induced delivery. It's not unreasonable to assume a fairly even distribution of birth by day of week. But yeah, I do take your point. You do need to know real-world distributions to solve these kinds of problems accurately once you start making them real-world problems and not just abstract math. For example, in your coin problem: Assuming that the only coins likely to be found are pennies, nickels, dimes, and quarters, then with a completely even distribution – The chance of one of two random coins being a nickel is 7 in 16. The chance of one of the coins being a nickel if the other is a penny is 2 in 7. But an even distribution seems hardly likely.
Let's assume you did research and found that on average, 40% of dropped coins are quarters, 30% are dimes, 20% are nickels, and 10% are pennies – The chance of one of two random coins being a nickel is 9 in 25. The chance of one of the coins being a nickel if the other is a penny is 4 in 19. It does change the numbers. If you want the answer in the real world, you are entirely correct that you need the parameters, and the answer to the 2-boy problem is probably somewhat different than the simplified math implies. posted by kyrademon at 3:31 PM on May 26, 2010 milestogo: Pretty much everyone in this thread? Here's turaho's math where he shows the 196 possibilities as all equally likely. posted by ODiV at 3:32 PM on May 26, 2010 Oh, I didn't realize you meant the Tuesday thing. That struck me as a given uniform distribution, unlike breakfast. posted by milestogo at 3:34 PM on May 26, 2010 Both are pretty obviously not uniform distribution in reality, which is my point. posted by ODiV at 3:37 PM on May 26, 2010 Would it help your understanding of the riddle if it had explicitly said, "Assume boys and girls are equally likely to be born and that the probability of being born on any day is 1/7"? Because implicitly, that is what it is saying. posted by Rhomboid at 3:44 PM on May 26, 2010 Would it help your understanding of this if in reality girls were 20% more likely to be born than boys, but when solving the riddle everyone just assumed an equal distribution? posted by ODiV at 3:48 PM on May 26, 2010 It's a thought experiment. It has nothing to do with reality. Nobody is predicting anything. posted by Rhomboid at 3:50 PM on May 26, 2010 And just to be clear, if you really wanted to you could work this problem out with non-equal weights given to each day. It would be much harder, and you probably could no longer do it in your head, and you'd get an answer that was pretty close to but slightly less than .50, just as you do with equal weights.
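Rhomboid's claim -- that non-equal day weights would still give an answer close to the uniform 13/27 -- can actually be checked against ODiV's 2005 birth statistics from upthread. A rough sketch (assuming those figures, a 50/50 sex ratio, and independent children):

```python
# ODiV's 2005 average births per weekday, quoted earlier in the thread.
births = {"Sun": 7374, "Mon": 11704, "Tue": 13169, "Wed": 13038,
          "Thu": 13013, "Fri": 12664, "Sat": 8459}
total = sum(births.values())
w = {d: n / total for d, n in births.items()}  # weekday weights

p_cond = 0.0   # P(at least one boy born on a Tuesday)
p_joint = 0.0  # P(two boys AND at least one Tuesday boy)
for s1 in "BG":
    for d1 in w:
        for s2 in "BG":
            for d2 in w:
                p = 0.25 * w[d1] * w[d2]  # sexes 50/50, days weighted
                if (s1, d1) == ("B", "Tue") or (s2, d2) == ("B", "Tue"):
                    p_cond += p
                    if s1 == "B" and s2 == "B":
                        p_joint += p

print(p_joint / p_cond)  # ~0.478, slightly below 13/27 ~ 0.4815
```

Since Tuesday is over-represented in those figures, the reported trait is a bit less rare than 1 in 7, so the answer lands a little below 13/27 -- consistent with the pattern that rarer traits push the probability toward 1/2 and common ones toward 1/3.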
Except you haven't gained anything; you've just made the problem unnecessarily more complicated by adding extra work that doesn't do anything towards proving the point that specifying a birth-weekday changes the answer from 1/3 to something close to 1/2. posted by Rhomboid at 3:55 PM on May 26, 2010 [1 favorite] So if the riddle specified months, would the number of days in a month be similarly irrelevant? Which answer would be the correct answer to the riddle, the one that took into account the fact that months have different lengths or the one that didn't? They'd be very close, granted, but which one would be right on a test? posted by ODiV at 4:01 PM on May 26, 2010 The word for what you're doing right now is "spoil sport." posted by roll truck roll at 4:18 PM on May 26, 2010 If the riddle specified months then it wouldn't be as good a riddle, because there is no ambiguity as to the fact that months have different lengths -- everyone knows that. However there is no inherent biological reason why birth weekdays would not be uniform, so you'd have to be familiar with the obscure details of hospital management to know that surgical procedures tend to be scheduled on weekdays so that surgeons can have their weekends off [or whatever] in order to even wonder if it's not a uniform distribution. I think it's pretty much obvious to everyone that the question is not about obscure public health statistics and that the author wants us to use 1/7, because again that's how riddles work: everything you need to answer is contained in the text. Now as to a question on a test, that's a very different matter. Unless it's their first day on the job, the professor knows to state all these assumptions so that there is no room for uncertainty. If this was a question on a statistics test, it would certainly say "Assume boys and girls are equally likely to be born and that the probability of being born on any day is 1/7".
But this isn't a formal test, it's a riddle, and riddles omit these kinds of things because, again, the unstated rule of riddles is that you don't need to bring in outside information. posted by Rhomboid at 4:19 PM on May 26, 2010 Both are pretty obviously not uniform distribution in reality, which is my point. Are you being willfully obtuse, or have you never seen a riddle before, or a physics problem? The real world has nothing whatsoever to do with this, which is why people have been able to transpose it, in this thread, to coins or whatever. Further, as has been pointed out, if you want to talk about the real world, assumptions about what counts as a rounding error are no more valid or useful or correct than the artificial strictures already in place in the riddle. I suspect you just know a fact that you consider neat and cannot bear that it doesn't have any relevance here. posted by OmieWise at 4:34 PM on May 26, 2010 [1 favorite] I understand how you guys get the answer and I understand that's what the asker is driving at. I guess I just don't understand how this is a good riddle. If someone presented this to me as a riddle, I would find it as unanswerable as "I have a son who is on the varsity high school volleyball team. If there are 12 teams in the league, what is the probability of his team winning the league round robin?" There's vital information that is missing, in my opinion. I have no issue with them tacking "Assume boys and girls are equally likely to be born and that the probability of being born on any day is 1/7" on to make it answerable. I have an issue with tacking it on myself though, because births/days of the week is not something I would automatically assume to be equal. If most people have no problem assuming that themselves, then I guess that's where I differ. Are you being willfully obtuse, or have you never seen a riddle before, or a physics problem?
Physics problems are usually pretty clear and have fairly well defined assumptions and practices. I've seen riddles before too (obviously), but they usually involve things I have no problems with making assumptions about, I guess. The word for what you're doing right now is "spoil sport." Sorry, I'm spoiling it for you. Can you skip my comments? posted by ODiV at 4:58 PM on May 26, 2010 (The following comment assumes that there are 7 days in a week, 365 days in a year, and equal numbers of odd and even numbered days. I know that's not exactly accurate, but it makes the math so much less messy, and the problem is hard enough to understand as it is, without muddying the waters further. That being said...) Well, the original article was right, I was smiling about this one all day. But I also puzzled out a general formula for the problem, and I'm surprised no one had mentioned it yet. I'm not sure it helps to make the problem less counter-intuitive, but seeing the numbers work out is somehow comforting. Where n = the number of different days the boy could have been born on, and x = the probability that both children are boys:

x = (2n − 1) / (4n − 1)

Case 1: Two children, one is a boy born on a Tuesday (n = 7, or 1 of 7 days): x = (2·7 − 1) / (4·7 − 1) = 13/27 (≈ 48%)

Case 2: Two children, one is a boy born on an odd day (n = 2, or 1 of 2 days): x = (2·2 − 1) / (4·2 − 1) = 3/7 (≈ 43%)

Case 3: Two children, one is a boy born on March 21 (n = 365, or 1 of 365 days): x = (2·365 − 1) / (4·365 − 1) = 729/1459 (≈ 49.9657%)

And trickily, this one:

Case 4: Two children, one is a boy (n = 1, or 1 of 1 days): x = (2·1 − 1) / (4·1 − 1) = 1/3 (≈ 33%)

Note that as mentioned in the original article, when the trait is more rare than 1 in 7, the probability approaches 1/2, and when the trait is less rare than 1 in 7, the probability approaches 1/3. Wow. Very neat.
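LEGO Damashii's general formula is easy to tabulate, and it can be cross-checked against a brute-force enumeration of every equally likely outcome; a small sketch (the function names are mine):

```python
from fractions import Fraction

def p_both_boys(n):
    """P(both children are boys | at least one boy with a 1-in-n trait)."""
    return Fraction(2 * n - 1, 4 * n - 1)

def brute_force(n):
    """Same probability by enumerating every equally likely outcome."""
    total = hits = 0
    for s1 in (0, 1):              # 1 = boy
        for t1 in range(n):        # trait value; 0 = the reported one
            for s2 in (0, 1):
                for t2 in range(n):
                    if (s1, t1) == (1, 0) or (s2, t2) == (1, 0):
                        total += 1
                        hits += s1 and s2
    return Fraction(hits, total)

for n in (1, 2, 7, 365):
    assert brute_force(n) == p_both_boys(n)
    print(n, p_both_boys(n), float(p_both_boys(n)))
```

As n grows, (2n − 1)/(4n − 1) creeps toward 1/2, matching the observation that rarer traits push the answer closer to 1/2.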
posted by LEGO Damashii at 5:27 PM on May 26, 2010 [2 favorites] For my sanity to summarize... First, for the pedanterists: assuming 1/2 probability for girl or boy and 1/7 probability for a birth day on any given day and neglecting twins altogether... All three answers (1/2,1/3,13/27) are correct given three distinct questions or scenarios probing different statistical samples. 1) I have two children, one is a boy (or girl...no bearing), what is the chance the other is a boy? A: 1/2. 2) I have two children, one is a boy, what is the chance both are boys? A: 1/3. 3) I have two children, one is a boy born on tuesday, what is the chance the other is a boy? A: 13/27 These correspond to statistical samples: 1) out of all families with two children, selecting one of the children at random, what is the probability it is a boy? A: 1/2. 2) out of all families with two children one of whom is a boy, what is the probability both are boys? A: 1/3 3) out of all families with two children one of whom is a boy born on tuesday (or any day), what is the probability that both are boys? A: 13/27. These samples can be created by the scenarios introduced in this blog post noted upthread; quoting: 1) "A father of two children is picked at random. He is instructed to choose a child by flipping a coin. Then he has to provide information about the chosen child in the following format: “I have a son/daughter born on Mon/Tues/Wed/Thurs/Fri/Sat/Sun.” If his statement is, “I have a son born on Tuesday,” what is the probability that the second child is also a son?" 2) "A father of two children is picked at random. If he has two daughters he is sent home and another one picked at random until a father is found who has at least one son. If he has one son, he is instructed to provide information on his son’s day of birth. If he has two sons, he has to choose one at random. 
His statement will be, "I have a son born on Mon/Tues/Wed/Thurs/Fri/Sat/Sun." If his statement is, "I have a son born on Tuesday," what is the probability that the second child is also a son?" 3) "A father of two children is picked at random. If he doesn't have a son who is born on Tuesday, he is sent home and another is picked at random until one who has a son that was born on Tuesday is found. He is instructed to tell you, "I have a son born on Tuesday." What is the probability that the second child is also a son?" The maths are shown for each case there as well, but the numbers are well established by now. In the end, to cite a number, one must ask the age-old question: what's the scenario? posted by sloe at 5:37 PM on May 26, 2010 Wow. Reading back that was a pointless derail, annoying and needlessly argumentative. Sorry for that. Please ignore. posted by ODiV at 6:12 PM on May 26, 2010 [1 favorite] This is why I love Metafilter. When this kind of shit is going on, the rest of the world is temporarily safe from math geeks! But as a non math geek, this is really interesting... posted by salishsea at 10:13 PM on May 26, 2010 I am not math savvy but I enjoy semantics. I still say that everyone is wrong so far. Ok, I never said it but I thought it. The question states 'what is the probability I have two boys?' Having two boys is the crux. Well then how many different types of children can you have? I count more than two. Boy, girl, unknown, both sexes in one child, neither sex in one child. However you figure it the odds are going to be less than 1/2 or less than 1/3. You must figure into the equation all outcomes of having another child. Boy, girl, other. Plus you would have to figure into the equation where you live and what are the chances of having a certain type of sex (or non sex) born. There is no right answer. It is like asking what colour is Thursday?
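The three father-selection scenarios sloe quotes above can also be simulated directly; a rough Monte Carlo sketch (the function name and sample size are mine), which should come out near 1/2, 1/3, and 13/27 respectively:

```python
import random
random.seed(1)

def estimate(scenario, n_families=200_000):
    hits = total = 0
    for _ in range(n_families):
        kids = [(random.choice("BG"), random.randrange(7)) for _ in range(2)]
        if scenario == 1:    # coin-flip a child; report its sex and birth day
            said = random.choice(kids) == ("B", 2)
        elif scenario == 2:  # must have a son; report a random son's birth day
            sons = [k for k in kids if k[0] == "B"]
            said = bool(sons) and random.choice(sons)[1] == 2
        else:                # kept only if some son was born on a Tuesday
            said = ("B", 2) in kids
        if said:             # count only fathers who say "son born on Tuesday"
            total += 1
            hits += kids[0][0] == "B" and kids[1][0] == "B"
    return hits / total

for s in (1, 2, 3):
    print(s, round(estimate(s), 3))  # near 0.5, 0.333, 0.481
```

The three numbers really are answers to three different sampling procedures, which is the whole "what's the scenario?" point.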
posted by Kilovolt at 7:12 AM on May 27, 2010 The whole point of using boys on a Tuesday is that it's something people are familiar with and intuitive about. It amplifies the counter-intuitiveness. If, like Kilovolt, myself, and others, you have a hard time allowing assumptions that simplify the math of the problem, you can work it out yourself with coins and dice, and get essentially the same results. You have two fair, six-sided dice, and two fair, double-sided coins. If you flip both the coins (and they land normally) and one comes up heads, there is a 1 in 3 chance that the other is also heads (and not 1 in 2, as you might think). Do it twenty times and you'll see the pattern emerge. If you then assign one die to each coin, and roll one die with one coin and one die with the other coin, and one of the sets comes up with a 3 on the die, and heads on the coin, the chances that the other coin is also heads will be 11/23. Repeat the procedure, but remember to only count the times that one of the two dice shows a 3 (or whichever number you chose, but it has to stay the same) and the coin from that set shows heads. Track the results. The math works out. Everyone knows that a coin has a 1 in 2 chance of landing on heads. So how can it be 1 in 3, or 11 in 23? That's crazy! See, it's the math that's the amazing part, not the minutiae. Don't stare so hard at the trees. They're cool in their own way, but right now we're experiencing the forest. posted by LEGO Damashii at 8:41 AM on May 27, 2010 [2 favorites] Again: it's 50%. Any explanations otherwise are trying to go beyond what the original problem said. Beanplatin' motherfuckers... posted by grubi at 8:58 AM on May 27, 2010 [1 favorite] It's a riddle, a thought experiment; it's designed to induce beanplating. The answer is most assuredly not 50%, and arriving at that conclusion does not require going beyond what is stated. posted by Rhomboid at 2:19 PM on May 27, 2010 I've been away a few days, but this problem went with me.
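LEGO Damashii's coin-and-die version doesn't even need twenty trials -- the outcome space is small enough to enumerate exactly. A quick sketch:

```python
from fractions import Fraction
from itertools import product

# Plain two-coin case: condition on "at least one heads".
coins = list(product("HT", repeat=2))
at_least_one_h = [c for c in coins if "H" in c]
print(Fraction(at_least_one_h.count(("H", "H")), len(at_least_one_h)))  # 1/3

# Coin-and-die case: each set is one of 12 (coin, die) outcomes;
# condition on "at least one set shows heads together with a 3".
sets2 = list(product(product("HT", range(1, 7)), repeat=2))
cond = [p for p in sets2 if ("H", 3) in p]
both_heads = [p for p in cond if p[0][0] == "H" and p[1][0] == "H"]
print(Fraction(len(both_heads), len(cond)))  # 11/23
```

The die plays the role of the weekday, with n = 6 instead of n = 7, which is exactly why the answer is 11/23 rather than 13/27.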
Two nights ago I was lying in bed and figured out pretty much what LEGO D says above. It took me a while to wrap my head around it, but now I'm with the 13/27 crowd. Man, I love beans! posted by MtDewd at 5:17 AM on May 31, 2010 There have been common assumptions, according to jamespake and Wikipedia: "this problem is about probability and not about obstetrics or demography. The problem would be the same if it was phrased using a gold coin and a silver coin." I agree that the problem is about probability but it is not the same as using a gold coin and a silver coin. A coin has two sides only to land on. Having a child has more than boy or girl to have for a result. This forest is unending. posted by Kilovolt at 9:16 PM on May 31, 2010 Having a child has more than boy or girl to have for a result. Are you assuming hermaphrodites, then? Because generally XX and XY are the two choices we get as parents. If we start getting into XXY's and hermaphrodites, this question is going to have a million possibilities, instead of just two: the other child is a boy, or the other child is a girl. Which makes it a 50/50 probability, no matter how you all care to beanplate it. posted by misha at 7:07 AM on June 1, 2010 So I dunno, misha, you could scroll up and read the discussion. I'm not sure if people are up for having it again. posted by roll truck roll at 9:40 AM on June 1, 2010 Sorry, roll-truck-roll, but the Engineer got home over the weekend and supported my position of the 50/50 solution, which left me feeling all math-confident and super-powerful for once, so I had to come in and start things back up again. I'm only half kidding. posted by misha at 1:01 PM on June 1, 2010
Trigonometry/Trigonometric identities

What is an identity?

An identity is an equation that holds true for all values of the variables appearing in it, because it either is a definition or is the logical consequence of a definition. An example of a definitional identity is $\tan(x)=\frac{\sin(x)}{\cos(x)}.$ An example of an identity that can logically be proven to hold for all values of its variable is the Pythagorean identity expressed in trigonometric form: $\sin^2(x)+\cos^2(x)=1.$
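Both identities can be spot-checked numerically for a handful of values; a quick sketch in Python:

```python
import math

# tan(x) = sin(x)/cos(x) holds by definition (wherever cos(x) != 0),
# and sin^2(x) + cos^2(x) = 1 holds for every x.
for x in (0.1, 0.7, 1.2, 2.5, -0.9):
    assert math.isclose(math.tan(x), math.sin(x) / math.cos(x))
    assert math.isclose(math.sin(x) ** 2 + math.cos(x) ** 2, 1.0)
print("identities hold at the sampled points")
```

A numeric check is of course not a proof: the Pythagorean identity follows from applying the Pythagorean theorem to the point (cos x, sin x) on the unit circle.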
Sensors 2010, 10, 7303-7322; doi:10.3390/s100807303

Visual Control of Robots Using Range Images

Jorge Pomares *, Pablo Gil and Fernando Torres

Physics, Systems Engineering and Signal Theory Department, University of Alicante, PO Box 99, 03080 Alicante, Spain; E-Mails: Pablo.Gil@ua.es (P.G.); Fernando.Torres@ua.es (F.T.)

* Author to whom correspondence should be addressed; E-Mail: jpomares@ua.es; Tel.: +34-965-903-400, ext. 2032; Fax: +34-965-909-750.

Received: 20 May 2010; in revised form: 23 July 2010 / Accepted: 28 July 2010 / Published: 4 August 2010

© 2010 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Abstract: In the last years, 3D-vision systems based on the time-of-flight (ToF) principle have gained importance as a means to obtain 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time to be used by the range camera in order to precisely determine the depth information.

Keywords: visual servoing; ToF cameras; self-calibration; robotics

1. Introduction

Nowadays, visual servoing is a well-known approach to guiding a robot using visual information. The two main types of visual servoing techniques are position-based and image-based [1].
The first one uses 3-D visually-derived information when making motion control decisions. The second one performs the task by using information obtained directly from the image. However, the interaction matrix employed in these visual servoing systems requires knowledge of several camera parameters and of the depth of the image features. A typical approach to determine the depth of a target is the use of multiple cameras. The most commonly applied configuration using more than one camera is stereo vision (SV). In this case, in order to be able to calculate the depth of a feature point by triangulation, the correspondence of this point in both cameras must be assured. In this paper the use of 3D time-of-flight (ToF) cameras is proposed in order to obtain the required 3D information in visual servoing approaches. These cameras provide range images which give depth measurements of the visual features. In recent years 3D-vision systems based on the ToF principle have gained importance compared to SV. Using a ToF camera, illumination and observation directions can be collinear; therefore, this technique does not produce incomplete range data due to shadow effects. Furthermore, SV systems have difficulties in estimating the 3D information of planes such as walls or roadways: they cannot find the corresponding physical point of the observed 3D space in both camera systems, and hence the 3D information of that point cannot be calculated by applying the triangulation principle. Another standard technique to obtain 3D information is the use of laser scanners. The advantages of ToF cameras over laser scanners are the high frame rates and the compactness of the sensor. These aspects have motivated the use of a ToF camera to obtain the required 3D information to guide the robot. Some previous works have been developed in order to guide a robot by visual servoing using ToF cameras.
Among these works, a visual servoing system using PSD (Position Sensitive Device) triangulation for PCB manufacturing is presented in [2]. In [3] a position-based visual servoing is described to perform the tracking of a moving sphere using a pan-tilt unit; in this last paper a ToF camera manufactured by CSEM is used. A similar approach is described in [4] to determine object positions by means of an eye-to-hand camera system. Unlike these previous approaches, in this paper the range images are not used directly to estimate the 3D pose of the objects in the workspace. A new image-based visual servoing system which integrates range information in the interaction matrix is presented to perform the robot guidance. Another advantage of the proposed system over the previous ones is the possibility of performing the camera calibration during the task. To do so, the visual servoing system uses the range images not only to determine the depths of the features but also to adjust the ToF camera parameters during the task. When a ToF camera is used, some aspects must be taken into consideration, such as large fluctuations in precision caused by external interfering factors (e.g., sunlight) and scene configurations (i.e., distances, orientations and reflectivity). These influences produce systematic errors which must be processed. Specifically, the distance computed from the range images varies considerably depending on the integration time parameter. This paper presents a method for the online adaptation of the integration time of ToF cameras. This online adaptation is necessary to capture the images in the best condition independently of the changes in distance (between camera and objects) caused by the movements of the camera when it is mounted on a robotic arm. Previous works have been developed for ToF camera calibration [5-7]. These works perform an estimation of the camera parameters and distance errors when static scenes are observed.
In those works, a fixed distance between the camera and the objects is considered. Therefore, these previous works cannot be applied in visual servoing tasks where the camera performs the tracking of a given trajectory. In this last case, camera parameters such as the integration time must be modified in order to optimally observe the scene. To do this, several previous works adapt the camera parameters, such as the integration time, during the task. In [8] a CSEM SwissRanger camera is employed for the navigation of a mobile robot in an environment with different objects. This work automatically estimates the value of the integration time according to the intensity pattern obtained by the camera. However, this parameter depends on illumination and reflectance conditions. To solve this problem, in [9] a PMD camera is also used for mobile robot navigation. This work proposes an algorithm based on the amplitude parameter. In contrast with [4], the range of working distance analyzed is between 0.25 m and 1 m for the application of visual servoing. This paper is organized as follows: In Section 2, a visual servoing approach for guiding a robot by using an eye-in-hand ToF camera is presented. Section 3 describes the operation principle of the ToF cameras and the PMD camera employed. In Section 4, an offline camera calibration approach for computing the required integration time from an amplitude analysis is shown. In Section 5, an algorithm for updating the integration time during the visual servoing task is described. In Section 6, experimental results confirm the validity of the visual servoing system and the calibration method. The final section presents the main conclusions.
2. Visual Servoing Using Range Images

A visual servoing task can be described by an image function, $e_t$, which must be regulated to 0:

$$e_t = s - s^* \quad (1)$$

where $s = (f_1, f_2, \ldots, f_M)$ is an $M \times 1$ vector containing the $M$ visual features observed at the current state ($f_i = (f_{ix}, f_{iy})$), while $s^* = (f_1^*, f_2^*, \ldots, f_M^*)$ denotes the visual feature values at the desired state, i.e., the image features observed at the desired robot location. In Figure 1(a) the eye-in-hand camera system is shown: a PMD19k camera located at the end-effector of a 7 d.o.f. Mitsubishi PA-10 robot acquires grayscale images of 160 × 120 pixels. In Figure 1(b), an example of a visual servoing task is represented; this figure shows the initial and desired image features from the camera point of view. $L_s$ represents the interaction matrix, which relates the variations in the image to the variations in the camera pose [1]:

$$\dot{s} = L_s \cdot \dot{r} \quad (2)$$

where $\dot{r}$ represents the camera velocity. By imposing an exponential decrease of $e_t$ ($\dot{e}_t = -\lambda_1 e_t$) it is possible to obtain the following control action for a classical image-based visual servoing:

$$v_c = -\lambda_1 \hat{L}_s^+ (s - s^*) \quad (3)$$

where $\lambda_1 > 0$ is the control gain, $\hat{L}_s^+$ is the pseudoinverse of an approximation of the interaction matrix and $v_c$ is the eye-in-hand camera velocity obtained from the control law in order to continuously reduce the error $e_t$. $\hat{L}_s^+$ is chosen as the Moore-Penrose pseudoinverse of $\hat{L}_s$ [1]. In order to completely define the control action, the value of the interaction matrix for the visual features extracted from the range images will be obtained in the following paragraphs. First, the interaction matrix will be calculated when only one image feature $(f_x, f_y)$ is extracted.
The transformation between the range image $I(i,j)$ and the 3D coordinates (relative to the camera position) is given by [10]:

$$x_P^C = \frac{z_P^C f_x'}{s_x f}, \qquad y_P^C = \frac{z_P^C f_y'}{s_y f}, \qquad z_P^C = \frac{I(f_x, f_y)\, f}{\sqrt{f^2 + (f_x'/s_x)^2 + (f_y'/s_y)^2}} \quad (4)$$

where $f$ is the camera focal length, $s_x$ and $s_y$ are the pixel sizes in the $x$ and $y$ directions and $f_x'$, $f_y'$ are the pixel coordinates relative to the position $(u_0, v_0)$ of the optical center on the sensor array ($f_x' = f_x - u_0$, $f_y' = f_y - v_0$). To obtain the interaction matrix, the intrinsic parameters $\xi = (f_u, f_v, u_0, v_0)$ are considered, where $f_u = f \cdot s_x$ and $f_v = f \cdot s_y$. Therefore, considering these intrinsic parameters, Equation (4) is equal to:

$$x_P^C = z_P^C \frac{f_x - u_0}{f_u}, \qquad y_P^C = z_P^C \frac{f_y - v_0}{f_v}, \qquad z_P^C = \frac{I(f_x, f_y)}{\sqrt{1 + \left(\frac{f_x - u_0}{f_u}\right)^2 + \left(\frac{f_y - v_0}{f_v}\right)^2}} \quad (5)$$

From (5) the coordinates of the image feature can be obtained as:

$$\begin{bmatrix} f_x \\ f_y \end{bmatrix} = \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} + \frac{1}{z_P^C}\begin{bmatrix} f_u & 0 \\ 0 & f_v \end{bmatrix}\begin{bmatrix} x_P^C \\ y_P^C \end{bmatrix} \quad (6)$$

The time derivative of the previous equation is:

$$\begin{bmatrix} \dot{f}_x \\ \dot{f}_y \end{bmatrix} = -\frac{\dot{z}_P^C}{(z_P^C)^2}\begin{bmatrix} f_u & 0 \\ 0 & f_v \end{bmatrix}\begin{bmatrix} x_P^C \\ y_P^C \end{bmatrix} + \frac{1}{z_P^C}\begin{bmatrix} f_u & 0 \\ 0 & f_v \end{bmatrix}\begin{bmatrix} \dot{x}_P^C \\ \dot{y}_P^C \end{bmatrix} \quad (7)$$

Considering the camera velocity $\dot{x}_P^C, \dot{y}_P^C, \dot{z}_P^C$ divided into translational velocity $\dot{x}_t^C, \dot{y}_t^C, \dot{z}_t^C$ and rotational velocity $\dot{\alpha}^C, \dot{\beta}^C, \dot{\gamma}^C$, the following expression can be obtained from Equation (7):

$$\dot{s} = \begin{bmatrix} \dot{f}_x \\ \dot{f}_y \end{bmatrix} = -\frac{-x_P^C\dot{\beta}^C + y_P^C\dot{\alpha}^C + \dot{z}_t^C}{(z_P^C)^2}\begin{bmatrix} f_u & 0 \\ 0 & f_v \end{bmatrix}\begin{bmatrix} x_P^C \\ y_P^C \end{bmatrix} + \frac{1}{z_P^C}\begin{bmatrix} f_u & 0 \\ 0 & f_v \end{bmatrix}\begin{bmatrix} -y_P^C\dot{\gamma}^C + z_P^C\dot{\beta}^C + \dot{x}_t^C \\ -z_P^C\dot{\alpha}^C + x_P^C\dot{\gamma}^C + \dot{y}_t^C \end{bmatrix} \quad (8)$$

Developing the previous equation, an expression which relates the time derivative of the image features to the camera translational and rotational velocity can be obtained:

$$\dot{s} = \underbrace{\begin{bmatrix} \dfrac{f_u}{z_P^C} & 0 & -\dfrac{f_x-u_0}{z_P^C} & -\dfrac{(f_x-u_0)(f_y-v_0)}{f_v} & \dfrac{(f_x-u_0)^2+f_u^2}{f_u} & -\dfrac{f_u(f_y-v_0)}{f_v} \\[2mm] 0 & \dfrac{f_v}{z_P^C} & -\dfrac{f_y-v_0}{z_P^C} & \dfrac{-(f_y-v_0)^2-f_v^2}{f_v} & \dfrac{(f_x-u_0)(f_y-v_0)}{f_u} & \dfrac{f_v(f_x-u_0)}{f_u} \end{bmatrix}}_{L_s} \begin{bmatrix} \dot{x}_t^C \\ \dot{y}_t^C \\ \dot{z}_t^C \\ \dot{\alpha}^C \\ \dot{\beta}^C \\ \dot{\gamma}^C \end{bmatrix} \quad (9)$$

where:

$$z_P^C = \frac{I(f_x, f_y)}{\sqrt{1 + \left(\frac{f_x - u_0}{f_u}\right)^2 + \left(\frac{f_y - v_0}{f_v}\right)^2}} \quad (10)$$

The matrix obtained in Equation (9) is the interaction matrix, $L_s$; therefore, $\dot{s} = L_s \cdot \dot{r}$. The pseudoinverse of the interaction matrix derived in (9) is used in the control action (3). In this last equation, an approximation of the interaction matrix is considered due to the necessity of estimating the camera intrinsic parameters, $\xi$. If $M$ visual features can be extracted from the image, the interaction matrix can be obtained as $L_s = [L_{s1}\ L_{s2}\ \ldots\ L_{sM}]^T$, where $L_{si}$ is the interaction matrix determined in (9) for only one feature. Various previous works have studied the stability of image-based visual servoing. In applications with commercial robots the complete dynamical robot model is not provided; in these cases, the system stability is deduced from kinematic properties [11-14]. Paper [1] describes that local asymptotic stability can be ensured when the number of rows of the interaction matrix is greater than 6; however, global asymptotic stability cannot be ensured. As indicated in [1], to ensure local stability, the desired visual features must be close to the current ones. Furthermore, $\hat{L}_s^+$ and $L_s^+$ must be equal or very similar. To this end, the camera depth and intrinsic parameters must be correctly computed. The algorithm of [15] has been used in order to estimate the camera intrinsic parameters. In addition, the accurate determination of the camera depth is one of the main problems; it will be solved in the following sections.

3. Analysis of the Distance Measurement Computed with the ToF Camera

In this section, a behaviour analysis of ToF cameras is provided. This analysis helps to define the methods to improve the depth measurement which will be used in the visual servoing system. A PMD19k camera has been used in this analysis. The PMD19k camera contains a Photonic Mixer Device (PMD) array with a size of 160 × 120 pixels.
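As a sketch of how Equations (3), (9) and (10) fit together, the fragment below back-projects each pixel feature with its measured range, builds the stacked interaction matrix and computes the camera velocity. The intrinsic values and the gain used in the usage line are illustrative placeholders, not the calibrated PMD19k parameters.

```python
import math
import numpy as np

def pixel_to_camera(fx, fy, rng, fu, fv, u0, v0):
    """Back-project pixel (fx, fy) with measured radial range `rng`
    (the value I(fx, fy) of the range image) into camera coordinates,
    following Equations (5) and (10)."""
    nx = (fx - u0) / fu
    ny = (fy - v0) / fv
    z = rng / math.sqrt(1.0 + nx * nx + ny * ny)
    return nx * z, ny * z, z

def interaction_rows(fx, fy, z, fu, fv, u0, v0):
    """The two rows of the interaction matrix of Equation (9) for one
    image feature at depth z."""
    x = fx - u0
    y = fy - v0
    return np.array([
        [fu / z, 0.0, -x / z, -x * y / fv, (x * x + fu * fu) / fu, -fu * y / fv],
        [0.0, fv / z, -y / z, -(y * y + fv * fv) / fv, x * y / fu, fv * x / fu],
    ])

def control_action(features, desired, ranges, intrinsics, lam1=0.8):
    """Camera velocity v_c = -lam1 * pinv(L) * (s - s*) of Equation (3).
    `features`/`desired` are lists of (fx, fy) pixels; `ranges` holds
    the measured radial range of each current feature."""
    fu, fv, u0, v0 = intrinsics
    L = np.vstack([
        interaction_rows(fx, fy,
                         pixel_to_camera(fx, fy, r, fu, fv, u0, v0)[2],
                         fu, fv, u0, v0)
        for (fx, fy), r in zip(features, ranges)
    ])
    e = (np.asarray(features, float) - np.asarray(desired, float)).ravel()
    return -lam1 * np.linalg.pinv(L) @ e

# Illustrative usage with the four centroids of Section 6 and made-up
# intrinsics (fu, fv, u0, v0):
vc = control_action([(7, 23), (27, 12), (17, 41), (37, 29)],
                    [(85, 40), (115, 24), (103, 71), (134, 52)],
                    [966.0] * 4, (160.0, 160.0, 80.0, 60.0))
```

When the current and desired features coincide, the error vector is zero and the returned velocity is the zero vector, as Equation (3) requires.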
This technology is based on CMOS technology and the time-of-flight (ToF) principle. There are other similar cameras based on the same principle and on CMOS technology, such as the CamCube 2 or 3 of PMD-Technologies and the SR2, SR3000 or SR4000 of CSEM-Technologies. The specifications and a comparison of the behaviour of these cameras are available in [8] and [16], respectively. The PMD19k works with near-infrared (NIR) light with a wavelength of 870 nm and can capture up to 15 fps with a depth resolution of 6 mm. Furthermore, in the experiments presented here, the camera is connected by Ethernet and is programmable through an SDK for Windows, although it can also be connected through a Firewire interface and programmed under Linux. The ToF camera technology is based on the principle of modulation interferometry [6,16]. The scene is illuminated with NIR light (the PMD19k module uses a default frequency of ω = 20 MHz) and this light is reflected by the objects in the scene. The difference between the emitted and reflected signals causes a phase delay which is detected for each pixel and used to estimate the distance value. Thus, the ToF camera provides 2½D depth information of dynamic or static scenes irrespective of the objects' features: intensity, depth and amplitude data simultaneously for each pixel of each captured image. The intensity represents the grayscale information, the depth is the distance value calculated within the camera and the amplitude is the signal strength of the reflected signal (the quality of the depth measurements).
Then, given the speed of light, c, the modulation frequency, ω, and the correlations between the signals for four internal phase delays, $r_0(0°)$, $r_1(90°)$, $r_2(180°)$, $r_3(270°)$, the camera computes the phase delay, ϕ, the amplitude, a, and the distance between the sensor and the target, z, as follows:

$$\phi = \arctan\left(\frac{r_1 - r_3}{r_0 - r_2}\right) \quad (11)$$

$$a = \frac{\sqrt{(r_1 - r_3)^2 + (r_0 - r_2)^2}}{2} \quad (12)$$

$$z_P^C = \frac{c\,\phi}{4\pi\omega} \quad (13)$$

This type of camera has some disadvantages [17]: they are sensitive to background light and interferences, which cause oversaturated and underexposed pixels. The PMD camera has two adjustable parameters to attenuate these errors: the modulation frequency and the integration time. In order not to change the original calibration determined by the manufacturer, only the behaviour of the integration time has been studied for adjustment. The integration time is defined as the exposure time, i.e., the effective length of time the camera's shutter is open. This time is needed so that the light reaches the image sensor suitably. In a visual servoing system with an eye-in-hand configuration (Figure 1) the camera is mounted at the end-effector of a robotic arm. Therefore, when the robot moves, the distance between the sensor and the target, $z_P^C$, changes, and the integration time, τ, has to be adjusted online to minimize the error in the computed depth. Whenever this parameter is suitably computed, the range image can be acquired in better conditions, and so the feature extraction process in the image can be improved, obtaining the best features without modifying the light environment or the object surfaces in the scene. Figure 2 shows the stability of the distance measurements obtained from the range images with respect to the integration time. May et al. [9] show this dependency in a SwissRanger SR-2 camera for the navigation of a mobile robot. The same is studied by Wiedemann et al. [8] to build maps with a mobile robot and by Gil et al.
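Equations (11)-(13) can be transcribed almost directly. The sketch below uses `atan2` in place of the plain arctangent so the quadrant of the phase is preserved; the modulation frequency defaults to the PMD19k's 20 MHz.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_measurement(r0, r1, r2, r3, omega=20e6):
    """Phase delay, amplitude and radial distance from the four
    phase-shifted correlation samples, following Equations (11)-(13)."""
    phi = math.atan2(r1 - r3, r0 - r2)          # Equation (11)
    amp = math.hypot(r1 - r3, r0 - r2) / 2.0    # Equation (12)
    z = C * phi / (4.0 * math.pi * omega)       # Equation (13)
    return phi, amp, z
```

Note that with ϕ ranging over a full cycle, Equation (13) gives an unambiguous range of c/(2ω) ≈ 7.5 m at 20 MHz, which is consistent with the working distances reported in this paper.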
[17] to guide a robotic arm by using an eye-in-hand configuration for visual servoing (Figure 1); in this last work, a PMD19k camera was used. In previous works, some experiments were done in order to observe the evolution of the distance measured by the camera when the integration time changed. In those experiments, from 750 images (with an integration time offset of 100 ms between images), a relationship between the mean distance value, $z_P^C$, and the integration time, τ, in microseconds is shown when the robot (Figure 1) is moved and the distance between sensor and target changes. As Figure 2 shows, when the integration time is small, the computed distance is unstable and not trustworthy. In the same way, when the integration time is high, an oversaturation phenomenon sometimes appears in the signal which determines the distance curves. Normally, this phenomenon only appears when the distance measured between scene and camera is below a fixed nominal distance or distance threshold, as explained in [17]. In Figure 2(a), oversaturation appears when the integration time is greater than 45 ms. However, in Figure 2(b), the oversaturation only occurs when the integration time is greater than 70 ms. Therefore, the nearer the target is, the smaller the integration time threshold must be, and the farther the target is, the more precise the computed distance is. In addition, something similar happens with the intensity, as explained in [9], although it is more sensitive to the background light and interferences [8,12]. Consequently, in the calibration process, the flat zone of the curve (Figure 2) has to be computed in order to use a ToF camera such as the PMD19k for visual servoing. This zone determines the minimum and maximum integration times allowed to avoid the oversaturation and instability problems.
4. Camera Calibration: Computing Integration Time from an Amplitude Analysis

In this paper, these values have been fixed using the calibration method presented in [17], where the histograms which represent the frequency distributions of the amplitude measurements of the PMD19k are adjusted by means of probability density functions (PDF) using the Kolmogorov-Smirnov and Anderson-Darling methods. As regards the amplitude measurements, the curve which shows the evolution of the mean amplitude can be computed from a set of images acquired using a nominal fixed distance (the same as the mean distance that was computed in Figure 2). The analysis of the mean amplitude curve determines the integration time thresholds, [τ_min, τ_max], which are needed in order to guarantee the precise computation of the distance measurements (Figure 3). The amplitude parameter, a, of a ToF camera defines the quality of the range images computed using a specific integration time. The minimum threshold, τ_min, is computed as the minimum integration time needed to compute the image depth in the desired camera location. It is determined as the time value where a least squares line fitting the mean amplitude curve crosses the zero axis (Figure 3). The maximum threshold, τ_max, is computed as the maximum integration time needed to compute the image depth in the initial camera location.
These limits (Figure 3) are computed depending on the distance between target and camera by means of an offline process, as follows:

1. Place the robot in the initial pose and capture an image, I_τ, for each integration time τ ∈ [0, 85 ms].
2. At each iteration:
   - Compute the mean amplitude, a_m.
   - Estimate the frequency histogram for a_m and fit it by means of the K-S and A-D tests in order to classify the scene, according to a look-up table, as a near or far target.
3. τ_min is computed from the zero crossing determined by the fitting of the curve which represents the image at the maximum distance (min{τ} to capture the image at the maximum working distance) (see Figure 3).
4. τ_max is computed as the suitable integration time for obtaining a desired mean amplitude, a_d, such that: if the target is near, then a_d = max{a_m}; else a_d = upper_quartile{a_m}.

The amplitude analysis of Figure 3 shows a group of curves (one curve for each camera location). The curves show how the linearisation level (the flat part of the slope) determines the degree of oversaturation. Thus, the amplitude curves grow quickly until they reach an absolute maximum value when the camera is near the target, and the curves are more linear when the camera is moved away from the target. Once the integration time values for the final and initial camera positions have been computed, some intermediate integration times, τ_k (Figure 3(a)), are computed for the robot trajectory. To do this, empirical tests have been carried out with the following algorithm:

1. Fix the integration time as τ_0 = τ_max for image I_0.
2. Compute the deviation error e_a = a_d − (a_m)_0, where a_d = max{a_m} according to a desired minimum distance.
3. Update the integration time following the control law τ_k = τ_{k−1}(1 + K · e_a), where K is a proportional constant adjusted depending on the robot velocity.

In this way, some intermediate integration time values, τ_k ∈ [τ_min, τ_max], have been estimated for different distances between the final and the initial positions.
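A minimal sketch of one step of the update law τ_k = τ_{k−1}(1 + K·e_a), clamped to the calibrated limits [τ_min, τ_max]. The gain K and the default limits (the 10 ms and 57.4 ms values quoted later in the paper) are illustrative here, not the tuned constants; all times are in ms.

```python
def update_integration_time(tau_prev, a_mean, a_desired, K=0.01,
                            tau_min=10.0, tau_max=57.4):
    """One iteration of tau_k = tau_{k-1} * (1 + K * e_a), where
    e_a = a_d - a_m is the amplitude deviation error, clamped to the
    calibrated integration-time limits (times in ms)."""
    e_a = a_desired - a_mean
    tau = tau_prev * (1.0 + K * e_a)
    return min(max(tau, tau_min), tau_max)
```

If the measured mean amplitude is below the desired one, the integration time grows (more light is accumulated); if it is above, the integration time shrinks to avoid oversaturation.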
Therefore, the proper computation of $\partial \tau / \partial z_P^C$ is done using a polynomial interpolation which fits these intermediate positions (Figure 4). In general, polynomial interpolation may not fit precisely at the end points, but this is not a problem because these are fixed with the integration times needed for the desired and the initial camera positions. Considering τ_min and τ_max as the values 10 ms and 46.4 ms (the upper quartile of the maximum value shown in Figure 3(b), 57.4 ms), respectively, and some intermediate times, τ_k, all computed according to the previous calibration method, $\partial \tau / \partial z_P^C$ is given by:

$$\frac{\partial \tau}{\partial z_P^C} = 2.8825 z_\mu^4 + 4.5556 z_\mu^3 - 4.581 z_\mu^2 + 0.4968 z_\mu + 11.8853 \quad (14)$$

where:

$$z_\mu = \frac{z_P^C - 662}{193} \quad (15)$$

5. Algorithm for Updating the Camera Integration Time During the Task

From the previous analysis, a method to automatically update the integration time is presented in this section in order to be applied during visual servoing tasks. Considering $^CM_O$ the extrinsic parameters (the pose of the object frame with respect to the camera frame), an object point can be expressed in the camera coordinate frame as:

$$P_P^C(x_P^C, y_P^C, z_P^C) = {}^C M_O\, P_P^O \quad (16)$$

Considering a pin-hole camera projection model, the point $P_P^C$, with 3D coordinates relative to the camera reference frame, is projected onto the image plane at the 2D point p. This point is computed from the focal length (the distance between the retinal plane and the optical center of the camera) as:

$$p = (x, y)^T = \left(f\frac{x_P^C}{z_P^C},\ f\frac{y_P^C}{z_P^C}\right)^T \quad (17)$$

Finally, the units of (17), specified in metric units (e.g., mm), are scaled and transformed into pixel coordinates relative to the image reference frame as:

$$s = (f_x, f_y) = (u_0 + f_u x,\ v_0 + f_v y) \quad (18)$$

where $\xi = (f_u, f_v, u_0, v_0)$ are the camera intrinsic parameters. The intrinsic parameters describe properties of the camera used, such as the position of the optical center $(u_0, v_0)$, the size of the pixel and the focal length defined by $(f_u, f_v)$.
They are computed from a calibration process based on [15]. During a visual servoing task, the camera extrinsic parameters are not known, and $^CM_O$ is considered an estimation of the real camera pose. In order to determine this pose, we must progressively minimize the error between the observed data, $s_o$, and the position of the same features computed by back-projection employing the current extrinsic parameters, s (16)-(18). Therefore, an error function which must be progressively reduced is defined as:

$$e = s - s_o \quad (19)$$

The time derivative of e will be:

$$\dot{e} = \dot{s} - \dot{s}_o = \frac{\partial s}{\partial r}\frac{\partial r}{\partial t} = L_s \frac{\partial r}{\partial t} \quad (20)$$

To make e decrease exponentially to 0 ($\dot{e} = -\lambda_2 e$), we obtain the following control action:

$$\frac{\partial r}{\partial t} = -\lambda_2 L_s^+ e \quad (21)$$

where $\lambda_2$ is a positive control gain and $L_s^+$ is the pseudoinverse of the interaction matrix (9). Once the error is annulled, the extrinsic parameters are obtained. This approach is used by virtual visual servoing systems to compute camera locations; more details about the convergence, robustness and stability of the system can be found in [11,12]. Consequently, two estimations are obtained for the depth of a given image feature: one depth ($z_1$) from the previously estimated extrinsic parameters and another depth ($z_2 = z_P^C$) from (10). This last depth is calculated from the range image and, therefore, can be updated by modifying the camera integration time. The adequate integration time is obtained when $z_1$ and $z_2$ are equal. Therefore, a new control law is applied in order to update the integration time, τ, by minimizing the error between $z_1$ and $z_2$:

$$\frac{\partial \tau}{\partial t} = -\lambda_3 \frac{\partial \tau}{\partial z_P^C}(z_2 - z_1) \quad (22)$$

where $\lambda_3 > 0$. The algorithm for updating the camera integration time is summarized in the following lines. First, perform the offline camera calibration to determine the initial integration time and $\partial \tau / \partial z_P^C$ (see Section 4).
At each iteration of the visual servoing task:

1. Apply the control action to the robot: $v_c = -\lambda_1 \hat{L}_s^+ (s - s^*)$.
2. Estimate the extrinsic parameters using virtual visual servoing.
3. Determine the depth $z_1$ from the previous extrinsic parameters and $z_2$ from the range image (10).
4. Update the integration time by applying $\partial \tau / \partial t = -\lambda_3 (\partial \tau / \partial z_P^C)(z_2 - z_1)$.

In order to describe more clearly the interactions among all the subsystems that compose the proposed visual servoing system, a block diagram is represented in Figure 5. In this block diagram (Figure 5) it is possible to observe that, in the feedback loop of the visual servoing system, a complete convergence of the virtual visual servoing is performed in order to determine the extrinsic parameters. Moreover, the convergence and stability aspects of using virtual visual servoing techniques in the feedback of a visual servoing system are discussed in [18].

6. Results

The target used for the experiments can be seen in Figure 1. This target is composed of four objects on a black table used as background to ensure a low reflectivity at the borders. The PMD19k is mounted at the end-effector of a Mitsubishi PA-10 with 7 d.o.f. In addition, the ambient light (exterior light source) was controlled with a power regulator for this work in indoor environments; special care was taken to avoid interference with the NIR light of the camera. The real distance between the camera and the target (background and objects) for this first experiment was 600 < $z_P^C$ < 966 mm. The initial and final camera locations were $P_{Pi}^C = (0, 0, 966)$ mm and $P_{Pf}^C = (-100, -200, 600)$ mm, respectively. The features are computed as the centroids of the four objects represented in the range image acquired by the PMD19k (Figure 1).
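Steps 3 and 4 above can be sketched as follows: `dtau_dz` transcribes the fitted polynomial of Equations (14)-(15), and `tau_step` performs one Euler step of the continuous update law of Equation (22). The gain λ3 and the step size `dt` are illustrative values, not the paper's tuned constants.

```python
def dtau_dz(z_mm):
    """Sensitivity of the integration time to the camera depth z (in mm),
    from the fitted polynomial of Equation (14)."""
    zu = (z_mm - 662.0) / 193.0                      # Equation (15)
    return (2.8825 * zu**4 + 4.5556 * zu**3
            - 4.581 * zu**2 + 0.4968 * zu + 11.8853)

def tau_step(tau, z1, z2, lam3=0.1, dt=0.04):
    """One Euler step of d(tau)/dt = -lam3 * (dtau/dz) * (z2 - z1)
    (Equation (22)); z1 comes from the estimated extrinsic parameters,
    z2 from the range image."""
    return tau - lam3 * dtau_dz(z2) * (z2 - z1) * dt
```

When the two depth estimates coincide, the integration time stops changing, which is exactly the equilibrium the control law of Equation (22) is designed to reach.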
The pixel coordinates of these centroids are p_1 = (7,23)^T, p_2 = (27,12)^T, p_3 = (17,41)^T and p_4 = (37,29)^T for the initial robot pose, and p_1 = (85,40)^T, p_2 = (115,24)^T, p_3 = (103,71)^T and p_4 = (134,52)^T for the final pose. Figure 6 depicts the initial and final positions of the visual features and the eye-in-hand camera. In Figure 7, the measured depth data from a range image is shown for three different camera locations. A single range image is plotted, but from three different camera locations (offsets ΔP = (ΔX, ΔY, ΔZ) mm between locations) with the same integration time value, 53 ms. This plot shows distinct systematic errors when the integration time is not updated or is chosen inadequately. However, these errors can be easily corrected by applying the method presented in Section 5. Thus, the combination of the calibration method for estimating the integration time in the initial position [17] and the method to update the integration time presented in Sections 4 and 5 significantly improves the quality of the measured depth data. Furthermore, Figure 8 shows how the depth and amplitude measured by the PMD19k change when the integration time is not updated according to the distance between camera and target while the robot is moving. The PMD19k has been configured with several different integration times (17, 27, 53 and 70 ms); for example, 53 ms and 27 ms are close to the adequate integration times for the initial and final camera locations, respectively. The experimental results show that whenever the integration time is greater than the optimal value (such as 70 ms), the amplitude values show instability after the maximum amplitude is reached (Figure 8(a)). Furthermore, if the integration time used is smaller than the optimal value (such as 17 ms), many iterations are needed until the distance is computed correctly (Figure 8(b)). However, an integration time of 27 ms computes a depth for the final position close to the final camera location.
Applying the algorithm described in Section 5 from the initial and desired image feature locations, the image trajectory presented in Figure 9(a) is obtained. In this figure, it is possible to observe that the image features follow a straight line between the initial and the final locations. Furthermore, in Figure 9(b), the camera poses during the visual servoing task are represented. It is possible to observe that the visual servoing task is correctly performed; therefore, we can conclude that a correct behaviour is obtained both in the image and in the 3D space. In Figure 10 the velocities of the robot's end-effector applied during the visual servoing task are represented. In order to perform the correct tracking, the integration time is updated at each iteration of the visual servoing task using the algorithm described in Section 5. Figure 11 shows the value of the integration time considered at each iteration. Finally, considering these values of the integration time, the new range images obtained at ΔP = (0,0,0) mm, ΔP = (20,40,80) mm and ΔP = (40,80,120) mm are represented in Figure 12. Comparing these figures with the ones obtained in Figure 7, it is possible to observe that the update process of the integration time based on the proposed algorithm eliminates the previous errors. The image ranges shown in Figure 11 are better than those in Figure 6 because the integration time has been updated during the visual servoing task. The distance between the camera and the target has changed, as Figure 10 shows, and the PMD19k camera has been self-configured with suitable integration time values; in this example, the integration times were 53, 41 and 35 ms. In this case, a trajectory with a displacement only in depth is described. The initial and final positions of the features in the image are (68,51), (86,51), (68,70), (86,70) and (56,43), (93,43), (56,80), (93,80), respectively.
The initial distance between the eye-in-hand camera and the object is 1,160 mm and the final distance is 560 mm. By using the proposed control law, the robot is able to perform the displacement in depth precisely, as Figure 13 shows. In order to complete the task, the integration time has been updated using the algorithm described in Section 5, and thus the evolution represented in Figure 14 is obtained. As we have previously indicated [see Figure 2(b)], the minimum, τ_min, and maximum, τ_max, values of the integration time are 10 ms and 57.4 ms, respectively. Therefore, when the theoretical value for the integration time is greater than τ_max this parameter is saturated to 57.4 ms (see Figure 14). As described in [1], in classical image-based visual servoing systems the depth of each image feature must be estimated at each iteration of the control scheme. In order to avoid the necessity of estimating these parameters, one popular approach is to choose $\hat{L}_s^+ = L_{s^*}^+$, where $L_{s^*}$ is the value of $L_s$ for the desired position $s^*$. In this case, $L_{s^*}^+$ is constant, and only the desired depth of each point has to be set; thus, no varying 3D parameters have to be estimated during the visual servoing. In this section, a comparison between this last approach and the one proposed in this article is shown. To do so, a visual servoing task is considered in which the initial positions of the visual features in the image are (105,83), (119,73), (114,98), (130,89) and the desired positions are (13,27), (44,20), (20,58), (51,51) [Figure 15(a)]. The initial and final positions of the eye-in-hand camera are represented in Figure 15(b). Figure 16 shows the evolution of the image features obtained when a classical image-based visual servoing system with $\hat{L}_s^+ = L_{s^*}^+$ is applied. In this case, the visual features are lost and the image features do not converge towards the desired ones.
However, the use of the control law and the depth estimation proposed in Equations (9) and (10) generates the behaviour represented in Figure 17. In this last figure we can see that the visual servoing system is able to converge towards the desired location. This experiment shows the necessity of correctly estimating the depth parameters in order to assure the correct convergence. In this experiment there are important variations in the distance between the camera and the object from which the features are extracted. The initial and final depths are 1,160 mm and 680 mm, respectively, and during the task the depth reaches up to 1,760 mm. Thus, considering a fixed integration time, important errors appear and the task cannot be performed. Therefore, the integration time has to be updated with the approach described in this paper, and thus the evolution represented in Figure 18 is obtained. In this experiment the integration time is limited to values between the minimum, $\tau_{min}$, and maximum, $\tau_{max}$, in the same way as in the previous experiment, according to Figure 2(b).

This paper presents a new image-based visual servoing system which integrates range information in the interaction matrix. Another property of the proposed system is the possibility of performing the camera calibration during the task. To do this, the visual servoing system uses the range images not only to determine the depths of the object features but also to adjust the camera integration time during the task. When a ToF camera is employed to guide a robot, the distance between the camera and the objects of the workspace changes. Therefore, the camera integration time must be updated in order to correctly observe the objects of the workspace. As demonstrated in the experiments, the integration time must be updated depending on the distance between the camera and the objects.
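The update-and-saturate behaviour described here (recompute the integration time from the current camera-object distance at each iteration, then limit it to the camera's valid range of 10 to 57.4 ms) can be sketched as follows. The constant slope `dtau_dz` is a placeholder assumption standing in for the paper's polynomial interpolation of the derivative of the integration time with respect to depth, not the actual calibration:

```python
TAU_MIN_MS = 10.0   # minimum valid integration time of the PMD camera
TAU_MAX_MS = 57.4   # maximum valid integration time of the PMD camera

def update_integration_time(tau_ms, depth_change_mm, dtau_dz=0.05):
    """One iteration of the update: adjust tau with a local slope
    dtau/dz (a constant placeholder here; the paper interpolates it
    from calibration data), then saturate to the camera's range."""
    tau_ms += dtau_dz * depth_change_mm
    return min(max(tau_ms, TAU_MIN_MS), TAU_MAX_MS)
```

For instance, starting from 53 ms, a sufficiently large increase in depth drives the theoretical value above the maximum, and the returned time saturates at 57.4 ms, matching the behaviour reported for Figure 14.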
The use of the proposed approach guarantees that the information obtained from the ToF camera is accurate because an adequate integration time is employed at each moment. This last aspect permits obtaining a better estimation of the objects' depth. Therefore, the behaviour of the visual servoing is enhanced with respect to previous approaches where this parameter is not accurately estimated. Currently, we are working on determining an accurate dynamic model of the robot to improve the visual servoing control law in order to assure the given specifications during the task.

The authors want to express their gratitude to the Spanish Ministry of Science and Innovation for their financial support through the project DPI2008-02647 and to the Research and Innovation Vicepresident Office of the University of Alicante for their financial support through the emergent projects.

References:

1. Chaumette, F.; Hutchinson, S. Visual servo control. I. Basic approaches. IEEE Robotics & Automation Magazine 2006, 13, 82–90. doi:10.1109/MRA.2006.250573.
2. de Jong, F.; Pieter, P.J. Visual Servoing in PCB Manufacturing. Proceedings of the 6th Annual Conference of the Advanced School for Computing and Imaging (ASCI), Lommel, Belgium, June 14–16, 2000; pp. 59–63.
3. Reiser, U.; Kubacki, J. Using a 3D Time-of-Flight Range Camera for Visual Tracking. Proceedings of the 6th IFAC Symposium on Intelligent Autonomous Vehicles, Toulouse, France, September 3–5, 2007.
4. Klank, U.; Pangercic, D.; Rusu, R.B.; Beetz, M. Real-time CAD Model Matching for Mobile Manipulation and Grasping. Proceedings of the 9th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Paris, France, December 7–10, 2009.
5. Fuchs, S.; Hirzinger, G. Extrinsic and Depth Calibration of ToF-cameras. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, AK, USA, June 24–26, 2008; pp. 1–6.
6. Khongsab, P. Master Thesis. Luleå University of Technology, Luleå, Sweden, 2009.
7. Lindner, M.; Kolb, A. Calibration of the Intensity-Related Distance Error of the PMD TOF-Camera. Proceedings of SPIE XXV Conference on Intelligent Robots and Computer Vision, Boston, MA, USA, September 15–17, 2007; 6764, 6771.
8. Wiedemann, M.; Sauer, M.; Driewer, F.; Schilling, K. Analysis and Characterization of the PMD Camera for Application in Mobile Robotics. Proceedings of the 17th IFAC World Congress, Coex, Korea, July 6–11, 2008; pp. 13689–13694.
9. May, S.; Werner, B.; Surmann, H.; Pervölz, K. 3D Time-of-flight Cameras for Mobile Robotics. Proceedings of the IEEE Conference on Intelligent Robots and Systems (IROS 2006), Beijing, China, October 9–15, 2006; pp. 790–795.
10. Mure-Dubois, J.; Hugli, H. Fusion of Time of Flight Camera Point Clouds. Proceedings of the Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications, Marseille, France, October 18, 2008.
11. Comport, A.I.; Marchand, E.; Pressigout, M.; Chaumette, F. Real-time Markerless Tracking for Augmented Reality: The Virtual Visual Servoing Framework. IEEE Transactions on Visualization and Computer Graphics 2006, 12, 615–628. doi:10.1109/TVCG.2006.78.
12. Marchand, E.; Chaumette, F. Virtual Visual Servoing: A Framework for Real-time Augmented Reality. Computer Graphics Forum 2002, 21, 289–298. doi:10.1111/1467-8659.t01-1-00588.
13. Benhimane, S.; Malis, E. Homography-based 2D Visual Tracking and Servoing. The International Journal of Robotics Research 2007, 26, 661–676. doi:10.1177/0278364907080252.
14. Hadj-Abdelkader, H.; Mezouar, Y.; Martinet, P.; Chaumette, F. Catadioptric Visual Servoing from 3D Straight Lines. IEEE Transactions on Robotics 2008, 24, 652–665. doi:10.1109/TRO.2008.919288.
15. Zhang, Z. Flexible Camera Calibration by Viewing a Plane from Unknown Orientations. Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece, September 20–27, 1999; Vol. 1, pp. 666–673.
16. Rapp, H. Master Thesis. University of Heidelberg, Heidelberg, Germany, September 2007.
17. Gil, P.; Pomares, J.; Torres, F. Analysis and Adaptation of Integration Time in PMD Camera for Visual Servoing. Proceedings of the 20th International Conference on Pattern Recognition (ICPR 2010), Istanbul, Turkey, August 2010.
18. Pomares, J.; Chaumette, F.; Torres, F. Adaptive visual servoing by simultaneous camera calibration. Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, April 10–14, 2007; pp. 2811–2816.

Figure captions:

Figure 1. (a) Eye-in-hand configuration. (b) Image acquired from the range camera point of view.
Figure 2. Evolution of the mean distance of the range image for two different scenes: (a) An object and the camera moved between 0.5 m and 1 m. (b) Four objects and the camera moved between 0.3 m and 0.8 m.
Figure 3. Evolution of the mean amplitude, $a_m$, for the tests of Figure 2.
Figure 4. Polynomial interpolation applied to compute $\partial\tau/\partial z_P^C$ for distances between 0.3 and 1 m, for the tests of Figure 2.
Figure 5. Block diagram of the visual servoing system.
Figure 6. (a) Initial position of the image features and the eye-in-hand camera. (b) Final position of the image features and the eye-in-hand camera. (Trajectory 1).
Figure 7. Range image computed for the integration time of 53 ms.
Figure 8. (a) Evolution of the measured amplitude when the integration time is not updated. (b) Evolution of the depth parameter when the integration time is not updated.
Figure 9. Trajectory during the visual servoing task. (a) Trajectory of the image features. (b) Trajectory of the eye-in-hand camera. Experiment 1.
Figure 10. Velocities during the visual servoing task. Experiment 1.
Figure 11. Integration time values at each iteration of the visual servoing task. Trajectory 1.
Figure 12. Range image computed for the integration time updated at each iteration.
Figure 13. Trajectory during the visual servoing task. (a) Trajectory of the image features. (b) Trajectory of the eye-in-hand camera. Experiment 2.
Figure 14. Integration time values at each iteration of the visual servoing task. Experiment 2.
Figure 15. (a) Initial position of the image features and the eye-in-hand camera. (b) Final position of the image features and the eye-in-hand camera. Experiment 3.
Figure 16. Image trajectory when $\hat{L}_s^+ = L_{s^*}^+$.
Figure 17. Trajectory during the visual servoing task. (a) Trajectory of the image features. (b) Trajectory of the eye-in-hand camera. Experiment 3.
Figure 18. Integration time values at each iteration of the visual servoing task. Experiment 3.
Are Journal Accept Rates as Low as They Look?

Paula England, Stanford University, and former editor of ASR (1994–1996)

Authors aspiring to publish in a sociology journal typically understand that, in the best case, an article gets accepted only after an invitation to revise and resubmit (an R&R). They often want to know the probability that an author sending an article to this journal will eventually get it accepted by this journal. But, oddly enough, this is not what ASA journals' "accept rates," previously published annually in Footnotes but now online, tell us.

Here is how ASA (and some other scholarly journals) compute their accept rates. The basic concept is to take acceptances during the year as a ratio of all decisions—positive and negative—made in the year. ASA puts all decisions in the denominator, including accepts, rejections, conditional accepts, and invitations to revise and resubmit. In effect, original submissions and revisions (after an R&R or conditional accept) count as separate manuscripts for purposes of the accept rate. A manuscript that ultimately gets accepted counts twice—as one accept and one nonaccept.

If we want the accept rate to answer the question I posed above, a better procedure would be to put only final decisions in the denominator—accepts and rejects decided during the year. Thus, every paper would only enter the statistics once, counting as an acceptance regardless of how many revisions it went through, or a reject if it was ultimately rejected, either originally or after a revision.

For Example

Consider the following hypothetical—a journal in which all papers submitted are eventually accepted, but every paper goes through one R&R decision on the way. An author submitting would know her or his paper was sure to be accepted eventually, so calling the accept rate 100% makes sense in this scenario, and this is what we would get if only final decisions were in the denominator.
However, the way ASA calculates its journals' accept rates, the rate is only 50% despite the fact that every paper is ultimately accepted. If every paper required one R&R and one conditional accept, the rate would drop to 33%. Thus, under the present way of calculating rates, differences across editors within a journal, between journals, or between disciplines may be affected by how many revisions editors typically require before acceptance.

How much difference would it really make if only final decisions were put in the denominator? Clearly accept rates would be higher if only final decisions were in the denominator (the numerator is the same under either system). To find out I asked the editors of two ASA journals and the journal of Sociologists for Women in Society to share their 2008 statistics with me so I could see what difference it makes to calculate accept rates with only final decisions included in the denominator. (Thanks to Randy Hodson and Vincent Roscigno, editors of the American Sociological Review (ASR); Gary Alan Fine, editor of Social Psychology Quarterly; and Dana Britton, editor of Gender & Society, for the data from which I calculated the numbers.)

In 2008, ASR's official rate was 8.25%, calculated using ASA's method, with a denominator including final accepts and rejects, as well as the intermediate decisions allowing revision. If the denominator had included only final decisions, the accept rate would have been 11.42%. The second rate is 38% higher than the first (the difference between the two over 8.25 is .38). Similar computations for Social Psychology Quarterly show that their official 2008 accept rate of 9.43% would be 15.96% if only final decisions were in the denominator, a 69% increase.
If I apply the ASA method to Gender & Society statistics, its accept rate would be 9.67%; with only final decisions in the denominator, it is 11.88%, which is 23% higher. Arguments For and Against An argument sometimes made for the status quo is that, when trying to convince an interdisciplinary tenure and promotion committee that a colleague has published in very selective journals, the lower the rate the more useful for the case. However, even the more realistic accept rates that I calculated above using only final decisions as the base show that our journals are extremely selective. A downside of the current system is that it gives authors an unrealistically low idea of their chances that their paper will ultimately be accepted by a journal. Moreover, the rate as now calculated is reduced when editors increase the typical number of revisions required before papers are ultimately accepted, even if the probability of eventual acceptance does not change. I suggest that we change how ASA calculates accept rates, taking a given year’s number of accepts as a percent of all final decisions made that year (accepts and rejects).
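The two counting schemes can be made concrete in a few lines; the numbers below reproduce the hypothetical journal from the example above (every paper eventually accepted, each after exactly one R&R), not any journal's actual statistics:

```python
def asa_rate(accepts, rejects, intermediate):
    """ASA's method: accepts over ALL decisions made in the year,
    including intermediate ones (R&Rs and conditional accepts)."""
    return accepts / (accepts + rejects + intermediate)

def final_rate(accepts, rejects):
    """Proposed method: accepts over final decisions only."""
    return accepts / (accepts + rejects)

# Hypothetical journal: 100 submissions, all eventually accepted,
# each after one revise-and-resubmit decision.
print(asa_rate(accepts=100, rejects=0, intermediate=100))  # 0.5
print(final_rate(accepts=100, rejects=0))                  # 1.0
```

Adding one conditional-accept decision per paper puts 300 decisions in the ASA denominator and drives the reported rate down to 33%, exactly as described above.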
Posts from August 23, 2007 on The Unapologetic Mathematician

As Todd Trimble pointed out, things get really nice when a category is enriched over itself. That is, the morphisms from one object to another in $\mathcal{V}$ themselves have the structure of an object of $\mathcal{V}$. This is trivially the case for $\mathbf{Set}$, because there's a set of functions from one set to another. We also know that in $\mathbf{Ab}$ there's an abelian group of homomorphisms from one abelian group to another. We say that the category has an "internal hom functor", because the hom functor lands back inside the category itself, rather than in the category of sets.

For the moment, let's consider a category $\mathcal{V}$ that is not only monoidal (which is needed to have an enriched category), but also symmetric and closed. Remember that "closed" means we have an adjunction $\underline{\hphantom{X}}\otimes B\dashv (\underline{\hphantom{X}})^B$ for each object $B$. In $\mathbf{Set}$ the set $A^B$ is the set of functions from $B$ to $A$, while in $\mathbf{Ab}$ it's the abelian group of homomorphisms from $B$ to $A$. We see that these are already the internal hom functors we're looking for in these situations.

So in general let's take our symmetric, monoidal, closed category $\mathcal{V}$, with underlying ordinary category $\mathcal{V}_0$. The adjunction between the monoidal structure and the exponential has a counit — an arrow $A^B\otimes B\rightarrow A$ — which corresponds to "evaluation" in both of our sample cases. That is, it takes a function $f:B\rightarrow A$ and an element $b\in B$ and gives an element $f(b)\in A$. We can use this to build a category. Start with the objects of $\mathcal{V}_0$, and define the hom-object from $B$ to $A$ as $A^B$ (using the exponential functor from the closed structure). We need to find arrows $A^B\otimes B^C\rightarrow A^C$ and $\mathbf{1}\rightarrow A^A$, and we'll use the adjunction to do it.
For composition, we have the arrow $(A^B\otimes B^C)\otimes C\rightarrow A^B\otimes(B^C\otimes C)\rightarrow A^B\otimes B\rightarrow A$, where the first step is the associator and the other two are evaluations. This is an element of $\hom_{\mathcal{V}_0}((A^B\otimes B^C)\otimes C,A)$, so the adjunction sends it to an element of $\hom_{\mathcal{V}_0}(A^B\otimes B^C,A^C)$, as we require. For identities, we can just use the left-unit arrow $\mathbf{1}\otimes A\rightarrow A$ and pull the same trick. Now properties of adjoints give us the required relations to make this a category enriched over $\mathcal{V}$. And finally we can check that $V(A^B)=\hom_{\mathcal{V}_0}(\mathbf{1},A^B)\cong\hom_{\mathcal{V}_0}(B,A)$, so the "underlying set" of $A^B$ is actually the set of morphisms from $B$ to $A$ in the underlying category $\mathcal{V}_0$. This justifies our suspicions that the $\mathcal{V}$-category we just built is in fact $\mathcal{V}$ itself, now as a category enriched over itself.
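For the concrete case where $\mathcal{V}$ is $\mathbf{Set}$, the evaluation counit and the transposed composition arrow can be written out directly. This is just an illustrative sketch of the adjunction bookkeeping in Python, not part of the original argument:

```python
# In Set, the internal hom A^B is the set of functions from B to A.

def ev(f, b):
    """The counit A^B x B -> A: evaluate a function at an element."""
    return f(b)

def compose(f, g):
    """The transpose of the arrow (A^B x B^C) x C -> A built above:
    it sends a pair (f, g) in A^B x B^C to an element of A^C."""
    return lambda c: ev(f, ev(g, c))

# Transpose of the left-unit arrow 1 x A -> A: the identity in A^A.
identity = lambda a: a
```

For example, `compose(str, abs)(-3)` evaluates to `'3'`: the abstract construction specializes to ordinary composition of functions, as promised.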
Characterizing polyhedron from Brownian particle collisions with a boundary

Please imagine that we have an ordinary 2-sphere, of radius $r_{sphere}$, and some three-dimensional polygon that has all of its points fixed at positions strictly internal to the sphere's surface. Also confined in the sphere is a point-like particle (with diffusion constant $D_{particle}$) undergoing Brownian motion. The surface of the 2-sphere, as well as the surface of the polygon internal to the 2-sphere, are perfect reflecting boundaries for the particle. Working in discrete time, we track the point-like particle for $N$ finite time units (we'll call them seconds), $(t_1, t_2, ..., t_k, ..., t_N) \in T$. However, during this time the only information we are allowed to record is:

1. Whether a collision between the probe and the surface of the 2-sphere occurs at a given time point, $t_k$. And if there is at least one such collision during $t_k$...

2. The coordinates of a collision event on the surface of the sphere, randomly selected from all collisions that occur during $t_k$.

Beyond, perhaps, the volume of the polygon in the sphere (and I'm not entirely sure this is learnable), how (if at all possible) can we use the information specified above to characterize the polygon in some additional manner?

Update - If we apply the further restriction that the polyhedron is convex, in the limit of large $N$ will there be enough information from the collisions to reconstruct the convex polyhedron?

pr.probability geometry stochastic-calculus

Trivial observation: the symmetry group of a convex polytope about the origin (if nontrivial) will be manifested in the density function of the collisions. – Steve Huntsman Aug 23 '10 at 16:35

I assume by "three-dimensional polygon" you mean a polyhedron, a closed surface composed of flat faces?
– Joseph O'Rourke Aug 23 '10 at 17:31

You might look at the MO question "Algorithm for finding the volume of a convex polytope," mathoverflow.net/questions/979/… , for most algorithms use random walks inside the polyhedron to estimate the volume. – Joseph O'Rourke Aug 23 '10 at 17:34

Joseph, yes, I do mean a polyhedron, but not necessarily a convex polyhedron. – Rob Grey Aug 23 '10 at 18:09

@Rob: Ah, then the MO question on convex polytopes may not be so relevant... – Joseph O'Rourke Aug 23 '10 at 18:14
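A minimal discrete-time sketch of the recording protocol in the question, with two loudly flagged simplifications: the internal polyhedron is ignored entirely, and the sphere reflection is a simple radial mirror rather than a true specular bounce:

```python
import math
import random

def record_collisions(r=1.0, sigma=0.05, steps_per_second=100,
                      seconds=100, seed=0):
    """Walk a point particle inside a sphere of radius r.  Whenever a
    step leaves the sphere, note the boundary hit and reflect the
    particle radially back inside.  Per the protocol, report at most
    one randomly chosen collision coordinate per one-second window."""
    rng = random.Random(seed)
    pos = [0.0, 0.0, 0.0]
    recorded = []
    for _ in range(seconds):
        hits = []
        for _ in range(steps_per_second):
            pos = [p + rng.gauss(0.0, sigma) for p in pos]
            d = math.sqrt(sum(p * p for p in pos))
            if d > r:                                   # boundary crossed
                hits.append([p * r / d for p in pos])   # point on the sphere
                pos = [p * (2.0 * r - d) / d for p in pos]  # radial mirror
        if hits:
            recorded.append(rng.choice(hits))
    return recorded
```

Every recorded point lies on the sphere, so for large $N$ the empirical density of these points is the observable the question asks about; any anisotropy in it reflects the hidden obstacle (or, as Steve Huntsman's comment notes, at least its symmetry group).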
Reference Request: de Rham vs. Dolbeault

Hi everyone. I need the following statement: For a Kähler manifold $X$, the natural map $H^n(X,\mathbb{C})\to H^n(X,\mathcal{O})$ (from the sheaf extension) coincides with the Hodge projection $\Pr_{0,n}$, up to the de Rham isomorphism and the Dolbeault isomorphism. Does anybody know a good reference?

P.S. Surely there must be a reference. I am much less interested in proofs: I think I know one.

Answer:

There are lots of references. Mainly every textbook which treats Hodge theory. Try to look at:

• Voisin: Hodge theory and complex algebraic geometry. I
• Huybrechts: Complex geometry
• Wells: Differential analysis on complex manifolds
• Griffiths, Harris: Principles of algebraic geometry

There, you will find mainly the proof in the case $n=2$, which is used to prove the Lefschetz theorem on $(1,1)$-classes. The general case is a straightforward adaptation of that argument.

Yes, but to write in a paper "The general case is a straightforward adaptation of that" is not a good manner, is it? And the proofs in Griffiths&Harris and Voisin are rather specific for $n=2$. (They may be generalized, to be sure, but I wouldn't say it is straightforward. Do not remember about other books). Do not take me wrong, but it doesn't look like a very good reference. – Alex Gavrilov Jul 24 '12 at 13:29

The point is rather that you didn't say you needed this reference for a paper you are writing. In this case, sincerely, you can just state that fact as well-known. No referee would protest! – diverietti Jul 25 '12 at 7:03

Yes, perhaps you are right. Anyway, it won't do any harm: if the referee insists, then I can write my own proof (with the reference to Griffiths&Harris for a special case). Of course I should have made my purpose clear from the beginning, so we could avoid this bit of confusion.
However, I still hope that someone may give me a reference for a complete proof, which is why I do not accept your answer. Do not mind this. By the way, I have Griffiths&Harris on my bookshelf. – Alex Gavrilov Jul 26 '12 at 13:29

Of course I don't mind! If I find a complete reference I'll tell you! – diverietti Jul 26 '12 at 16:24
[plt-scheme] htdp 18.1.6 From: wooks . (wookiz at hotmail.com) Date: Sun Jul 9 18:02:46 EDT 2006 hi danny, I am talking about problem 18.1.6. I feel I understand the examples given but none of them exemplify stepping through a function that has recursive calls like 18.1.6. ----Original Message Follows---- From: Danny Yoo <dyoo at hkn.eecs.berkeley.edu> To: "wooks ." <wookiz at hotmail.com> CC: plt-scheme at list.cs.brown.edu Subject: Re: [plt-scheme] htdp 18.1.6 Date: Sun, 9 Jul 2006 08:20:17 -0700 (PDT) On Sun, 9 Jul 2006, wooks . wrote: >I don't feel equipped to do this as the examples from the book all involve >non-recursive functions. Hi Wooks, Are we talking about exercise 18.1.1? I'm not sure why having recursion affects how you'd solve the problems. Exercise 18.1.1 do have recursive functions in the second and third subproblems, but why would that cause difficulties? Nothing in the question itself asks to do something wacky in terms of recursion. But let's go further in this. Would you be able to do the first subproblem, the one involving: (local ((define x (* y 3))) (* x x)) Would you be able to do the same for a different example that does involve functions, like this? (local ((define (hypotenuse a b) (sqrt (+ (square a) (square b)))) (define (square x) (* x x))) (hypotenuse 3 4)) That is, if you can't do the whole problem first, try a few easier examples (or ask for some alternatives) to prep yourself. Good luck! Posted on the users mailing list.
10 second questions Re: 10 second questions Hi bobbym, The solutions #4557 and #4558 are perfect. Excellent! #4560. Find the value of x if of 140 + x = 800. Character is who you are when no one is looking. Re: 10 second questions In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: 10 second questions Hi bobbym, The solutions #4559 and #4560 are correct. Neat work! #4561. Find the value of #4562. Find the value of . Character is who you are when no one is looking. Re: 10 second questions In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: 10 second questions Hi bobbym, The solutions #4561 and #4562 are perfect. Good work! #4563. Find #4564. Simplify : Character is who you are when no one is looking. Re: 10 second questions In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: 10 second questions Hi bobbym, The solutions #4563 and #4564 are correct. Neat work! Character is who you are when no one is looking. Re: 10 second questions In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: 10 second questions Hi bobbym, The solutions #4565 and #4566 are correct. Neat job! #4568. Find the value of x if . Character is who you are when no one is looking. Re: 10 second questions In mathematics, you don't understand things. You just get used to them. 
I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: 10 second questions Hi bobbym, The solutions #4567 and #4568 are correct. Good work! #4569. Find the value of x if #4570. Find the value of 1.07 x 65 + 1.07 x 26 + 1.07 x 9. Character is who you are when no one is looking. Re: 10 second questions In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: 10 second questions Hi bobbym, The solutions #4569 and #4570 are correct. Good work! #4571. Find the value of x if #4572. Find the value of x if . Character is who you are when no one is looking. Re: 10 second questions In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: 10 second questions Hi bobbym, The solution #4571 is correct. Neat work! The revised problem #4572 : #4573. 3/4 of 2/9 of 1/5 of a number is 249.6. What is 50% of that number? #4574. Find the value of . Character is who you are when no one is looking. Re: 10 second questions In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: 10 second questions Hi bobbym, The solutions #4572, #4573, and #4574 are correct. Brilliant! #4575. Find the value of #4576. Find the value of . Character is who you are when no one is looking. Re: 10 second questions Hi ganesh; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. 
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: 10 second questions Hi bobbym, The solutions #4575 and #4576 are correct. Neat work! #4577. Find the value of #4578. Find the value of x. Character is who you are when no one is looking. Re: 10 second questions Hi ganesh; In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Super Member Re: 10 second questions Last edited by Shivamcoder3013 (2013-02-28 01:32:13) I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes Young man, in mathematics you don't understand things. You just get used to them. - Neumann Full Member Re: 10 second questions Re: 10 second questions Devantè wrote: Great to have you back, Ganesh. 1. No 2. No 3. 10? Unless it's a trick question 4. 3, not counting the start. EDIT: Wait - I think number 4 was a trick question. You didn't specify that there was a destination - So I don't think there is way of knowing, without saying that the third stop was the I might be too late but it should be around 20 since you write one six each sixties but two on sixty-six Re: 10 second questions 4579. Convert the following into vulgar fractions: (a) 3.004 (b) 0.0056 Character is who you are when no one is looking. Re: 10 second questions In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
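Two of the problems above can be checked mechanically with exact rational arithmetic; a small sketch for #4573 and #4579:

```python
from fractions import Fraction

# #4573: 3/4 of 2/9 of 1/5 of a number is 249.6; find 50% of the number.
factor = Fraction(3, 4) * Fraction(2, 9) * Fraction(1, 5)  # = 1/30
x = Fraction("249.6") / factor
print(x, x / 2)            # 7488 3744

# #4579: the decimals written as vulgar fractions in lowest terms.
print(Fraction("3.004"))   # 751/250
print(Fraction("0.0056"))  # 7/1250
```

`Fraction` reduces to lowest terms automatically, which is exactly the "vulgar fraction" form asked for.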
GDT Symbols, Terms & Definitions - Thelen Tree Farm

GD & T Symbols, Terms and Definitions

Geometric Dimensioning & Tolerancing (GD & T) (sometimes referred to as GDT) is a set of standard symbols which are used to define part and assembly features and their tolerance zones when dimensioning engineering drawings. It defines a part based on how it functions, and it helps individuals understand design intent by providing better tools for describing drawings. Currently, ASME Y14.5M-1994 is the accepted geometric dimensioning and tolerancing standard.

General terms:

Basic Dimension - A numerical value used to describe the theoretically exact size, profile, orientation, or location of a feature or datum target. It is the basis from which permissible variations are established by tolerances on other dimensions, in notes, or in feature control frames.

Datum - A theoretically exact point, axis, or plane derived from the true geometric counterpart of a specified datum feature. A datum is the origin from which the location or geometric characteristics of features of a part are established.

Datum Target / Datum Point - A specific point, line, or area on a part used to establish a datum.

Maximum Material Condition (MMC) - The condition in which a feature of size contains the maximum amount of material within the stated limits of size - for example, minimum hole diameter, maximum shaft diameter.

Least Material Condition (LMC) - The condition in which a feature of size contains the least amount of material within the stated limits of size - for example, maximum hole diameter, minimum shaft diameter.

Regardless of Feature Size (RFS) - The term used to indicate that a geometric tolerance or datum reference applies at any increment of size of the feature within its size tolerance.

Full Indicator Movement (FIM) - The total movement of an indicator when appropriately applied to a surface to measure its variations (formerly called total indicator reading, TIR).

Virtual Condition - The boundary generated by the collective effects of the specified MMC limit of size of a feature and any applicable geometric tolerances.

Feature Control Frame - The feature control frame consists of: A) type of control (geometric characteristic), B) tolerance zone, C) tolerance zone modifiers (i.e., MMC or RFS), D) datum references if applicable and any datum reference modifiers.

Tolerances by category:

Form
- Flatness - A two-dimensional tolerance zone defined by two parallel planes within which the entire surface must lie.
- Straightness - A condition where an element of a surface or an axis is a straight line.
- Circularity - A condition on a surface of revolution (cylinder, cone, sphere) where all points of the surface intersected by any plane perpendicular to a common axis (cylinder, cone) or passing through a common center (sphere) are equidistant from the axis or the center.
- Cylindricity - A condition on a surface of revolution in which all points of the surface are equidistant from a common axis.

Orientation
- Perpendicularity (squareness) - The condition of a surface, axis, median plane, or line which is exactly at 90 degrees with respect to a datum plane or axis.
- Angularity - The distance between two parallel planes, inclined at a specified basic angle, in which the surface, axis, or center plane of the feature must lie.
- Parallelism - The condition of a surface or axis which is equidistant at all points from a datum of reference.

Location
- True Position - A zone within which the center, axis, or center plane of a feature of size is permitted to vary from its true (theoretically exact) position.
- Concentricity - A cylindrical tolerance zone whose axis coincides with the datum axis and within which all cross-sectional axes of the feature being controlled must lie. (Note: this is very expensive and time consuming to measure. It is recommended that you try position or runout as an alternative tolerance.)

Profile
- Profile of a Line - A uniform two-dimensional zone limited by two parallel zone lines extending along the length of a feature.
- Profile of a Surface - A uniform three-dimensional zone contained between two envelope surfaces separated by the tolerance zone across the entire length of a surface.

Runout
- Runout - A composite tolerance used to control the relationship of one or more features of a part to a datum axis during a full 360-degree rotation about the datum axis. Each circular element of the feature/part must be within the runout tolerance.
- Total Runout - A composite tolerance used to control the relationship of one or more features of a part to a datum axis during a full 360-degree rotation about the datum axis.

References: Anonymous, 1994, Dimensioning and Tolerancing, ANSI Y14.5M-1994, ASME International, New York
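The Virtual Condition entry above is effectively a formula: the MMC size limit combined with the applicable geometric tolerance. A rough numerical sketch (the function name and example values are ours, not from the standard):

```python
def virtual_condition(mmc_size, geometric_tolerance, feature="shaft"):
    """Approximate virtual condition boundary (illustrative only).

    For an external feature (shaft): VC = MMC size + geometric tolerance.
    For an internal feature (hole):  VC = MMC size - geometric tolerance.
    """
    if feature == "shaft":
        return mmc_size + geometric_tolerance
    return mmc_size - geometric_tolerance

# Hypothetical shaft: MMC diameter 10.0 mm, 0.2 mm position tolerance at MMC
print(virtual_condition(10.0, 0.2, "shaft"))  # 10.2
# Hypothetical hole: MMC diameter 9.8 mm, same tolerance
print(virtual_condition(9.8, 0.2, "hole"))    # about 9.6
```

In practice, a functional gage is commonly sized to this boundary, which is why the collective effect of size limit and geometric tolerance matters.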
{"url":"http://www.thelen.us/1gdt.php","timestamp":"2014-04-20T18:33:57Z","content_type":null,"content_length":"18054","record_id":"<urn:uuid:4b65f517-b807-49a8-a509-68eaa5f53e1a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
quicksort : Java Glossary
C.A.R. Hoare’s recursive sorting technique. It works with a pivot element, moving all keys smaller than the pivot to one side and all the keys bigger to the other. Then it recursively sorts each half. QuickSort can be pathologically slow if the data are already ordered. In Java, QuickSort is slower than either HeapSort or RadixSort. Typical QuickSort implementations are unstable since they scramble keys to avoid pathological pre-orderings. Free Java source code is available from Roedy Green at Canadian Mind Products. Oddly, the Haskell version of Quicksort is probably the easiest to understand:
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)
The first line reads: the result of sorting an empty list ([]) is an empty list. The second line reads: to sort a list whose first element is x and the rest of which is called xs, sort the elements of xs that are less than x, sort the elements of xs that are greater than or equal to x, and concatenate (++) the results with x sandwiched in the middle. To learn more about quicksort’s behaviour see Eppstein’s paper. QuickSort source code download.
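For readers more at home in an imperative language, here is a sketch of the same filter-based formulation in Python (function name ours). Note that, like the Haskell one-liner, it builds new lists rather than using Hoare's in-place partitioning:

```python
def qsort(xs):
    """Quicksort in the style of the Haskell version: pivot on the first
    element, recursively sort the smaller and greater-or-equal partitions,
    and concatenate with the pivot sandwiched in the middle."""
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    return (qsort([x for x in rest if x < pivot])
            + [pivot]
            + qsort([x for x in rest if x >= pivot]))

print(qsort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Like the Haskell version, this exhibits the pathological behaviour described above on already-sorted input, since the first element is always the pivot.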
{"url":"http://www.mindprod.com/jgloss/quicksort.html","timestamp":"2014-04-16T19:32:01Z","content_type":null,"content_length":"11124","record_id":"<urn:uuid:3bb22396-69cd-445b-b985-9e880d6103f9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
MB threadview page ybf - there's a concept in law: >In contract law, a mistake is an erroneous belief, at contracting, that certain facts are true. It can be argued as a defence, and if raised successfully can lead to the agreement in question being found void ab initio or voidable, or alternatively an equitable remedy may be provided by the courts. Common law has identified three different types of mistake in contract: the 'unilateral mistake', the 'mutual mistake' and the 'common mistake'. It is important to note the distinction between the 'common mistake' and the 'mutual mistake'.< We have indeed been talking about different things. I believe you are correct in your argument and I plead guilty to reading over your point. What I meant originally, and what Tversky discussed, was the mathematical equivalence of a 100% chance of making $.03, and 3% chance of making $1.00. Thus the relevance of my comments about a tangential colloquy. While the example is renovated the original point about cognitive bias obscuring rational assessments of quantitative differentials and equivalents, and their potential role in investment theses,
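The equivalence mentioned here is one of expected value: both gambles are worth three cents on average. A quick check (illustrative code only):

```python
def expected_value(probability, payoff):
    """Expected monetary value of a single-outcome gamble."""
    return probability * payoff

sure_thing = expected_value(1.00, 0.03)  # 100% chance of making $0.03
long_shot = expected_value(0.03, 1.00)   # 3% chance of making $1.00
print(sure_thing == long_shot)  # True: both equal $0.03
```

Tversky's point, as invoked above, is that people nonetheless treat these mathematically equivalent prospects very differently.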
{"url":"http://finance.yahoo.com/mbview/threadview/?bn=c7049093-faf3-334f-a3c6-cb6ecaff97fc&tid=1301389064000-765b4216-dd24-3977-8a77-5e58dbf312f1","timestamp":"2014-04-17T03:01:50Z","content_type":null,"content_length":"151729","record_id":"<urn:uuid:c29ec396-7b38-4624-b61b-881101abb1f5>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/patbate21/answered/1","timestamp":"2014-04-16T10:20:27Z","content_type":null,"content_length":"78643","record_id":"<urn:uuid:b8c88316-6c81-48e5-b128-1a1cd49da7eb>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
PowerPoint Presentation - Motivation: Teaching + Learning PPT Presentation Summary : Title: PowerPoint Presentation - Motivation: Teaching & Learning Author: Lawrence R. Rogien Keywords: Doyle, Self-regulated learner, Competitive, cooperative Source : http://www.marietta.edu/~bauerm/educ202/20212_files/20212.ppt PowerPoint Presentation - Motivation - Marietta College PPT Presentation Summary : Overview What Is Motivation? ... Allyn and Bacon Four Approaches to Motivation Self-Schemas Interests and Motivation Goal Orientation and Motivation Teachers, ... Source : http://www.marietta.edu/~bauerm/educ202/20210_files/20210.ppt Motivation - Riverdale High School PPT Presentation Summary : Title: Motivation Author: Kimberly Last modified by: Rutherford County Schools Created Date: 9/22/2010 9:24:11 AM Document presentation format: On-screen Show Source : http://www.rhs.rcs.k12.tn.us/teachers/sprinklej/documents/CharacterMotivation.ppt Motivation: Intrinsic vs. Extrinsic - Albert Lea PPT Presentation Summary : Motivation Intrinsic vs. Extrinsic Student Motivation Motivation is typically defined as the forces that account for the arousal, selection, direction, and ... Source : http://swa1.albertlea.k12.mn.us/cmcintyre/Shared%20Documents/Education%20Information/Motivation.ppt Student Motivation in Reading - Appalachian State University PPT Presentation Summary : Interviews With Reading Teachers. ... Students’ motivation to read is affected by several factors. Every student is different in how they respond to books and reading. Source : http://www.ltl.appstate.edu/prodlearn/prodlearn/POL_summer_2011/Kivett_Tara__2011/artifacts/Student%20Motivation-Final.pptx Motivation in the Classroom - Montana State University Billings PPT Presentation Summary : Motivation in the Classroom Chapter ... Motivation problems No school/life connection Alienation from school Lack of involvement in classes/school activities Teachers ... 
Source : http://www.msubillings.edu/COEFaculty/barfield/ETP/EDF%20250/Motivation%20in%20the%20Classroom%20cont.%2097%20MW.ppt Intrinsic Motivation In Education PPT Presentation Summary : Intrinsic Motivation In The Classroom By: ... Conclusion There are six important guidelines for teachers to follow: a. Teachers are enablers, not rewarders. Source : http://peggyloveu.wikispaces.com/file/view/Intrinsic+Motivation+In+The+Classroom+%E5%A0%B1%E5%91%8A.ppt Motivation - blueprintinstitute - home PPT Presentation Summary : Motivation – teachers who motivate do not necessarily make learning fun, but they make it attainable and purposeful. Motivation Myth #1 “ ... Source : http://blueprintinstitute.wikispaces.com/file/view/Motivation.ppt Educational Psychology, Canadian Edition - John Wiley + Sons PPT Presentation Summary : ... and why is it important? What is the difference between intrinsic and extrinsic motivation? How can teachers support students’ psychological needs? Source : http://www.wiley.com/college/odonnell/0470840323/ppt/ch10.ppt Motivating Middle School Students - Schoolwires PPT Presentation Summary : Motivating Middle School Students CAN it be done? YES!!!!! Break-out Questions: Working with the teachers at your table, discuss your answers to these questions. Source : http://gpisd.schoolwires.net/cms/lib01/TX01001872/Centricity/Domain/108/MotivatingMiddleSchoolStudents.ppt Motivating Students - Misericordia University PPT Presentation Summary : Intrinsic and extrinsic motivation. Aspects of motivation. How to motivate. ... Teacher Observations of the Middle School. Question. Percent of teachers responding. Source : http://users.misericordia.edu/ted121/seced/management/Mod%204-A%20Management.pptx Motivation and Discipline - Longwood University PPT Presentation Summary : Motivation and Discipline Teaching is a combination of instruction and order (Doyle) To maintain order, you need motivation and discipline Fewer discipline problems ... 
Source : http://www.longwood.edu/staff/colvinay/KINS%20378/Motivation_and_Discipline.ppt Motivation in the Math Classroom - Welcome to TASEL-M! PPT Presentation Summary : Strategies to Promote Motivation in the Mathematics Classroom TASEL-M August Institute 2006 Motivation in the Math Classroom In pairs discuss: What, ideally, does ... Source : http://taselm.fullerton.edu/august%20institute%20page/2006/Motivation%20in%20the%20Math%20Classroom.ppt Some Ideas for Motivating Students - Texas College PPT Presentation Summary : Teachers should spend more time explaining why we teach what we do, ... Students respond with interest and motivation to teachers who appear to be human and caring. Source : http://www.texascollege.edu/eTC/omason/CourseDocuments/powerpoints/1-30/Chapter%204%20Some%20Ideas%20for%20Motivating%20Students.ppt Motivation - Management Class PPT Presentation Summary : Motivation in Learning and ... Lessons for Teachers Emphasize students’ progress Make specific suggestions for improvement Stress connection between effort and ... Source : http://management-class.com/courseware/education/woolfolkppt10.ppt Teacher Quality and Incentives Research Project PPT Presentation Summary : Motivation Teacher costs represent the largest share of educational expenditure Teachers play a key role in school quality and student learning Attracting and ... Source : http://www.iadb.org/res/publications/pubfiles/pubP-462.ppt Motivation - Innovative Learning PPT Presentation Summary : Motivation IP&T 301 Suzy ... usefulness Meaningful learning Show progress Emphasize effort and the value of mistakes Self-determination Impact of Teachers ... Source : http://www.innovativelearning.com/educational_psychology/motivation/Motivation.ppt Motivation and Engagement - pc|mac PPT Presentation Summary : Motivation and Engagement Stoney M. Beavers Alabama Secondary Teacher of the Year 2006-2007 Overview What is motivation? What factors affect motivation? 
Source : http://images.pcmac.org/Uploads/BlountCounty/BlountCounty/Departments/Presentations/Motivation%20and%20Engagement.ppt Self-Determination Theory of Motivation - Faculty Development ... PPT Presentation Summary : Origin of Intrinsic Motivation Innate need to be competent and self-determining ... positive relationship with parents and teachers Self-Determination Theory of ... Source : http://fdc.webster.edu/wcr/education/EDUC3375WE/Support%20Materials/Week%208_Motivation/IntrinsicMotivation.ppt Presentation Summary : Motivation Motivation is defined simply as what causes people to behave as they do. all organizations need motivated employees and motivation is also critical to ... Source : http://faculty.ycp.edu/~sjacob/SOC%20310/powerpoint/Motivation.ppt Motivation Matters: Tools to Encourage Students to Become ... PPT Presentation Summary : Title: Motivation Matters: Tools to Encourage Students to Become Engaged Readers and Learners Author: jlrussell Last modified by: Denise Reid Created Date Source : http://castle.eiu.edu/reading/MotivationMatters.ppt Presentation Summary : Motivation Motivation A need or desire that energizes and directs behavior. Early Motivation Theories Motivation is based on our instincts: A behavior that is ... Source : http://staff.tuhsd.k12.az.us/kreeder/MotivationFull.ppt Motivation - Educational Psychology PPT Presentation Summary : Motivation Dr. K. A. Korb ... Physiological and safety needs A positive student-teacher relationship is an important condition for effective learning Teachers should ... Source : http://korbedpsych.com/LinkedFiles/677_04Motivation.ppt Motivation - Minnesota State University Moorhead PPT Presentation Summary : Motivation The Central Question in Psychology Motivation An internal state that arouses, directs, ... How were you motivated by your teachers? 
Source : http://web.mnstate.edu/smithb/Ed%20Psych%20Webs/motiva~1/Motivation_PP/Extr_Intr_Mot.ppt
If you find a copyrighted PowerPoint presentation, it is important to understand and respect the copyright rules of the author. Please do not download a presentation if it is copyrighted. If you find a presentation that is using one of your presentations without permission, contact us immediately at
{"url":"http://www.xpowerpoint.com/ppt/motivation-for-teachers.html","timestamp":"2014-04-24T20:58:24Z","content_type":null,"content_length":"23694","record_id":"<urn:uuid:5856453b-e666-41cb-9d8e-89c1efc99b44>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: What is the volume of the prism? • one year ago
{"url":"http://openstudy.com/updates/4fa83512e4b059b524f424fc","timestamp":"2014-04-18T03:36:36Z","content_type":null,"content_length":"35914","record_id":"<urn:uuid:52457d30-a05b-4309-be6e-3e951f168283>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Working up to a 1RM, ME day? [Archive] - WannaBeBig Bodybuilding and Weightlifting Forums
04-01-2010, 07:45 AM
I've been using Jim Wendler's 5/3/1 and I plan on switching things over to his 3-days-a-week Westside template. I'll be putting my SQ and DL on the same day and rotating exercises. My question is: when working up to a max effort on ME day, how many sets do you use to get there? I know that I'm supposed to have at least 3 lifts at or above 90%. So does that mean everything below 90% is considered a warmup?
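Read numerically, the question amounts to: given a ramp of percentages toward a projected max, which sets fall below the 90% threshold? A sketch with an assumed 1RM and an illustrative ramp (these percentages are examples, not Wendler's prescription):

```python
def classify_ramp(one_rm, percentages, threshold=0.90):
    """Split a ramp toward a max into warm-up sets (below the threshold
    fraction of 1RM) and work sets (at or above it). The ramp percentages
    passed in are illustrative, not a prescribed protocol."""
    warmups = [round(one_rm * p) for p in percentages if p < threshold]
    work = [round(one_rm * p) for p in percentages if p >= threshold]
    return warmups, work

# Hypothetical 400 lb squat max, ramping up in jumps:
warmups, work = classify_ramp(400, [0.40, 0.55, 0.70, 0.80, 0.90, 0.95, 1.00])
print(warmups)  # [160, 220, 280, 320]
print(work)     # [360, 380, 400]
```

On this reading, everything below 90% is a warm-up and the three sets at 90%, 95%, and 100% are the "at or above 90%" lifts.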
{"url":"http://www.wannabebig.com/forums/archive/index.php/t-135263.html","timestamp":"2014-04-21T05:48:20Z","content_type":null,"content_length":"11506","record_id":"<urn:uuid:6d4b4591-1556-440d-bf73-043151448228>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: spatial analysis - DISTANCE
From: David Torres <torresd@umich.edu>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: spatial analysis - DISTANCE
Date: Sun, 30 Aug 2009 11:40:07 -0400

I copied and pasted your example, Scott, and got a couple of error messages about the N matrix not being found. I installed the moremata files, but this didn't solve the problem. I'm not quite as adept at using Stata as some, so please bear with me. I've never used a Mata command, so I would like to know whether its use will allow for variable creation where the unitids can be stored and reshaped and all that jazz?

David
Diego Torres, MA (Sociology)

Quoting Scott Merryman <scott.merryman@gmail.com>:
On Sun, Aug 30, 2009 at 9:15 AM, David Torres <torresd@umich.edu> wrote:
Austin,
I've looked over the code you offered and it seems that what I'm getting is just the xy coordinates rather than what I need. There's got to be a way, somehow, to have Stata return a list of all the unitids that are within the specified radius. When I run the code I have, all I get is a return of the unitid of the nearest school to my 65,000 tracts. However, within the rad10 variable, several tracts have multiple schools within my radius of 10 miles. I need to get all of these unitids regardless of whether I end up with wide or long data; I can always reshape later.
How about something like this:

set obs 9
gen id = _n
set seed 1234
gen latitude = uniform()/2
gen longitude = uniform()/2

mata:
// read the coordinates into Mata and convert degrees to radians
X = st_data(., ("latitude", "longitude"))
X = X*(pi()/180)
// pairwise great-circle (haversine) distances, converted to miles
dist = J(rows(X), rows(X), 0)
for (i = 1; i <= rows(X); i++) {
	for (j = 1; j <= rows(X); j++) {
		dist[i,j] = (1/1.609)*6372.795*(2*asin(sqrt( sin((X[i,1] ///
			- X[j,1])/2)^2 + cos(X[i,1])*cos(X[j,1])*sin((X[i,2] ///
			- X[j,2])/2)^2 )))
	}
}
// 0/1 indicator matrix for pairs within 10 miles (-mm_cond- is from moremata)
N = mm_cond(dist :<= J(rows(dist), cols(dist), 10), ///
	J(rows(dist), cols(dist), 1), J(rows(dist), cols(dist), 0))
st_matrix("N", N)
end

svmat N

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
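The loop in the Mata code above is the haversine great-circle formula, using 6372.795 km as the Earth's radius and 1/1.609 to convert kilometers to miles, followed by a 0/1 flag for pairs within 10 miles. For reference, the same computation in Python (function names and sample coordinates are ours):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two points given in radians,
    using the same Earth radius (6372.795 km) and km-to-mile factor
    (1/1.609) as the Mata code."""
    a = (math.sin((lat1 - lat2) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon1 - lon2) / 2) ** 2)
    return (1 / 1.609) * 6372.795 * 2 * math.asin(math.sqrt(a))

def within_radius(points, radius=10.0):
    """0/1 matrix flagging pairs of (latitude, longitude) points, in
    radians, that lie within `radius` miles of each other."""
    return [[1 if haversine_miles(*p, *q) <= radius else 0 for q in points]
            for p in points]

# Two nearby points and one far away (coordinates in radians):
pts = [(0.40, 0.40), (0.4001, 0.4001), (0.45, 0.45)]
print(within_radius(pts))  # [[1, 1, 0], [1, 1, 0], [0, 0, 1]]
```

As in the Mata version, the diagonal is all ones (every point is within any radius of itself), so for David's use case the self-matches would need to be dropped or the id excluded.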
{"url":"http://www.stata.com/statalist/archive/2009-08/msg01529.html","timestamp":"2014-04-17T21:47:05Z","content_type":null,"content_length":"9190","record_id":"<urn:uuid:47e4e95d-89f9-46ec-bced-c46d2ed8bddb>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
George Klir Distinguished Professor Emeritus E-mail: gklir@binghamton.edu Short Bio.: George J. Klir is currently a Distinguished Professor Emeritus of Systems Science at Binghamton University, State University of New York. He has been with Binghamton since 1969. His earlier work was in the areas of systems modeling and simulation, logic design, computer architecture and discrete mathematics. His current research interests include the areas of intelligent systems, generalized information theory, fuzzy set theory and fuzzy logic, theory of generalized measures and soft computing. He is the author of over 300 articles and 16 books. He has also edited 10 books and has been editor of the International Journal of General Systems since 1974 and the International Book Series on Systems Science and Systems Engineering since 1985. He was president of SGSR (1981-82), IFSR (1980-84), NAFIPS (1988-1991) and IFSA (1993-1995). He is a fellow of IEEE and IFSA, and has received numerous awards and honors, including five honorary doctoral degrees, the Gold Medal of Bernard Bolzano, the Lotfi A. Zadeh Best Paper Award, the Kaufmann's Gold Medal, the SUNY Chancellor's Award for Excellence in Research and the IFSA Award for Outstanding Achievement. His biography is included in many biographical sources, including Who's Who in America, Who's Who in the World, American Men and Women of Science, Outstanding Educators of America, Contemporary Authors, etc. His research has been supported for more than 20 years by grants from the NSF, ONR, Air Force, NASA, NATO, Sandia Laboratories and some industries. • PhD, Computer Science, Czechoslovak Academy of Sciences, Prague, Czechoslovakia, 1964 • MS, Czech Technical University, Prague, Czechoslovakia, Electrical Engineering, 1957 Principal Books: 2006: Uncertainty and Information: Foundations of Generalized Information Theory, John Wiley, Hoboken, NJ. 
2000: Fuzzy Sets: An Overview of Fundamentals and Personal Views, Beijing Normal University Press, Beijing. 1998: Uncertainty-Based Information: Elements of Generalized Information Theory, Physica Verlag/Springer Verlag, Heidelberg and New York (with M Wierman). Second Edition: Physica-Verlag/Springer Verlag, Heidelberg and New York, 1999. 1997: Fuzzy Set Theory: Foundations and Applications, Prentice Hall PTR, Upper Saddle River, NJ (with U. St. Clair and B. Yuan). 1995: Fuzzy Sets and Fuzzy Logic: Theory and Applications, Prentice Hall PTR, Upper Saddle River, NJ (with B. Yuan). 1992: Fuzzy Measure Theory, Plenum Press, New York (with Z. Wang). 1991: Facets of Systems Science, Plenum Press, New York. Second Edition: Kluwer/Plenum, New York, 2001. 1988: Fuzzy Sets, Uncertainty and Information, Prentice Hall PTR, Englewood Cliffs, NJ (with T. Folger). Japanese Translation: UNI, Tokyo, 1993. 1985: Architecture of Systems Problem Solving, Plenum Press, New York. Russian Translation: Radio i Sviaz, 1990. Second Edition: Kluwer/Plenum, New York, 2003 (with D. Elias). 1972: Introduction to the Methodology of Switching Circuits, Van Nostrand Reinhold, New York. 1969: An Approach to General Systems Theory, Van Nostrand Reinhold, New York. Spanish Translation: Ediciones Ice, Madrid, 1980. 1966: Synthesis of Switching Circuits, SNTL, Prague, in Czech (with L. Seidl). British Edition: Iliffe, London, 1968. American Edition: Gordon and Breach, 1968. 1965: Cybernetic Modelling, SNTL, Prague, in Czech (with M. Valach). British Edition: Iliffe, London, 1967. American Edition: Van Nostrand, Princeton, NJ, 1967. Principal Edited Books: 2004: Fuzzy Logic in Geology, Academic Press/Elsevier, San Diego, CA (with R. V. Demicco). 1996: Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers by Lotfi A. Zadeh, World Scientific, Singapore (with B. Yuan). 1996: Computer-Aided Theory and Technology, Springer-Verlag, Berlin and New York (with T. Oren). 
1979: Methodology in Systems Modelling and Simulation, North-Holland, Amsterdam (with B. P. Zeigler, M. S. Elzas, and T. I. Oren). 1978: Applied General Systems Research, Plenum Press, New York. 1972: Trends in General Systems Theory, John Wiley, New York. Polish Translation: Wydawictwa naukovo-techniczne, Warsaw, 1976. Spanish Translation: Alianza Editorial, Madrid, 1978. Selected Papers(1994-2005) • "Uncertainty and information: Emergence of vast new territories", in Systemics of Emergence: Research and Development, edited by G. Minati, E. Pessa, and M. Abram. Springer, New York, pp. 3-28. • "Applying fuzzy measures and nonlinear integrals in data mining", Fuzzy Sets and Systems, 156(3), pp. 371-380 (with Z. Wang and K.-S. Leung). • "Generalized information theory: aims, results, and open problems," Reliability Engineering and Systems Safety 83(1-3), pp.21-38. • "The role of fuzzy logic in sedimentology and stratigraphic models." In: Soft Computing and Intelligent Systems data Analysis and Exploration, ed. by Nikravesh, M., et al., Elsevier, Amsterdam, pp. 189-218 (with R. Demicco and R. Belohlavek). • "Basic issues of computing with granular probabilities." In: Data Mining, Rough Sets and Granular Computing, ed. by T.Y. Lin, Y.Y Yao, and L.A. Zadeh, Springer-Verlag, New York, pp.339-349. • "Systems science." Encyclopedia of Information Systems, Academic Press, San Diego, pp. 391-401. • "Uncertainty." Encyclopedia of Information Systems, Academic Press, San Diego, pp. 511-521. • "Intelligent path planning of two cooperating robots based on fuzzy logic." Intern. J. of General Systems, 31(4), pp. 359-376 (with Y.T. Kim et al.). • "Uncertainty in economics: The heritage of G.L.S. Shackle." Fuzzy Economic Review, VII(2), pp. 3-21. • "On the capability of fuzzy set theory to represent concepts." Intern. J. of General Systems, 31(6), pp. 569-585 (with R. Belohlavek, H.W. Lewis III, and E. Way). 
• "Foundations of fuzzy set theory and fuzzy logic: a historical overview," Intern. J. of General Systems, 30(2), pp. 91-132. • "The role of uncertainty in systems modeling," In: H.S.Sarjoughian and F.E.Cellier (eds.), Discrete Event Modeling and Simulation Technologies: A Tapestry of Systems and AI-Based Theories and Methodologies, Springer-Verlag, New York, pp. 53-74. • "On measuring uncertainty and uncertainty-based information," Annals of Mathematics and Artificial Intelligence, 23(1), pp. 5-33 (with R.M. Smith). • "Stratigraphic simulations using fuzzy logic to model sediment dispersal," J. of Petroleum Science & Engineering, 31, pp. 135-155 (with R.V. Demicco). • "Optimal redundancy management in reconfigurable control systems based on normalized nonspecificity," Intern. J. of Systems Science, 31(6), pp. 797-808 (with Eva Wu). • "Measures of uncertainty and information." In: D. Dubois and H. Prade (eds.), Fundamentals of Fuzzy Sets, Kluwer, Boston, pp. 439-457. • "Uncertainty-Based Information: A Critical Review." In: Discovering the World With Fuzzy Logic, ed. by V.Novak and I.Perfieva Springer-Verlag, New York, pp. 29-53. • "On fuzzy-set interpretation of possibility theory," Fuzzy Sets and Systems, 103(3), pp. 263-273. • "On the complementarity of systems sciences and classical sciences." In: Toward New Paradigm in Systems Science, ed. by Y. P. Rhee Seoul National Univ. Press, Seoul, pp. 85-101. • "Conceptual foundations of quantum mechanics: the role of evidence theory, quantum sets, and modal logic," Intern. J. of Modern Physics, B 10(1), pp. 29-62, (with G. Resconi and E. Pessa). • "A design condition for incorporating human judgment into monitoring systems," Reliability Engineering and System Safety, 65, pp. 251-258 (with K. Tanaka). • "Constrained fuzzy arithmetic: Basic questions and some answers," Soft Computing, 2(2), pp.100-108 (with Y. Pan). • "Genetic algorithms for determining fuzzy measures from data," J. 
of Intelligent and Fuzzy Systems, 6(1), pp. 171-183 (with W. Wang and Z. Wang). • "Generative archetypes and taxa: a fuzzy set formalization," Rivista di Biologia/ Biology Forum, 91, pp. 403-424 (with R. von Sternberg). • "Fuzzy arithmetic with requisite constraints," Fuzzy Sets and Systems, 91(2), pp. 165-175. • "Constructing fuzzy measures in expert systems," Fuzzy Sets and Systems, 92(2), pp. 251-264 (with Z. Wang and D. Harmanec). • "The role of constrained fuzzy arithmetic in engineering." In: Uncertainty Analysis in Engineering and Sciences, ed. by Ayyub, B. M., Kluwer, Boston, pp. 1-20. • "Data-driven identification of key variables." In: Intelligent Hybrid Systems, ed. by Ruan, D. , Kluwer, Boston, pp. 161-187 (with Bo Yuan). • "Bayesian inference based on interval-valued prior probabilities and likelihoods," J. of Intelligent and Fuzzy Systems, 5(3), pp.193-203 (with Y. Pan). • "Choquet integrals and natural extensions of lower probabilities," Intern. J. of Approximate Reasoning, 16(2), pp.137-147 (with Z. Wang). • "From classical mathematics to fuzzy mathematics: emergence of a new paradigm for theoretical science." In: Fuzzy Logic in Chemistry, Academic Press, San Diego, pp. 31-63. • "PFB-integrals and PFA-integrals with respect to monotone set function," Intern. J. of Uncertainty, Fuzziness, and Knowledge-Based Systems, 5(2), pp. 163-175 (with Z. Wang). • "Fuzzy measures defined by fuzzy integral and their absolute continuity," J. of Mathematical Analysis and Applications, (1), pp.150-165 (with Z. Wang & W. Wang). • "Monotone set functions defined by Choquet integral," Fuzzy Sets and Systems, 81(2), pp.241-250 (with Z. Wang and W. Wang). • "Constructing fuzzy measures by transformations," J. of Fuzzy Mathematics, 4(l), pp. 207-215 (with Z. Wang and W. Wang). • "Epistemological categories of systems: an overview and mathematical formulation," Intern. J. of General Systems, 24(1-2), pp. 207-224 (with I. Rozehnal). 
• "Modal logic interpretation of Dempster-Shafer theory: an infinite case," Intern. J. of Approximate Reasoning, 14(2-3), pp. 81-93 (with D. Harmanec and Z.Wang). • "From classical sets to fuzzy sets: a grand paradigm shift." In: Advances in Fuzzy Theory and Technology, III, ed. by P.P. Wang, Duke Univ., Durham, NC, pp. 5-30. • "Uncertainty as a resource for managing complexity." In: From Statistical Physics to Statistical Inference and Back, ed. by P. Grassberger and J.-P. Nadal, Kluwer, Boston, pp.139-153. • "Multivalued logics versus modal logics: alternative frameworks for uncertainty modelling." In: Advances in Fuzzy Theory and Technology, II, ed. by P.P. Wang, Bookwrights Press, Durham, North Carolina (Lotfi A. Zadeh Award for the Best Paper in 1993), pp. 3-47. • "On modal logic interpretation of possibility theory," Intern. J. of Uncertainty,. Fuzziness and Knowledge-Based Systems, 2(2), pp. 237-245 (with D. Harmanec). • "On modal logic interpretation of Dempster-Shafer theory of evidence," Intern. J. of Intelligent Systems, 9(10), pp. 941-951 (with D. Harmanec and G. Resconi).
What Do Other People Want To Be? In this lesson, students will graph people's job choices and identify which jobs would have the most competition based on the data. Economic Freedom, Human Resources, Labor • Be able to graph and answer questions based on data showing people's job choices. • Be able to list and/or verbally communicate at least three reasons for working. In this lesson, students will graph people's job choices and identify what goods and services each job provides. The graphing in this lesson is simple graphing done via interactive activities and assumes that students are familiar with the concept of graphing. An explanation of graphing is available for teachers to paraphrase. This lesson follows the What Do You Want to Be? lesson and builds on some of the decisions students made in that lesson, although it can also stand alone. • Kids' Zone: Learning with NCES. This website helps students create different types of graphs and provides an explanation of graphing for teachers to paraphrase. • What Do You Want to Be?: In this lesson, students have the opportunity to explore various jobs and decide what they might want to be when they grow up through an interactive activity. The What Do You Want to Be? lesson provides some fundamental ideas that are built upon in this lesson. • Interactive Graphing Activity: Students can use this interactive activity to display their survey results. Activity 1, Part A. Survey students based on what they decided in the What Do You Want to Be? lesson, or conduct a survey in class by asking students the following question: 1. What do you want to be when you grow up? Record the results on the board. Have students complete the interactive graphing activity to display the results. Guide to the interactive graphing activity: If you used the jobs from the "What Do You Want to Be?"
lesson, students will only need to enter the values under the "Number of People" column. If you elected to do a new survey in your class, you'll need to total the results from the class and instruct students to enter the top five jobs and the value for each on the first page of the activity. The maximum value for each category is 10, so you may need to explain that limitation if more than 10 is reported for any one job. The survey button that is displayed on the first page generates random numbers for each of the jobs. You can use this feature if you are unable to conduct a survey in your class. Once data has been entered or generated, students should create a bar graph by dragging each of the bars into the correct position based on their data. They may check their results using the "check" button in the lower right corner. They will not be able to move on until all of the bars are in the correct positions. To complete this activity, students may print a copy of their graph and answer the questions on the print out. Discussion items: Ask students to think about and answer the following questions: 1. Why did you choose the job you did? 2. Why do we need people to perform that particular job? List the students' answers on the board as you discuss them, focusing on the reasons that people work. Help the students generalize about why people work based on the responses you receive. Discuss with students the various jobs that were listed and why people may have been more interested in some jobs over others. Part B. Tell students that you are going to have them survey adults to find out what job they do, why they work, and why they chose that particular job. Have students ask their parents/guardians or any adult the following questions and report back to the class the following day. 1. What is your job? 2. Why did you choose that job? 3. Why do people need to perform that particular job?
As students report their findings back to the class for question #1, list the various jobs that students report on the board. Make tally marks next to each job that is repeated. Again, have students complete the interactive graphing activity. Guide to the interactive graphing activity: This time students will need to enter both the "Jobs" and the "Number of People" based on the survey data that was collected. Inform students that they will be creating a graph that displays only the top five jobs that were listed. Have students type in the top five jobs in the fields under the "Jobs" heading. Once they've added the jobs, you can have them enter the number of people based on the tally marks from the board. The maximum value for each category is 10, so you may need to explain that limitation if more than 10 is reported for any one job. To complete this activity, students may print a copy of their graph and answer the questions on the print out. Discussion items: Continue to list the students' answers on the board for questions #2 and #3 based on their survey. Again, help the students generalize about why people work based on the responses you receive. Compare the survey results with the reasons students listed for choosing a specific job. Did students have reasons similar to the adults' reasons? What main differences were there? What reasons did the adults give that students had left out, and which reasons did students think were important? Do you think your reasons for choosing a job might change when you get older? Have students list or verbally communicate at least three reasons for working. 1. Have students look at all the jobs that were listed and see if they can come up with one good and/or service that each provides. 2. Have students see if they can come up with the "Top 3" reasons that people work based on the survey. • "Good activity!" • "This is a good activity!" • "Thanks a lot for this lesson plan.
It surely helps me in teaching this lesson to my ESL students." • Review from EconEdReviews.org: Really good lesson! Very fun too! "I think this lesson is really fun for students! It's a great lesson to teach students about competition and the job market--all in one!!" Add a Review
A topological algorithm for identification of structural domains of proteins BMC Bioinformatics. 2007; 8: 237. Identification of the structural domains of proteins is important for our understanding of the organizational principles and mechanisms of protein folding, and for insights into protein function and evolution. Algorithmic methods of dissecting proteins of known structure into domains developed so far are based on an examination of multiple geometrical, physical and topological features. Successful as many of these approaches are, they employ a lot of heuristics, and it is not clear whether they illuminate any deep underlying principles of protein domain organization. Other well-performing domain dissection methods rely on comparative sequence analysis. These methods are applicable to sequences with known and unknown structure alike, and their success highlights a fundamental principle of protein modularity, but this does not directly improve our understanding of protein spatial structure. We present a novel graph-theoretical algorithm for the identification of domains in proteins with known three-dimensional structure. We represent the protein structure as an undirected, unweighted and unlabeled graph whose nodes correspond to the secondary structure elements and edges represent physical proximity of at least one pair of alpha carbon atoms from two elements. Domains are identified as constrained partitions of the graph, corresponding to sets of vertices obtained by the maximization of the cycle distributions found in the graph. When a partition is found, the algorithm is iteratively applied to each of the resulting subgraphs. The decision to accept or reject a tentative cut position is based on a specific classifier.
The algorithm is applied iteratively to each of the resulting subgraphs and terminates automatically if partitions are no longer accepted. The distribution of cycles is the only type of information on which the decision about protein dissection is based. Despite the bare-bones simplicity of the approach, our algorithm approaches the best heuristic algorithms in accuracy. Our graph-theoretical algorithm uses only topological information present in the protein structure itself to find the domains and does not rely on any geometrical or physical information about the protein molecule. Perhaps unexpectedly, these drastic constraints on resources, which result in a seemingly approximate description of protein structures and leave only a handful of parameters available for analysis, do not lead to any significant deterioration of algorithm accuracy. It appears that protein structures can be rigorously treated as topological rather than geometrical objects and that the majority of information about protein domains can be inferred from the coarse-grained measure of pairwise proximity between secondary structure elements. Investigation of the structural organization of proteins is important for our understanding of the mechanisms of protein folding and function, and for insights into protein evolution. Direct determination of protein structures [1,2] and comparative sequence analysis [3,4] indicate that proteins have a modular structure, i.e., a polypeptide chain may consist of several regions that can fold independently and be inherited as discrete sequence fragments, which recombine to produce novel sequence and spatial architectures. This level of protein organization is called domain [5-7]. The notion of a structural domain of a protein may be associated with its physical compactness and thermodynamic stability when excised or expressed independently of other domains [8]. A formal definition of a domain, however, is still an outstanding problem.
Several attempts have been made to identify the structural domains of proteins. The most straightforward approach is based on visual inspection of a structure by a human expert. However, this approach is difficult to formalize, and therefore it is not easily applicable for analysis of large data sets. Another approach is to employ comparative sequence analysis. This method benefits from the vast collection of sequences from diverse organisms and the high sensitivity of database search and protein sequence alignment. The shortcomings of this method are that, first, it relies on sequence similarity and thus is not applicable when the homologous sequences are not known; second, that the problem of defining the exact borders of sequence domains is itself difficult [9-11]; and third, that many sequence rearrangements, such as permutations, are hard to detect by these methods. Currently, the best results in protein domain dissection are produced by joint application of sequence analysis and examination of structure when it is available. The authoritative databases of structural domains, such as SCOP [12] and CATH [13], are populated in that manner. A distinct category of approaches comprises fully automated methods [14], which define structural domains based on various algorithmic ideas. In the rest of this paper, we will restrict our discussion to those algorithms that operate at the level of the known three-dimensional structures rather than sequences. One example of such an approach is the work of Taylor [15]. He applied a Potts model [16] (Taylor describes his formalism as the Ising model; however, his spin variables can have more than two states, which is known in statistical physics as a Potts model [17,18]) by representing a protein structure as an undirected, weighted graph whose nodes correspond to the amino acid residues and the weights of the edges are a function of the spatial distance between residues.
Spin-like variables are assigned to each node, and the domains of a protein are dynamically obtained as converging patterns of these variables. Another program, DomainParser [19,20], utilizes the Ford-Fulkerson algorithm [21], a graph-theoretical method for finding a minimum-weight set of edge cuts that separates a graph into two partitions. DomainParser appears to be the most accurate automated method of protein dissection into structural domains [8]. In both these cases, however, the core formalisms of these approaches (Potts model and Ford-Fulkerson algorithm, respectively) need to be supplemented with several additional heuristics for adequate performance. For example, the Potts model needs additional rules to, e.g., reassign small domains, keep β-sheets intact and reclaim short loops [15], and DomainParser uses heuristics about, e.g., the size and compactness of domains and the interface between and segments within domains [19], to mention just a few. We want to emphasize that none of these additional rules can be derived from the utilized formalism (Potts model or Ford-Fulkerson algorithm) but needs to be introduced ad hoc. Furthermore, these rules do not follow the spirit of the utilized method; that is, they are not related to correlations between time series or graph-theoretical methods at all but are conceptually completely different. In this article, we present a novel algorithm for the automated identification of domains in a protein of a known three-dimensional structure. Our approach is based on ideas from graph theory. First, we represent a protein as an undirected, unweighted and unlabeled graph, which we call a protein graph. The vertices of a protein graph represent secondary structure elements. Two vertices are connected by an edge if the spatial distance between the corresponding secondary structure elements is below a certain threshold, and every pair of consecutive elements is connected by an edge ('backbone connection') by definition.
Second, we determine all cycles up to a predefined length within this graph. A constrained partitioning of the vertices of the graph into two subsets results in two different types of cycles, pure and mixed cycles. The 'pure' type contains vertices from only one partition, whereas the 'mixed' type contains vertices from both partitions. Hence, there are three disjoint subsets, one for 'mixed' and two for 'pure' cycles. We examine cycle distributions induced by removal of each backbone connection in turn, and select the constrained partitioning of vertices that mutually maximizes the cycle distributions, thereby defining a tentative cut position along the backbone of the protein. Third, the decision to accept or reject the tentative cut position is obtained by using a special classifier. If the tentative cut position is accepted, the protein graph is split, and the three-step procedure is repeated with each of the two resulting subgraphs. Our algorithm stops automatically if the tentative cut positions are no longer accepted, and it does not rely on prior information about the number of domains. Traditionally, approaches involving graphs use the C_α atoms of residues [15,19,20] as the coarse-grained level of description, and employ weighted [15] or even weighted and directed graphs [19,20]. One novel idea of our algorithm is to partition the graph on the basis of the cycle distributions. Another novelty is the representation of a protein as an unweighted, undirected and unlabeled graph whose vertices correspond to secondary structure elements. The representation that we employ is simple, and it does not take into account a wealth of additional information available in the protein structure data files, such as position and interactions of amino acid side chains, interactions with ligand and solvent, inherent disorder, and so on.
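The cycle-based cut selection can be illustrated on a toy graph. The Python sketch below is our own minimal illustration, not the authors' implementation: it enumerates simple cycles up to a maximal length and, for each backbone cut position, computes the fraction of cycles that stay 'pure' (confined to one side of the cut). The paper's actual mutual-maximization objective and the acceptance classifier are more elaborate than this single score.

```python
def simple_cycles(adj, max_len):
    """Enumerate node sets of simple cycles up to length max_len.

    adj: dict mapping node -> set of neighbour nodes (undirected).
    Each cycle is reported once, as a frozenset of its nodes
    (a simplification that suffices for this illustration).
    """
    cycles = set()

    def dfs(start, node, path):
        for nxt in adj[node]:
            if nxt == start and len(path) >= 3:
                cycles.add(frozenset(path))       # closed a cycle
            elif nxt not in path and len(path) < max_len:
                dfs(start, nxt, path + [nxt])     # extend the path

    for v in adj:
        dfs(v, v, [v])
    return cycles

def pure_fraction(cycles, cut):
    """Fraction of cycles confined to one side of a backbone cut.

    The cut places elements 1..cut on the left and the rest on the
    right; a cycle is 'pure' if all its nodes lie on one side.
    """
    left = set(range(1, cut + 1))
    pure = sum(1 for c in cycles if c <= left or c.isdisjoint(left))
    return pure / len(cycles) if cycles else 0.0

# Toy graph: two triangles (elements 1-3 and 4-6) joined only by the
# backbone edge 3-4 -- a caricature of a two-domain protein.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
       4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
cycles = simple_cycles(adj, max_len=6)
best_cut = max(range(1, 6), key=lambda i: pure_fraction(cycles, i))
```

On this toy graph the two triangles are the only cycles, and the cut after element 3 makes both of them pure, so it is selected as the tentative cut position.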
It was not the intention of this work to gain a few percentage points on the already quite high average accuracy enjoyed by the methods that use all this information. Rather, we were interested to see how far we could get in protein domain dissection if we applied a more rigorous algorithmic framework that relies on a topological point of view and requires only a small number of assumptions. Perhaps surprisingly, our algorithm's average performance was comparable with all but the most advanced methods of heuristic domain dissection. The implications of this high achievement of a simple approach for our understanding of protein domain organization are discussed at the end of this work. Results and discussions We selected 2781 proteins from the ASTRAL database [22], among which no pair shares more than 30% sequence similarity. We randomly split this list of proteins into a training set and a test set. The training set consists of 910 and the test set of 1871 proteins. Random selection of the two sets was repeated several times, and the results were quantitatively very similar in all cases, indicating that both sets were sufficiently large to be statistically sound and that more involved tests, such as cross-validation, were not necessary. Parameter optimization Algorithm 2 depends on the following parameters: the maximal cycle length L, the spatial distance threshold Θ, and the parameters α of the logical decision function. First, we determine the optimal value of Θ for a fixed value of L = 11; then we investigate the influence of L. We use a training set consisting of 571 one-domain and 153 two-domain proteins to determine the parameters of the decision function D_α for the first cut. This assumption simplifies the numerical simulations while being applicable to more than 90% of all proteins with known structure. The function rose steeply to a maximum at Θ = 6.2Å, followed by a slight decline, if any, out to at least Θ = 8.0Å.
This order of distance between the C_α atoms seems to be close to the average distance between the backbones of secondary structure elements (often approximated as on the order of 5Å between beta-strands in a sheet and 10Å between helices in an alpha-helical layer [23]) and gives ample opportunity for various sorts of interactions between amino acid side chains. Analysis of different values of L gave qualitatively similar results. We next investigated the influence of the maximal cycle length L on the performance of our algorithm for the optimal Θ = 6.2Å. In Fig. 1, the histograms for all proteins in our test set are shown. There is an increase in the mean number of secondary structure elements going from one-domain to four-domain proteins, but, notably, even some one-domain proteins consist of more than 80 secondary structure elements and, hence, give a very large protein graph. Histograms for the number of secondary structure elements (#SSE) in a protein graph. Top, left: One-domain proteins. Top, right: Two-domain proteins. Bottom, left: Three-domain proteins. Bottom, right: Four-domain proteins. Most of our graphs have more than 30 nodes. The determination of the cycles in a graph is an NP-complete problem [24], and simulations show that determination of all cycles up to the maximal possible length in graphs of this size is computationally prohibitive. For this reason, we would like to restrict the maximal cycle length L. In practice, cycles found in a protein graph tend to contain only a subset of nodes, which is considerably smaller than the total number of nodes in the graph. The protein graph of 1A79 is shown as an example in Fig. 2. 1A79 is a two-domain protein that consists of 30 secondary structure elements, but the longest cycles we found for 1A79 had L = 16. The maximum of E_obj is the same for even shorter cycles, down to L = 4. Thus, even an L of intermediate size appears to be sufficient to see the domain signature in an extremum of E_obj.
We found that L = 11 gives a good compromise between similarity to the case of L_max (as, e.g., shown in Fig. 2) and the execution time of the program. For example, with L = 11 and Θ = 6.2Å it takes about 12 hours to determine all cycles for all one-domain proteins in our test set using 15 computers with four 3.4 GHz processors each. We remark that not only the number of nodes but also the number of edges in a graph influences the resulting number of cycles. That is, there exist graphs with the same number of vertices but a higher connection density for which L = 11 would be impractical. This implies that the concrete value chosen for L is not universal in the sense that we can use it for any possible graph to determine the cycles; rather, it is a characteristic of a graph class. Finally, for Θ = 6.2Å and L = 11 we determine the parameters α of the decision function for the first, second and third cut separately. The first cut separates one-domain and two-domain proteins, the second cut separates two-domain and three-domain proteins and the third cut separates three-domain and four-domain proteins. Objective function E_obj for 1A79 chain A. The color corresponds to different values of the maximum cycle length L. Black: L = 16, blue: L = 11, red: L = 6 and green: L = 4. Results for multi-domain proteins We used the test set consisting of 1871 proteins, which contained one, two, three, or four domains (accounting for 74.9%, 18.8%, 5.4% and 0.9% of all proteins, respectively) and the optimized parameters of our algorithm found from the training set. To evaluate the performance of our algorithm, we applied the error measure P suggested in Jones et al. [25], which determines the overlap of the assigned domains and the predicted domains or, more precisely, the overlap of residues in these domains.
P is defined by

P = (1/L_r) [ min{r_1^1, r_2^1} + (L_r - max{r_1^(d-1), r_2^(d-1)}) + sum_(i=1)^(d-2) ( min{r_1^(i+1), r_2^(i+1)} - max{r_1^i, r_2^i} ) ]

Here L_r is the number of residues of a protein, and r_1^i and r_2^i are the assigned and predicted cut positions for the d-domain protein. Our level of description is at the level of secondary structure elements, which leads to an inevitable error on the order of half the number of residues of a secondary structure element divided by the total number of residues. As in [8], if P > 0.75 we view the prediction as right, otherwise as wrong. The prediction is also wrong if the number of domains is different from the number of domains assigned by SCOP, even if the overlap of the remaining domains is larger than 75%. The summary results of our study are shown in Table 1. The accuracy of our domain prediction was 84.9% for one-domain, 63.4% for two-domain, 30.7% for three-domain and 22.2% for four-domain proteins. This gives an overall prediction accuracy of 77.3%. Table 2 compares our results with the results from DomainParser. In Fig. 3 we show the differences in the predicted and assigned cut positions for the two-domain and three-domain proteins with correctly assigned number of domains. It is apparent that there are only very few cut positions which are extremely inaccurate. Most predicted cut positions are within ± 5 secondary structure elements and have a domain overlap larger than P > 0.8.
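The overlap measure of Jones et al. can be sketched as follows (a minimal Python sketch; the function and variable names are ours, and the cut positions are taken as sorted residue indices):

```python
def domain_overlap(n_residues, assigned_cuts, predicted_cuts):
    """Overlap P between an assigned and a predicted domain partition.

    n_residues: total number of residues L_r. assigned_cuts and
    predicted_cuts: the d - 1 cut positions (residue indices) of a
    d-domain protein; both lists must have the same length.
    """
    r1, r2 = sorted(assigned_cuts), sorted(predicted_cuts)
    d = len(r1) + 1
    # Overlap of the first domain plus overlap of the last domain ...
    overlap = min(r1[0], r2[0]) + (n_residues - max(r1[-1], r2[-1]))
    # ... plus the overlaps of the d - 2 middle domains.
    for i in range(d - 2):
        overlap += min(r1[i + 1], r2[i + 1]) - max(r1[i], r2[i])
    return overlap / n_residues
```

For example, for a two-domain protein of 300 residues with the assigned cut at residue 100 and the predicted cut at residue 110, the domains overlap in 100 + 190 = 290 residues, so P = 290/300 ≈ 0.97, above the 0.75 acceptance threshold.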
Results for the test proteins Comparison of our results with the results from DomainParser ([19] old version, [20] new version) Top row: Results for two-domain proteins. Left: Histogram of the differences from the predicted to the assigned cut position by SCOP. Right: Histogram of the overlap values. Bottom row: Results for three-domain proteins. Left: Histogram of the differences ... In Fig. 4 we show the normalized objective functions for one three-domain protein (i.e., the original E_obj divided by its maximum value). Notably, the distance from the maximum of E_obj to the next highest value is usually modest, indicating that the decision to cut or not to cut in such a case is non-trivial. In the case of three-domain or four-domain proteins, the second highest peak of the objective function upon the first cut may not be an indication of the second cut, as can be seen for 1L8A, where the second peak of the first-cut function is at i = 79, whereas the correct second cut is i = 86. This implies that in general our approach does not allow a shortcut in the partitioning of the graph. Normalized objective function E_obj for 1L8A (left, first cut) and 1L8A (right, second cut). We also examined trends in structure and fold classes of the proteins that were accurately dissected by DomainICA and those that did not behave well. Among the 235 proteins with correctly assigned number of domains, 16% of the domains were from the SCOP class of all-alpha proteins, 20% from all-beta, 38% from alpha/beta, and 26% from alpha+beta classes. For the 90(25) proteins which were undercut (overcut), 36(24)% were from the all-alpha class, 11(38)% from all-beta, 29(18)% from alpha/beta and 24(20)% from the alpha+beta class.
The fractions of correctly partitioned two-domain proteins were 53% for proteins with at least one all-alpha domain, 71% for the proteins containing at least one all-beta domain, 74% for the proteins with at least one alpha/beta domain and 70% for those with at least one alpha+beta domain. Apparently, proteins with domains from the all-alpha class are more prone to erroneous partitioning than proteins that contain at least one beta-sheet. One possible explanation for this may have to do with the mean number of contacts in which a secondary structure element participates. A helix has on average 1.9 contacts to other secondary structure elements, whereas a strand has 2.5, not including self-contacts and multiple contacts between the same secondary structure elements (recall that our algorithm does not use this information; the trend, however, is the same if multiple contacts are also considered). Thus, all-alpha domains are less connected on average, and the number of cycles in these graphs is smaller compared to graphs that represent proteins with beta-sheets. The other factor may be the generally larger distance between packed helices than between strands in a beta-sheet, which makes Θ = 6.2Å an adequate average but too small a value to deal with the specific case of all-alpha proteins. Correction for this latter factor should be easy to incorporate into the automated algorithm, as it only requires a measure of the preponderance of alpha-helices in the structure; the former factor, i.e., contact density, is not so easily taken into account by our framework. Proteins that include discontinuous domains, where one domain is inserted into another, pose additional problems for our algorithm, because, for example, a two-domain protein would need two cuts instead of one and an additional step of fragment merging.
Many such two-domain proteins, however, can be partitioned "almost correctly" if one part of the discontinuous fragment is much shorter than the other, and, hence, one correct cut prediction is sufficient to fulfill P > 0.75. Among the 350 two-domain proteins that we examined, 28 had discontinuous domains, 14 of which were assigned correctly. We finish this results section by discussing several examples that illustrate the working mechanism of our algorithm. In the following, the left figure always shows the domains assigned by SCOP and the right figure shows the domains predicted by DomainICA. In Fig. 5 we show the two-domain protein 1H72 (homoserine kinase). DomainICA does not cut homoserine kinase, because the helices and especially the loops from the second domain make multiple contacts with the secondary structure elements in the first domain. The loop contacts are ignored by most other algorithms, and may cause undersplitting by our algorithm; on the other hand, our approach draws attention to interactions involving loops, and indeed inclusion of loops as vertices in the protein graph improves the overall performance of DomainICA (data not shown). The next two-domain protein, shown in Fig. 6, is 1CRK (mitochondrial creatine kinase). In this case the second domain (blue) is split, resulting in a three-domain protein predicted by DomainICA (right figure). Interestingly, the split separates a β-sheet. This is due to the fact that, first, all three types of secondary structure elements are treated equally; second, the third domain (right figure) has very few contacts in addition to the contact provided by the β-sheet; and third, we do not count multiple contacts between two secondary structure elements. In Fig. 7 we show the three-domain protein 1HS6 (leukotriene A(4) hydrolase).
Again, the first domain (red, left figure) is split between two strands of a β-sheet because of the lack of additional contacts between other secondary structure elements of these two domains. Large parts of the second domain (blue, left figure) and the third domain (green, left figure) are predicted as one domain (green, right figure) because there exist several contacts between the beginning of domain two (blue, left figure) and the end of domain three (green, left figure), making a split less favorable than separating the first part from domain two (blue, left figure). The last protein we show, in Fig. 8, is the three-domain protein 1GSO (glycinamide ribonucleotide synthetase). Both SCOP and DomainICA dissect this protein into three domains, but the second domain is significantly smaller in our prediction than in SCOP. It is evident that the domain in question consists of two subdomains, one of which makes many contacts with the third domain as defined by SCOP, whereas the other is spatially more isolated. A two-domain protein 1H72 (d.14.1.5, d.58.26.1). Left: Domain assignment according to SCOP. 5:167 (red), 168:300 (blue). Right: Domain assignment from DomainICA. 5–300 (blue). A two-domain protein 1CRK (a.83.1.1, d.128.1.2). Left: Domain assignment according to SCOP. 1:98 (red), 99:380 (blue). Right: Domain assignment from DomainICA. 1–80 (blue), 81–263 (red), 264–380 A three-domain protein 1HS6 (a.118.1.7, b.98.1.1, d.92.1.13). Left: Domain assignment according to SCOP. 1:208 (red), 209:460 (blue), 461–610 (green). Right: Domain assignment from DomainICA. 1–154 (red), 155–290 (blue), 291–610 ... A three-domain protein 1GSO (b.84.2.1, c.30.1.1, d.142.1.2). Left: Domain assignment according to SCOP. 2–103 (red), 104–327 (blue), 328–426 (green). Right: Domain assignment from DomainICA. 2–115 (red), 116–192 ... In this work, we presented a graph-theoretical approach for partitioning proteins into structural domains based on two main new ideas.
First, we represent proteins as unweighted, undirected and unlabeled graphs whose vertices correspond to the elements of secondary structure, including loops. Second, we introduced the mutual maximization of cycle distributions found in the partitioned graph as an approximate measure of domain compactness. Several other algorithms have been suggested for the problem of identifying the domains of a protein automatically [15,19,20]. The main differences between our algorithm and the most successful other algorithm, DomainParser [19,20], are that the latter uses a graph-theoretical core to model proteins at the level of individual residues, and that it cuts proteins on the basis of several heuristic rules that draw from knowledge of protein physics and geometry not captured by their representation as a protein graph. In contrast, DomainICA employs only information present in the graph-theoretical representation of the proteins. Another difference between DomainICA and other approaches is that the former does not employ any weighting scheme, whereas other approaches use weighted [15] or even weighted and directed [19,20] graphs. In this work, we did not strive first of all to provide numerical results about the identification of structural domains with better accuracy than DomainParser or other recently available algorithms. If this were our main goal, the most sensible approach might be to gradually refine the heuristics of these already useful methods, gaining in accuracy a few percentage points at a time. We set a radically different goal, namely to cast parsing protein structures into domains as a problem of optimizing a partition function that emerges from an extremely simple topological representation of a protein and requires knowledge of only a very small number of parameters.
If this approach failed completely, it would not be of any interest – if, however, it approached the best available approaches in prediction accuracy, this would beg the question of which properties of protein structures are important in domain recognition and, by extension, whether the simple model is telling us anything important about protein structure and function. The main conclusion from our study is, indeed, that despite the extreme paucity of information presented by undirected, unweighted, and unlabeled protein graphs, the performance of DomainICA is closely comparable to DomainParser in the case of one-domain and two-domain proteins, which account for more than 90% of all proteins in the ASTRAL database and for a substantial fraction of complete proteomes in many organisms, especially in prokaryotes. It appears that more detailed information about protein structure, such as analysis of interdomain interactions at the residue level or considerations of protein physics and geometry, does not add much structural signal to our coarse-grained representation. A corollary of this may be that proteins might be more properly treated as topological rather than geometrical objects, as has recently been speculated (see [26-29] for a discussion of this and related issues). The success of our algorithm also raises intriguing questions about the physical constraints on protein domains, and may point to the contacts between secondary structure elements as the main level at which protein domains attain their evolutionarily optimal structural design. We also feel that further analysis of protein graphs may offer new avenues into the problem of structural, if not evolutionary, classification of proteins and protein domains.

Representation of a protein

We use the secondary structure elements of a protein – α-helices, β-strands and loops – as a coarse-grained level of description of protein tertiary structure. Each secondary structure element is a node in a protein graph.
The connectivity of the graph is given by the following algorithm, which utilizes the structural information about a protein from Protein Data Bank files [30].

Algorithm 1 Representation of a protein as a graph:

1. Determine the secondary structure elements of a protein and enumerate them in consecutive order. We differentiate between three types of secondary structure elements: helix, strand and loop.

2. Each secondary structure element represents one node in the protein graph.

3. Two nodes m and n in the protein graph are connected by an edge e(m, n) = 1 if there exist two Cα-atoms, one from secondary structure element m and another from secondary structure element n, whose spatial distance is below a threshold Θ:

$$e(m,n)=\begin{cases}1 & : \;|C^{\alpha}_{m}-C^{\alpha}_{n}|\le\Theta\\[2pt]0 & : \;|C^{\alpha}_{m}-C^{\alpha}_{n}|>\Theta\end{cases} \qquad (2)$$

Additionally, we connect consecutive secondary structure elements along the backbone, e(m, m − 1) = e(m, m + 1) = 1 for all m ∈ {2, ..., N − 1}, and e(1, 2) = e(N, N − 1) = 1. All other entries in the adjacency matrix of the protein graph remain zero. There are several ways to obtain the secondary structure elements of proteins. We use the assignment provided in a pdb file [30]. Other programs can also be used to identify the secondary structure elements, e.g., DSSP [31] or STRIDE [32]; this does not change the rest of the method, though it might change the layout of some protein graphs. The protein graph is an undirected, unweighted and unlabeled graph. We do not preserve labels of the nodes representing a helix, a strand or a loop, and we do not consider weights of edges resulting from multiple pairs of Cα-atoms whose reciprocal spatial distance is below the threshold Θ. All connections determined by Eq. 2 are treated in the same way, regardless of the physical nature of the interactions (e.g., ionic, van der Waals, or other).
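Algorithm 1 translates almost directly into code. The following is a minimal Python sketch, not the authors' implementation; the function name, the input layout, and the default value of Θ are assumptions for illustration:

```python
import math

def protein_graph(ca_coords, elements, theta=8.0):
    """Sketch of Algorithm 1: build the protein-graph adjacency matrix.

    ca_coords : list of (x, y, z) C-alpha coordinates.
    elements  : list of (start, end) index ranges into ca_coords, one per
                secondary structure element (helix, strand or loop), in
                backbone order.
    theta     : contact threshold in Angstroms (illustrative default).
    """
    n = len(elements)
    adj = [[0] * n for _ in range(n)]
    for m in range(n):
        for k in range(m + 1, n):
            # Eq. 2: an edge if any C-alpha pair is closer than theta.
            close = any(
                math.dist(ca_coords[a], ca_coords[b]) <= theta
                for a in range(*elements[m])
                for b in range(*elements[k])
            )
            if close:
                adj[m][k] = adj[k][m] = 1
    # Backbone edges between consecutive elements.
    for m in range(n - 1):
        adj[m][m + 1] = adj[m + 1][m] = 1
    return adj
```

With Θ near typical Cα contact distances, fold-back contacts between non-consecutive elements appear as edges off the backbone band of the adjacency matrix.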
We call an unweighted, undirected and unlabeled graph obtained by algorithm 1 a protein graph and denote it by G_III, reflecting the fact that we consider three types of secondary structure in our approach. Indeed, loops are treated as distinct secondary structure elements and are represented as nodes, not as edges. We found that this representation improves the accuracy of the algorithm, presumably because interactions between the loops and other elements contribute to protein domain formation.

Partitioning of a protein graph

Structural domains of a protein are thought to be compact in some way [33], and several suggestions have been made to characterize the compactness of a domain more precisely. For example, there are hypotheses that a domain should stay folded if the protein is cut into its domains, or that the number of contacts between domains should be smaller than the number of intra-domain contacts [6,34]. Examining protein structures indicates that the notion of domain compactness is much less rigorous than, e.g., compactness in inorganic crystal structures, where more formalized definitions are possible. A common property shared by well-folded domains is that the backbone changes direction many times and brings secondary structure elements in contact with one another, often "folding back", as can be seen most directly in the case of parallel and anti-parallel β-sheets. One well-defined entity which distinguishes between a back-folded and a non-back-folded backbone is a cycle, i.e., a closed path that returns to its starting point in a graph. More generally, we claim it is possible to bipartition a protein graph under the hypothesis that the best partition maximizes the mutual overlap of the cycle distributions found in the two subgraphs. In the following we give the mathematical details of our algorithm, which we call DomainICA (domain identification and cutting algorithm). Algorithm 2 (DomainICA) Partitioning of a protein graph G_III with N nodes. 1.
Calculate the cycle set $\mathcal{CS}$ consisting of all cycles found in the graph G_III up to a length L.

2. Determine the cycle histograms CH_L(i) and CH_R(i) for i ∈ {1, ..., N − 1} by dividing the cycle set $\mathcal{CS}$ into three non-intersecting sets $\mathcal{CS}_L$, $\mathcal{CS}_R$ and $\mathcal{CS}_{LR}$ defined by

$$\mathcal{CS}_L(i)=\{c\in\mathcal{CS} \mid c_j\le i,\ \forall j\}$$
$$\mathcal{CS}_R(i)=\{c\in\mathcal{CS} \mid c_j> i,\ \forall j\}$$
$$\mathcal{CS}_{LR}(i)=\mathcal{CS}\setminus\{\mathcal{CS}_R(i)\cup\mathcal{CS}_L(i)\}$$

Here a cycle c is represented by a vector whose components c_j are the nodes in the cycle. We call i the boundary index of part L. The cycle histograms are now defined for the i-th index by

$$CH_L(i,j)=|\{c\in\mathcal{CS}_L(i) : |c|=j\}|$$
$$CH_R(i,j)=|\{c\in\mathcal{CS}_R(i) : |c|=j\}|$$

3.
Normalize the cycle histograms along the cycle length index:

$$\overline{CH}_L(i,j)=\frac{CH_L(i,j)}{\sum_{i'}CH_L(i',j)}$$
$$\overline{CH}_R(i,j)=\frac{CH_R(i,j)}{\sum_{i'}CH_R(i',j)}$$

4. Determine an objective function E_obj(i) for i ∈ {1, ..., N − 1} by:

$$E_{obj}(i)=\sum_j^L \overline{CH}_L(i,j)\,\overline{CH}_R(i,j)$$

5. Determine the maximum of the objective function:

$$i_c=\arg\max_{i'}E_{obj}(i')$$

6. Accept the suggested cut position if the decision function D_α is true:

$$D_\alpha\big(i_c \mid N_f, E_{obj}(i_c), \bar{E}^r_{obj}(i_c)\big)=1$$

In Fig. 9 the idea of algorithm 2 is shown. The backbone, consisting of N secondary structure elements, is shown as a black line. One of the N − 1 possible configurations for the boundary index i is indicated. There are only N − 1 configurations, because each part has to contain at least one node. The boundary index i uniquely determines two disjoint vertex sets V_L(i) = {1, 2, ..., i} and V_R(i) = {i + 1, i + 2, ..., N}, separating the nodes on the "backbone" into an L and an R part. The boundary index i can be seen as the position of a cut sliding along the backbone connections. These vertex sets, together with the edges given by Eq. 2, define subgraphs G_L and G_R of the original graph G.
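Steps 2–5 of Algorithm 2 can be sketched in a few lines once a cycle set is available. The following Python sketch is our own illustration, not the DomainICA code; it assumes cycles are supplied as tuples of 1-based node indices and that cycle enumeration up to length L has been done elsewhere:

```python
def cycle_histograms(cycles, n_nodes, max_len):
    """CH_L(i, j) and CH_R(i, j): counts of cycles of length j lying
    entirely left (all nodes <= i) or right (all nodes > i) of cut i.
    Nodes are numbered 1..n_nodes; i runs over 1..n_nodes-1."""
    CHL = [[0] * (max_len + 1) for _ in range(n_nodes)]
    CHR = [[0] * (max_len + 1) for _ in range(n_nodes)]
    for c in cycles:
        j = len(c)
        for i in range(1, n_nodes):
            if max(c) <= i:
                CHL[i][j] += 1
            elif min(c) > i:
                CHR[i][j] += 1
            # cycles crossing the cut fall in CS_LR and are not counted
    return CHL, CHR

def objective(cycles, n_nodes, max_len):
    """E_obj(i) = sum_j CHbar_L(i, j) * CHbar_R(i, j), each histogram
    column normalized over boundary indices i' as in step 3."""
    CHL, CHR = cycle_histograms(cycles, n_nodes, max_len)

    def normalize(CH):
        out = [[0.0] * (max_len + 1) for _ in range(n_nodes)]
        for j in range(max_len + 1):
            s = sum(CH[i][j] for i in range(n_nodes))
            if s:
                for i in range(n_nodes):
                    out[i][j] = CH[i][j] / s
        return out

    NL, NR = normalize(CHL), normalize(CHR)
    return [sum(NL[i][j] * NR[i][j] for j in range(max_len + 1))
            for i in range(n_nodes)]   # entry i is E_obj(i); entry 0 unused
```

On a toy graph of two disjoint triangles, (1, 2, 3) and (4, 5, 6), the objective is maximal at the boundary index i = 3, i.e., the cut separating the triangles.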
The backbone connections are shown in black, connections within part L in blue, connections within part R in red, and connections between the two parts in green. A separation of the backbone at position i results in the deletion of the backbone connection from node i to i + 1. Additionally, all green connections are deleted. This results in two separate graphs, G_L and G_R. Note that the backbone introduces a constraint on the bipartitioning of the graph – only the edges corresponding to the backbone are considered for a cut position. From these graphs, the histograms of the cycle distributions are given, e.g., for the L part and boundary index i, as the number of cycles of length j from $\mathcal{CS}$ which contain only vertices from V_L(i). This is denoted by CH_L(i, j). Our objective function E_obj determines the dot product between the normalized cycle histograms of the L and R parts and thereby measures their mutual overlap. We use the cycle histograms normalized along the cycle length index because the absolute number of cycles is of less interest than the relative number compared to other potential cut positions. The normalization transforms the absolute values into relative weights between different cut positions. In the next subsection, we discuss the decision function from Eq. 13.

A protein graph is split in two parts for a given boundary index i by deleting the backbone connection from node i to i + 1 and the connections between the two resulting parts (shown in green).

Decision function

The crucial step in our procedure is the decision to accept or reject the suggested cut position i_c.
We base this decision on the calculation of an objective function for a randomized protein graph. The cut position is accepted if the value of the objective function of the randomized protein graph is significantly lower than that of the real protein graph. The randomized protein graph is produced by randomly altering β_r N entries of the graph adjacency matrix, excluding diagonal and first off-diagonal entries. This ensures that the resulting graph retains its backbone connections and that secondary structure elements do not acquire meaningless self-connections. The predicted cut position can be viewed as statistically significant if it is stable against the averaged randomized objective function

$$\bar{E}^r_{obj}(i_c)=\frac{1}{N_r}\sum_{i=1}^{N_r}E^r_{obj,i}(i_c)$$

of an ensemble of N_r randomized protein graphs at the suggested cut position i_c. Now we can define the decision function for accepting the tentative cut position.

Definition 1 We call D_α : I → {0, 1} the decision function of a cut position i_c ∈ I and define it by

$$D_\alpha=\begin{cases}1 & : \;(N_f>\alpha_1)\ \vee\ \Big(N_f>\alpha_3 \,\wedge\, E_{obj}(i_c)\ne 0 \,\wedge\, \bar{E}^r_{obj}(i_c)\ne 0 \,\wedge\, E_{obj}(i_c)>\bar{E}^r_{obj}(i_c) \,\wedge\, -\log\tfrac{\bar{E}^r_{obj}(i_c)}{E_{obj}(i_c)}\ge\alpha_2\Big)\\[4pt]0 & : \;\text{else}\end{cases}$$

Here

$$N_f=\frac{N_L+N_R}{N_{tot}}$$

is the fraction of cycles lying entirely in either the L or the R part, N_L (N_R) is the number of cycles in the L (R) part, and N_tot is the total number of cycles found in the graph. The three values α_i are free parameters of the decision function. The decision function given in Definition 1 was found empirically. The logical decision function D_α in Eq. 15, which we employ as a binary classifier, consists of two parts. The first part evaluates as true if N_f is larger than a threshold α_1.
This condition can be seen as a graph-theoretical analogue of the idea of Rossmann et al. [6], who speculated that a domain should have more intra-domain than inter-domain connections (they, however, counted contacts between all residues, not between secondary structure elements as we do). The second argument evaluates the breakdown of the randomized objective function. Application of our algorithm to proteins consisting of one or more domains indicates that the set of parameters {α_1, α_2, α_3} has to be optimized separately to obtain better performance. For this reason, we introduce for each cut a decision function D_α with different parameters. This is possible because the algorithm can keep track of the number of cuts, and a specific form of the decision function can be applied automatically and iteratively until the procedure ends.

Authors' contributions

AM and FES conceived the study, FES developed and implemented the algorithm and analyzed the data, AM and FES wrote the manuscript. We would like to thank Mike Coleman, Earl Glynn, Galina Glazko and Daniel Thomasset for fruitful discussions.

• Chandonia JM, Brenner SE. The Impact of Structural Genomics: Expectations and Outcomes. Science. 2006;311:347–351. doi: 10.1126/science.1121018. [PubMed] [Cross Ref]
• Phillips DC. The three-dimensional structure of an enzyme molecule. Sci Am. 1966;215:78–90. [PubMed]
• Andreeva A, Howorth D, Brenner SE, Hubbard TJ, Chothia C, Murzin AG. SCOP database in 2004: refinements integrate structure and sequence family data. Nucleic Acids Res. 2004;32:D226–229. doi: 10.1093/nar/gkh039. [PMC free article] [PubMed] [Cross Ref]
• Mulder NJ, et al. InterPro, progress and status in 2005. Nucleic Acids Res. 2005;33:D201–205. doi: 10.1093/nar/gki106. [PMC free article] [PubMed] [Cross Ref]
• Wetlaufer D. Nucleation, rapid folding, and globular intrachain regions in proteins. Proc Natl Acad Sci. 1973;70:697–701. doi: 10.1073/pnas.70.3.697.
[PMC free article] [PubMed] [Cross Ref]
• Rossmann MG, Liljas A. Recognition of structural domains in globular proteins. J Mol Biol. 1974;85:177–181. doi: 10.1016/0022-2836(74)90136-3. [PubMed] [Cross Ref]
• Doolittle RF. The multiplicity of domains in proteins. Annu Rev Biochem. 1995;64:287–314. doi: 10.1146/annurev.bi.64.070195.001443. [PubMed] [Cross Ref]
• Veretnik S, Bourne PE, Alexandrov NN, Shindyalov IN. Toward consistent assignment of structural domains in proteins. J Mol Biol. 2004;339:647–678. doi: 10.1016/j.jmb.2004.03.053. [PubMed] [Cross Ref]
• Enright AJ, Van Dongen S, Ouzounis CA. An efficient algorithm for large-scale detection of protein families. Nucleic Acids Res. 2002;30:1575–84. doi: 10.1093/nar/30.7.1575. [PMC free article] [PubMed] [Cross Ref]
• George RA, Heringa J. SnapDRAGON: a method to delineate protein structural domains from sequence data. J Mol Biol. 2002;316:839–51. doi: 10.1006/jmbi.2001.5387. [PubMed] [Cross Ref]
• Bae K, Mallick BK, Elsik CG. Prediction of protein interdomain linker regions by a hidden Markov model. Bioinformatics. 2005;21:2264–70. doi: 10.1093/bioinformatics/bti363. [PubMed] [Cross Ref]
• Murzin AG, Brenner SE, Hubbard T, Chothia C. SCOP: a structural classification of proteins database for the investigation of sequences and structures. J Mol Biol. 1995;247:536–540. doi: 10.1006/jmbi.1995.0159. [PubMed] [Cross Ref]
• Orengo C, Michie A, Jones S, Jones D, Swindells M, Thornton J. CATH – A Hierarchic Classification of Protein Domain Structures. Structure. 1997;5:1093–1108. doi: 10.1016/S0969-2126(97)00260-8. [PubMed] [Cross Ref]
• Holland TA, Veretnik S, Shindyalov IN, Bourne PE. Partitioning protein structures into domains: why is it so difficult? J Mol Biol. 2006;361:562–590. doi: 10.1016/j.jmb.2006.05.060. [PubMed] [Cross Ref]
• Taylor WR. Protein structural domain identification. Protein Eng. 1999;12:203–216. doi: 10.1093/protein/12.3.203. [PubMed] [Cross Ref]
• Potts RB. Some generalized order-disorder transformations. Proceedings of the Cambridge Philosophical Society.
1953;48:106–109.
• Blatt M, Wiseman S, Domany E. Superparamagnetic Clustering of Data. Phys Rev Lett. 1996;76:3251. doi: 10.1103/PhysRevLett.76.3251. [PubMed] [Cross Ref]
• Blatt M, Wiseman S, Domany E. Data Clustering using a model granular magnet. Neural Computation. 1997;9:1805–1842. doi: 10.1162/neco.1997.9.8.1805. [Cross Ref]
• Xu Y, Xu D, Gabow HN. Protein domain decomposition using a graph-theoretic approach. Bioinformatics. 2000;16:1091–1104. doi: 10.1093/bioinformatics/16.12.1091. [PubMed] [Cross Ref]
• Guo JT, Xu D, Kim D, Xu Y. Improving the performance of DomainParser for structural domain partition using neural network. Nucl Acids Res. 2003;31:944–952. doi: 10.1093/nar/gkg189. [PMC free article] [PubMed] [Cross Ref]
• Ford LR, Fulkerson DR. Flows in Networks. Princeton University Press; 1962.
• Chandonia JM, Hon G, Walker NS, Lo Conte L, Koehl P, Levitt M, Brenner SE. The ASTRAL compendium in 2004. Nucleic Acids Research. 2004;32:189–192. doi: 10.1093/nar/gkh034. [PMC free article] [PubMed] [Cross Ref]
• Schulz GE, Schirmer RH. Principles of Protein Structure. Springer; 1979.
• Horvath T, Gärtner T, Wrobel S. Cyclic Pattern Kernels for Predictive Graph Mining. Proceedings of the tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2004.
• Jones S, Steward M, Michie A, Swindells MB, Orengo C, Thornton JM. Domain assignment for protein structures using a consensus approach: characterization and analysis. Protein Sci. 1998;7:233–242. [PMC free article] [PubMed]
• Chen SJ, Dill KA. Symmetries in proteins: A knot theory approach. J Chem Phys. 1996;104:5964–5973. doi: 10.1063/1.471328. [Cross Ref]
• Erdmann MA. Protein Similarity from Knot Theory: Geometric Convolution and Line Weavings. Journal of Computational Biology. 2005;12:609–637. doi: 10.1089/cmb.2005.12.609. [PubMed] [Cross Ref]
• Rogen P, Fain B. Automatic classification of protein structure by using Gauss integrals. Proc Natl Acad Sci USA. 2003;100:119–124.
doi: 10.1073/pnas.2636460100. [PMC free article] [PubMed] [Cross Ref]
• Emmert-Streib F. Algorithmic Computation of Knot Polynomials of Secondary Structure Elements of Proteins. Journal of Computational Biology. 2006;13:1503–1512. doi: 10.1089/cmb.2006.13.1503. [PubMed] [Cross Ref]
• Berman HM, Westbrook J, Feng Z, Gilliland G, Bhat TN, Weissig H, Shindyalov IN, Bourne PE. The Protein Data Bank. Nucleic Acids Research. 2000;28:235–242. doi: 10.1093/nar/28.1.235. [PMC free article] [PubMed] [Cross Ref]
• Kabsch W, Sander C. Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features. Biopolymers. 1983;22:2577–2637. doi: 10.1002/bip.360221211. [PubMed] [Cross Ref]
• Frishman D, Argos P. Knowledge-based protein secondary structure assignment. Proteins. 1995;23:566–579. doi: 10.1002/prot.340230412. [PubMed] [Cross Ref]
• Rose GD. Hierarchic organization of domains in globular proteins. J Mol Biol. 1979;134:447–470. doi: 10.1016/0022-2836(79)90363-2. [PubMed] [Cross Ref]
• Siddiqui AS, Barton GJ. Continuous and discontinuous domains: An algorithm for the automatic generation of reliable protein domain definitions. Protein Science. 1995;4:872–884. [PMC free article]

Articles from BMC Bioinformatics are provided here courtesy of BioMed Central
algebraic geometry October 18th 2011, 09:16 AM #1 algebraic geometry Let $k$ be a field of characteristic 2. Then find the ideal of $X = V(t_1^2 + t_2^2 + t_3^2) \subseteq \mathbb{A}_k^3$. I'm not sure how to solve this. I mean, since $k$ is of characteristic two, I can see that $(t_1 + t_2 + t_3)^2 = t_1^2 + t_2^2+t_3^2$, so that $I_X = \sqrt{\langle(t_1 + t_2 + t_3)^2\rangle}$. Is it just what we'd think it would be, i.e. $I_X = \langle t_1 + t_2 + t_3 \rangle$? If so, why is it true? But how do I find this radical ideal? And is this the best way to work out what the ideal is? Thanks for any help Re: algebraic geometry There is an 'easy' way to work out the radical of an ideal in case it is principal (which in your case it is): Let $J=\langle f \rangle$ where $f=\prod_{i=1}^n f_i^{k_i}$ is its factorization into irreducibles; then $\sqrt{J} = \langle \prod_{i=1}^n f_i \rangle$. To prove this, let $g\in \sqrt{J}$; then there exists $m$ such that $g^m \in J$, i.e. $g^m = pf$ for some polynomial $p$. By the uniqueness of factorization in a polynomial ring (over a field at least), the $f_i$ must appear in the factorization of $g$, and so $g\in \langle \prod_{i=1}^n f_i \rangle$. On the other hand, if $g\in \langle \prod_{i=1}^n f_i \rangle$ then $g^{\prod_{i=1}^n k_i} \in J$. Re: algebraic geometry Thanks. Is there some way to ensure that the polynomial I have is definitely irreducible? Re: algebraic geometry Let $k$ be a field of characteristic 2. Then find the ideal of $X = V(t_1^2 + t_2^2 + t_3^2) \subseteq \mathbb{A}_k^3$. I'm not sure how to solve this. I mean, since $k$ is of characteristic two, I can see that $(t_1 + t_2 + t_3)^2 = t_1^2 + t_2^2+t_3^2$, so that $I_X = \sqrt{\langle(t_1 + t_2 + t_3)^2\rangle}$. Is it just what we'd think it would be, i.e. $I_X = \langle t_1 + t_2 + t_3 \rangle$? If so, why is it true? But how do I find this radical ideal? And is this the best way to work out what the ideal is?
Thanks for any help $I(V(J))=\sqrt{J}$ is not always true if your base field is not algebraically closed. is $k$ algebraically closed? Re: algebraic geometry Apologies, the underlying assumption we use is that $k$ is algebraically closed - forgot to mention this Re: algebraic geometry well then, the degree of each $t_i$ in $f = t_1+t_2+t_3$ is one and so $f$ has no non-trivial factorization, i.e. $f$ is irreducible. thus $(f)$ is prime and so $\sqrt{(f)}=(f)$. also $\sqrt{J^m} = \sqrt{J}$ for any ideal $J$ and any integer $m \geq 1$. thus $I(V((f^2)))=\sqrt{(f^2)}=\sqrt{(f)^2}=\sqrt{(f)}=(f)$. Re: algebraic geometry Let's say that we have a field $k$ which is not necessarily algebraically closed. Is it still true that $V(I) = V(J) \iff \sqrt{I} = \sqrt{J}$? For example, if this is true then I think that $\sqrt{\langle x-1, x^2 + y^2 -1 \rangle} = \langle x-1, y \rangle$ over any field $k$, but I am not too sure. Re: algebraic geometry Let's say that we have a field $k$ which is not necessarily algebraically closed. Is it still true that $V(I) = V(J) \iff \sqrt{I} = \sqrt{J}$? For example, if this is true then I think that $\sqrt{\langle x-1, x^2 + y^2 -1 \rangle} = \langle x-1, y \rangle$ over any field $k$, but I am not too sure. the answer to your first question is no. for example consider $I =(x^2+y^2)$ and $J=(x^4+y^4)$ over $\mathbb{A}^2(\mathbb{Q})$. the answer to your second question is yes, although you should have mentioned whether you were considering $\mathbb{A}^2(k)$ or $\mathbb{A}^3(k)$, etc. you always need to give two important pieces of information: your ground field and the dimension. Re: algebraic geometry Ok, thanks. I should have written that I was working in affine 2-space. Unfortunately I worked out that result by assuming the result in the first question, so how would I calculate it properly?
Do you have any recommendations of books I can read on this subject, because I am having real difficulty trying to work out radical ideals. Re: algebraic geometry well, because $x^2-1 \in (x-1)$, we have $(x-1,x^2+y^2-1)=(x-1,y^2)$. now clearly $(x-1,y) \subseteq \sqrt{(x-1,y^2)}$ and since $(x-1,y)$ is a maximal ideal of $k[x,y]$, we must have $(x-1,y)=\sqrt{(x-1,y^2)}$. it is a good exercise to prove that the same result, i.e. $\sqrt{(x-1,y^2)}=(x-1,y)$, holds in affine $n$-space for any $n \geq 2.$ note that the ideal $(x_1-a_1, \ldots , x_n-a_n)$, for any field $k$ and any $a_i \in k$, is always a maximal ideal of $k[x_1, \ldots , x_n]$ because the map $\varphi: k[x_1, \ldots , x_n] \longrightarrow k$ defined by $\varphi(f(x_1, \ldots, x_n))=f(a_1, \ldots , a_n)$ is an onto ring homomorphism and $\ker \varphi = (x_1-a_1, \ldots , x_n - a_n).$ if $k$ is algebraically closed, then the converse is also true, by the Nullstellensatz, i.e. every maximal ideal of $k[x_1,x_2, \ldots , x_n]$ is of the form $(x_1-a_1, \ldots , x_n-a_n)$ for some $a_i \in k.$ there are algorithms for finding the radical of an ideal in polynomial rings but they are not easy and i don't think you'll need them. just simplify your ideal first and use the properties of radicals and remember what i just said about maximal ideals. Re: algebraic geometry In this case, as in many, you do not need to actually compute the radical of $\mathfrak{p}$. We have $I(X)=I(V(\mathfrak{p}^2))=I(V(\mathfrak{p}))$ with $\mathfrak{p}=(x+y+z)$, since $V(\mathfrak{p}^2)=V(\mathfrak{p})$. But $x+y+z$ irreducible $\Rightarrow \mathfrak{p}$ prime $\Rightarrow\, I(V(\mathfrak{p}))=\mathfrak{p}$.
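The characteristic-2 identity that drives the whole thread can be verified mechanically. A minimal self-contained sketch (the representation and the helper name are ours, not from the thread): a polynomial over GF(2) is stored as the set of exponent tuples of its monomials, so cross terms with even coefficients vanish automatically.

```python
from itertools import product
from collections import Counter

def mul_gf2(p, q):
    """Multiply two multivariate polynomials over GF(2).  A polynomial is
    a set of monomials; a monomial is a tuple of exponents."""
    acc = Counter()
    for a, b in product(p, q):
        acc[tuple(x + y for x, y in zip(a, b))] += 1
    # coefficients live in GF(2): keep monomials with odd multiplicity
    return {m for m, c in acc.items() if c % 2}

# f = t1 + t2 + t3 as a set of exponent tuples (degree in t1, t2, t3):
f = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
# Freshman's dream in characteristic 2: (t1+t2+t3)^2 = t1^2 + t2^2 + t3^2
assert mul_gf2(f, f) == {(2, 0, 0), (0, 2, 0), (0, 0, 2)}
```

Every cross term 2·t_i·t_j has coefficient 2 ≡ 0 (mod 2), which is exactly why the Frobenius map is a ring homomorphism in characteristic p.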
Number of results: 15 Machanics? taunt? Wow. Now to greater errors. What exactly is a mass of 3m or 5m? And in the answers, since when is force measured in milligrams? Isn't that a mass unit? Something is major wrong in your physics class, this usage is not acceptable: Remember this was the problem... Friday, March 4, 2011 at 8:44am by bobpursley Applied Machanics (Physics) What do you mean by: 1/12ml^2 k^2=radius of gyration Sunday, May 3, 2009 at 7:20pm by Henri Applied Machanics (Physics) I thought the moment of inertia would be 1/12 ml^2 Sunday, May 3, 2009 at 7:20pm by bobpursley Applied Machanics (Physics) I am using l as the length of the rod. (.3m) Recheck your formula. Sunday, May 3, 2009 at 7:20pm by bobpursley AS machanics A particle P of mass 2kg is attached to one end of a light rod of length 0.5m which is free to rotate in a verticle Tuesday, March 12, 2013 at 9:59am by Lynn solid machanics a material has a youngs modulus of 1.25*105N/mm2 and a poissons ratio of 0.25.calculate the modulus of rigidity and the bulks modulus. Wednesday, August 8, 2012 at 9:10am by sanjeev can u tell me what formula i should be using to answer this question? If a stone falls past a bird on a ledge moving at 4 m/s, how fast will the stone be movingjust before it hits the ground below 9 seconds later? A. 92.2 m/s B. 48.2 m/s C. 352.8 m/s D. 36 m/s Sunday, May 2, 2010 at 8:44pm by me can u tell me what formula i should be using to answer this question? If a stone falls past a bird on a ledge moving at 4 m/s, how fast will the stone be movingjust before it hits the ground below 9 seconds later? A. 92.2 m/s B. 48.2 m/s C. 352.8 m/s D. 
36 m/s Monday, May 3, 2010 at 6:22pm by me solid machanics Given: E=1.25*10^5 N/mm² ν=0.25 G=E/[2(1+ν)] K=E/[3(1-2ν)] Wednesday, August 8, 2012 at 9:10am by MathMate particles of mass 3m and 5 m hang one at each end of a light inextensible string which passes over a pulley.The system is released from rest with the hanging parts taunt and vertical.During the subsequent motion the resultant force exerted by the string on the pulley is of ... Friday, March 4, 2011 at 8:44am by rob A space aircraft is launched straight up.The aircraft motor provides a constant acceleration for 10 seconds,then the motor stops.The aircraft's altitude 15 seconds after launch is 2 km.ignore air friction .what is the acceleration,maximum speed reached in km/h and the speed(in... Tuesday, April 2, 2013 at 6:21am by shone vf - vi = 5*10^6 - 2*10^6 = 3*10^6 where vf is the final speed and vi is the initial speed. Assuming a constant deceleration, the electron passes through 2.1 multiply 4cm = 8.4 cm of paper. 8.4 = 1/ 2*a*t^2 deltav = 3*10^6 = a*t where a is the acceleration, t is the time. You ... Sunday, January 13, 2013 at 6:52pm by Jennifer An electron moving at a speed of 5 multiply by 10 raise to power six m/s was shot through a sheet of paper which is 2.1 multiply by 4cm thick.the electron emerges from the paper with a speed of 2 multiply by 10 raise to power six m/s,find the time taken by the electron to pass... Sunday, January 13, 2013 at 6:52pm by Anonymous Applied Machanics (Physics) A steel rod of 500g and 30 cm long spins at 300 RPM,the rod pivots around the center. a) Find angular momentum Moment of Inertia(I)=mk^2 I=0.5kgX(0.075m)^2 I=0.0028125kg*m^2 To find angular momentum: w=300rpmX2pi/60sec w=31.42 rads/s Ang. Momentum=Iw A.M.=0.0028125X31.42 A.M.=... Sunday, May 3, 2009 at 7:20pm by Henri A vertical distance covered for t₁=10 s of accelerated motion is h₁=at₁²/2. The speed at this height is v₁=at₁. 
The distance covered at decelerated motion during t₂=15- t₁=15-10=5 s is h₂=v₁t₂-gt₂²/ 2. ... Tuesday, April 2, 2013 at 6:21am by Elena
Line Integral Convolution

Line integral convolution is a technique, or family of techniques, for representing two-dimensional vector fields. The idea is to produce a texture which is highly correlated along the direction of the vector field but uncorrelated across it. This is done by generating a noise texture and then, for each pixel of the image, "flowing" forward and backward along the vector field. The points along this path are looked up in the noise texture and averaged to give the LIC texture at the starting point. The basic technique ignores both the magnitude of the vector field and its sign. With a minor modification, the same technique can be used to produce an animation of "flow" along the vector field. Attached to this page is cython code implementing a simple line integral convolution operator, plus some demonstration python code. The demo code can either make more or less the image above - a simple array of vortices; note how an overall rotation appears in the sum of the individual vortex vector fields, just as a superfluid's "bulk rotation" is actually a vortex array - or it can make a video of the same vector field. The video is a little awkward to work with, since all the standard video compression techniques butcher it horribly, but it does work well.
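The attached cython code is not reproduced here, but the basic loop described above can be sketched directly in NumPy. This naive version (our illustration, far slower than the cython operator) traces a fixed number of unit steps forward and backward from each pixel, normalizing the field so that magnitude is ignored, and averages the noise samples:

```python
import numpy as np

def lic(vx, vy, noise, length=10):
    """Naive line integral convolution sketch.  vx, vy and noise are
    equally-shaped 2-D arrays; the grid wraps around at the edges."""
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):            # flow forward, then back
                fx, fy = float(x), float(y)
                for _ in range(length):
                    i = int(round(fy)) % h
                    j = int(round(fx)) % w
                    total += noise[i, j]
                    count += 1
                    dx, dy = vx[i, j], vy[i, j]
                    n = np.hypot(dx, dy)
                    if n == 0:                  # stagnation point: stop
                        break
                    fx += sign * dx / n         # unit step along the field
                    fy += sign * dy / n
            out[y, x] = total / count
    return out
```

Because the step is normalized and both directions are traced, the result depends on neither the field's magnitude nor its sign, matching the description above.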
Number of results: 15

Machanics? taunt? Wow. Now to greater errors. What exactly is a mass of 3m or 5m? And in the answers, since when is force measured in milligrams? Isn't that a mass unit? Something is major wrong in your physics class, this usage is not acceptable: Remember this was the problem... Friday, March 4, 2011 at 8:44am by bobpursley

Applied Machanics (Physics) What do you mean by: 1/12ml^2 k^2=radius of gyration Sunday, May 3, 2009 at 7:20pm by Henri

Applied Machanics (Physics) I thought the moment of inertia would be 1/12 ml^2 Sunday, May 3, 2009 at 7:20pm by bobpursley

Applied Machanics (Physics) I am using l as the length of the rod. (.3m) Recheck your formula. Sunday, May 3, 2009 at 7:20pm by bobpursley

AS machanics A particle P of mass 2kg is attached to one end of a light rod of length 0.5m which is free to rotate in a vertical Tuesday, March 12, 2013 at 9:59am by Lynn

solid machanics A material has a Young's modulus of 1.25*10^5 N/mm^2 and a Poisson's ratio of 0.25. Calculate the modulus of rigidity and the bulk modulus. Wednesday, August 8, 2012 at 9:10am by sanjeev

Can u tell me what formula i should be using to answer this question? If a stone falls past a bird on a ledge moving at 4 m/s, how fast will the stone be moving just before it hits the ground below 9 seconds later? A. 92.2 m/s B. 48.2 m/s C. 352.8 m/s D. 36 m/s Sunday, May 2, 2010 at 8:44pm by me

Can u tell me what formula i should be using to answer this question? If a stone falls past a bird on a ledge moving at 4 m/s, how fast will the stone be moving just before it hits the ground below 9 seconds later? A. 92.2 m/s B. 48.2 m/s C. 352.8 m/s D. 36 m/s Monday, May 3, 2010 at 6:22pm by me

solid machanics Given: E=1.25*10^5 N/mm² ν=0.25 G=E/[2(1+ν)] K=E/[3(1-2ν)] Wednesday, August 8, 2012 at 9:10am by MathMate

Particles of mass 3m and 5m hang one at each end of a light inextensible string which passes over a pulley. The system is released from rest with the hanging parts taunt and vertical. During the subsequent motion the resultant force exerted by the string on the pulley is of ... Friday, March 4, 2011 at 8:44am by rob

A space aircraft is launched straight up. The aircraft motor provides a constant acceleration for 10 seconds, then the motor stops. The aircraft's altitude 15 seconds after launch is 2 km. Ignore air friction. What is the acceleration, maximum speed reached in km/h and the speed (in... Tuesday, April 2, 2013 at 6:21am by shone

vf - vi = 5*10^6 - 2*10^6 = 3*10^6 where vf is the final speed and vi is the initial speed. Assuming a constant deceleration, the electron passes through 2.1 multiply 4cm = 8.4 cm of paper. 8.4 = 1/2*a*t^2 deltav = 3*10^6 = a*t where a is the acceleration, t is the time. You ... Sunday, January 13, 2013 at 6:52pm by Jennifer

An electron moving at a speed of 5 multiply by 10 raise to power six m/s was shot through a sheet of paper which is 2.1 multiply by 4cm thick. The electron emerges from the paper with a speed of 2 multiply by 10 raise to power six m/s. Find the time taken by the electron to pass... Sunday, January 13, 2013 at 6:52pm by Anonymous

Applied Machanics (Physics) A steel rod of 500g and 30 cm long spins at 300 RPM, the rod pivots around the center. a) Find angular momentum Moment of Inertia (I)=mk^2 I=0.5kgX(0.075m)^2 I=0.0028125kg*m^2 To find angular momentum: w=300rpmX2pi/60sec w=31.42 rads/s Ang. Momentum=Iw A.M.=0.0028125X31.42 A.M.=... Sunday, May 3, 2009 at 7:20pm by Henri

A vertical distance covered for t₁=10 s of accelerated motion is h₁=at₁²/2. The speed at this height is v₁=at₁. The distance covered at decelerated motion during t₂=15-t₁=15-10=5 s is h₂=v₁t₂-gt₂²/2. ... Tuesday, April 2, 2013 at 6:21am by Elena
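For concreteness, two of the worked answers above (MathMate's elastic moduli and Elena's two-phase rocket flight) can be checked numerically; the input numbers below are taken directly from the posts, and this is only a quick check, not part of the original answers:

```python
# Elastic moduli from E = 1.25*10^5 N/mm^2 and Poisson's ratio nu = 0.25:
E, nu = 1.25e5, 0.25
G = E / (2 * (1 + nu))      # modulus of rigidity: 5.0e4 N/mm^2
K = E / (3 * (1 - 2 * nu))  # bulk modulus: ~8.33e4 N/mm^2

# Rocket: constant acceleration a for t1 = 10 s, then coasting under
# gravity for t2 = 5 s, reaching h1 + h2 = 2000 m at t = 15 s.
# h1 = a*t1^2/2, v1 = a*t1, h2 = v1*t2 - g*t2^2/2  =>  solve for a:
g, t1, t2, h_total = 9.8, 10.0, 5.0, 2000.0
a = (h_total + g * t2**2 / 2) / (t1**2 / 2 + t1 * t2)
v_max = a * t1                    # maximum speed, reached at motor cut-off
print(f"a = {a:.2f} m/s^2")                                  # ~21.2 m/s^2
print(f"v_max = {v_max:.1f} m/s = {v_max * 3.6:.0f} km/h")   # ~764 km/h
```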
Patent US6075610 - Method and apparatus for measuring internal property distribution

1. Field of the Invention

The present invention relates to a method for measuring an internal property distribution of a measured object and an apparatus therefor. More particularly, the present invention concerns an internal property distribution measuring method and apparatus applicable to an optical CT (computed tomography) apparatus or the like for obtaining a tomographic image by moving the light incidence position and light detection position along the surface of the measured object.

2. Related Background Art

In the optical CT apparatus wherein measurement light is incident at one light incidence position on the surface of the object, the object being a scattering medium, wherein the measurement light transmitted as scattered by the object is received at a plurality of light detection positions on the surface of the object, and wherein a distribution of an internal property in the scattering medium is obtained while moving the light incidence position and light detection position along the surface of the object, the following methods are known for obtaining a distribution of the absorption coefficient inside it. Specifically, they are the methods described in "Imaging of Multiple Targets in Dense Scattering Media" (H. L. Graber, J. Chang, R. L. Barbour, SPIE vol. 2570, p. 219-p. 234), "Imaging diffusive media using time-independent and time-harmonic sources; dependence of image quality on imaging algorithms, target volume weight matrix, and view angles" (Jenghwa Chang et al., SPIE vol. 2389), and so on.
The basic imaging principle in such conventional methods is to use a relational equation between received light and a function indicating a contribution to the received light (referred to as a "spread function" for convenience), where the inside of the measured object is divided into a plurality of voxels for convenience, light incident from a certain point on the surface of the object passes through the inside of the measured object and is received at another point on the same surface, and on that occasion attention is focused on a specific internal property, such as an absorption coefficient, for each voxel. The voxel stated herein means each region (volume element) obtained by dividing the measured object into a plurality of regions.

In the above conventional methods, however, a phantom without absorption was prepared separately from the measured object, the quantity of detected light to serve as a reference was measured using it, and the desired absorption coefficient distribution inside the scattering medium was obtained using the spread function in that state. For imaging with such methods, it was necessary to assume a phantom model (physical model) or a simulation model made so as to have a shape identical or similar to the measured object and a known internal property, and to use data obtained from such a model as a reference value in the imaging calculation. Therefore, these conventional methods were not able to avoid errors caused by the difference between the actual measured object and the physical or simulation model, individual differences between measured objects, and so on, and it was very difficult to apply them, especially, to measured objects having complex structure, such as a living body.

On the other hand, a method for obtaining a spatial distribution of the concentration of an absorptive substance without using a phantom is the method described in the bulletin of Japanese Laid-open Patent Application No. 8-29329.
The method described in the same bulletin, however, needed to use light having a plurality of wavelengths even in the case of only one absorptive constituent in the measured object, and the spatial distribution of the concentration of the absorptive substance was obtained under the assumption that the mean optical pathlength distribution and the attenuated light quantity (the quantity of light attenuated due to the influence of scattering or the like) were constant among these wavelengths. In addition, this method assumed an imaginary subject without absorptive substance and obtained the spatial distribution of concentration using the mean optical pathlength in the imaginary subject, but it did not take the change of optical pathlength due to absorption into consideration. Therefore, the method described in the above bulletin was not yet satisfactory as to the reliability of the internal property distribution obtained.

The present invention has been accomplished in view of the above conventional problems, and an object of the invention is to provide a method and apparatus that can obtain the reference value directly from measured values of the measured object itself, without obtaining the reference value from the previously required physical model or simulation model and without using light having plural wavelengths for a single constituent in the measured object, thus making it possible to measure an internal property distribution in the measured object based on the reference value with high reliability, i.e., with high accuracy.
The present inventors conducted research eagerly to achieve the above object and found that the above problems were solved by using, as a reference value for obtaining the internal property distribution, a mean value of plural measured-values obtained by a plurality of combinations of light incidence position and light detection position located on the surface of the measured object and in the positional relation being relatively identical with respect to a point in the object (for example, the center of the object), thus coming to attain the present invention. A measuring method of internal property distribution according to the present invention is a method comprising: a step of making measurement light incident from a plurality of light incidence positions on a surface of a measured object successively into the object; a step of detecting the measurement light having passed through the object successively or simultaneously at at least one light detection position out of a plurality of light detection positions on the surface of said object and in a predetermined positional relation with respect to a light incidence position at which the measurement light to be measured was incident; a step of obtaining a measured value of a predetermined parameter of said measurement light, based on each measurement light detected at each light detection position; a step of extracting a plurality of said measured values obtained by a plurality of combinations of said light incidence position and said light detection position said positional relation of which is relatively identical and calculating a mean value of the measured values to obtain a reference value in the positional relation; and a step of calculating a change amount of a predetermined internal property in each region of said object divided into a plurality of regions, using said plurality of measured values obtained by said plurality of combinations, and said reference value, thereby obtaining an internal property 
change amount distribution in the object. A measuring apparatus of internal property distribution according to the present invention is an apparatus comprising: light incidence means for making measurement light incident from a plurality of light incidence positions on a surface of a measured object successively into the object; light detection means for detecting the measurement light having passed through the object successively or simultaneously at at least one light detection position out of a plurality of light detection positions on the surface of the object and in a predetermined positional relation with respect to a light incidence position at which the measurement light to be measured was incident; measured value acquiring means for obtaining a measured value of a predetermined parameter of the measurement light, based on each measurement light detected at each light detection position; reference value calculating means for extracting a plurality of said measured values obtained by a plurality of combinations of said light incidence position and said light detection position said positional relation of which is relatively identical and calculating a mean value of the measured values to obtain a reference value in the positional relation; and internal property change amount calculating means for calculating a change amount of a predetermined internal property in each region of said object divided into a plurality of regions, using said plurality of measured values obtained by said plurality of combinations, and said reference value, and thereby obtaining an internal property change amount distribution in the object. 
In the method and apparatus of the present invention, the mean value of plural measured values obtained by the plurality of combinations of light incidence position and light detection position located on the surface of the measured object and in the positional relation relatively identical with respect to a point in the object (for example, the center of the object) is used as a reference value for obtaining the internal property distribution. Specifically, a change amount (difference) of an internal property in each region of the object divided into the plural regions is obtained by solving the equation described hereinafter, using the above reference value and each measured value. As described, in the present invention, the reference value is obtained from the mean value of measured values obtained in actual measurement, and the change amount of the internal property is calculated based on this reference value. Therefore, since the present invention does not use a reference value preliminarily obtained from a physical model or simulation model, there is no room for errors caused by individual differences between measured objects, differences of condition between the actual measured object and the physical or simulation model, and so on. Further, the present invention eliminates the work of preliminarily obtaining the reference value with the physical model or the like, thus decreasing the measurement time. Obtaining the reference value as the mean of actually measured values means, in terms of FIG. 1 for example, that each value of A, B, and C can be obtained if its difference from the mean value is known, without knowing the values from 0. The operation principle of the present invention is to obtain a difference or an absolute value of an internal property based on this principle.
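A minimal numeric sketch of this reference-value idea (all angles and values below are invented for illustration): measurements whose source-detector geometry is relatively identical with respect to the object's centre are grouped, the mean of each group becomes the reference, and each measurement is then expressed as reference plus its own change amount, the FIG. 1 picture:

```python
# Hypothetical measured light quantities, keyed by (incidence, detection)
# angles on the object's surface, measured from its centre.
measured = {
    (0, 180): 1.30, (90, 270): 1.10, (180, 0): 1.25, (270, 90): 1.15,  # opposite pairs
    (0, 90): 2.05, (90, 180): 1.95, (180, 270): 2.10, (270, 0): 1.90,  # 90-degree pairs
}

def relation(src, det):
    """Angle between the incidence and detection directions; combinations
    with the same value are in a 'relatively identical' positional relation."""
    return (det - src) % 360

# One reference value per positional relation: the mean of its measurements.
groups = {}
for (src, det), value in measured.items():
    groups.setdefault(relation(src, det), []).append(value)
reference = {rel: sum(vs) / len(vs) for rel, vs in groups.items()}

# Change amounts are deviations from the reference; each absolute value
# comes back as reference + change, with no phantom or simulation model.
changes = {pos: v - reference[relation(*pos)] for pos, v in measured.items()}
```

No measurement is ever compared against a value "from 0"; only differences from the group mean appear, which is exactly why the physical-model reference becomes unnecessary.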
Since the present invention permits the internal property distribution in the measured object to be obtained without using the light having a plurality of wavelengths for one constituent in the measured object, it is free from occurrence of errors resulting from the assumption that the mean optical pathlength distribution and attenuated light quantity (the quantity of light attenuated due to influence of scattering or the like) are constant among the plurality of wavelengths, thus enhancing the measurement accuracy. Further, the present invention can also prevent occurrence of errors similarly in the case in which multiple constituents in the measured object are analyzed using the light having a plurality of wavelengths. Namely, it is because in the present invention the scattering coefficient is obtained for each wavelength in correspondence to a change of wavelength dependence of scattering coefficient which exists in real objects. The positional relation between light incidence position and light detection position according to the present invention is defined with respect to the reference located, for example, at the center of the measured object, i.e., by an angle between a line connecting the center with the light incidence position and a line connecting the center with the light detection position, and "the positional relation is relatively identical" means, for example, that the angles defined above are identical. The measured values according to the present invention are preferably measured values of a predetermined parameter related to scattering and absorption of the measurement light inside the measured object, and preferable measured values are those of a parameter such as the light quantity of measurement light, a phase difference (or a phase delay), the amplitude, or time-resolved waveforms. 
Internal properties that can be measured by the method and apparatus of the present invention include the absorption coefficient, reduced scattering coefficient (or equivalent scattering coefficient), and refractive index, among which either one property can be obtained singly or a plurality of properties can be obtained simultaneously or successively. First described is the case wherein the internal property to be measured by the method and apparatus of the present invention is the absorption coefficient. In this case, the method of the present invention preferably further comprises a step of obtaining a mean absorption coefficient and a mean reduced scattering coefficient of the object (preferably, obtaining them based on said reference value), and a step of selecting a spread function (a spread function for absorption coefficient) corresponding to said mean absorption coefficient and mean reduced scattering coefficient, whereby in said step of obtaining the internal property change amount distribution the change amount of the absorption coefficient in said each region can be calculated using said plurality of measured values, said reference value, and said spread function. Also, the apparatus of the present invention preferably further comprises mean absorption and scattering coefficient detecting means for obtaining a mean absorption coefficient and a mean reduced scattering coefficient of said object (preferably, obtaining them based on said reference value), and spread function selecting means for selecting a spread function (a spread function for absorption coefficient) corresponding to said mean absorption coefficient and mean reduced scattering coefficient, whereby in said internal property change amount calculating means the change amount of the absorption coefficient in said each region can be calculated using said plurality of measured values, said reference value, and said spread function.
By such method and apparatus of the present invention, the change amount (difference) of the absorption coefficient in said each region is obtained based on the spread function selected in correspondence to the absorption coefficient and reduced scattering coefficient as mean values measured for the measured object with nonuniform inside. Therefore, when compared with the calculation with the absorption coefficient and/or reduced scattering coefficient assumed to be zero, the method and apparatus of the present invention can fully prevent occurrence of the errors resulting from the change of effective optical pathlength caused thereby, thus enhancing the measurement accuracy. Further, the above method of the present invention may further comprise a step of calculating an absolute value of the absorption coefficient in said each region, using said change amount of the absorption coefficient and said mean absorption coefficient, and thereby obtaining an absorption coefficient absolute value distribution in said object and/or a step of calculating a concentration of an absorptive constituent in said each region, using said absolute value of the absorption coefficient, and thereby obtaining an absorptive constituent concentration distribution in said object. Also, the above apparatus of the present invention may further comprise absorption coefficient absolute value calculating means for calculating an absolute value of the absorption coefficient in said each region, using said change amount of the absorption coefficient and said mean absorption coefficient, and thereby obtaining an absorption coefficient absolute value distribution in said object and/or absorptive constituent concentration calculating means for calculating a concentration of the absorptive constituent in said each region, using said absolute value of the absorption coefficient, and thereby obtaining an absorptive constituent concentration distribution in said object. 
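Under a linearized perturbation model (an assumption on my part; the patent itself only says that a spread function relates each voxel's contribution to each measurement), the change-amount calculation reduces to a linear inverse problem. All sizes and values below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# W[k, j]: spread function, the weight of voxel j in measurement k; in the
# method above it would be selected to match the object's mean absorption
# and mean reduced scattering coefficients.
W = rng.random((6, 4))            # 6 source-detector pairs, 4 voxels

true_delta_mua = np.array([0.02, -0.01, 0.0, 0.03])  # per-voxel change
deviation = W @ true_delta_mua    # each measurement's deviation from the
                                  # mean-based reference value

# Recover the change-amount distribution by least squares ...
delta_mua, *_ = np.linalg.lstsq(W, deviation, rcond=None)

# ... and turn it into an absolute-value distribution with the mean:
mean_mua = 0.10
mua_absolute = mean_mua + delta_mua
```

Because the synthetic system is noise-free and overdetermined, the least-squares solve recovers the per-voxel changes exactly; real data would of course be noisy and regularization would matter.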
By such method and apparatus of the present invention, the absolute value of the absorption coefficient in each region is obtained from the change amount (difference) of the absorption coefficient in said each region, based on the absorption coefficient as a mean value measured for the measured object with nonuniform inside. In this way the method and apparatus of the present invention obtain the absolute value of the absorption coefficient in each region without using a reference value obtained from a phantom having a uniform absorption coefficient and the same contour as the measured object. Once the absolute value of the absorption coefficient in each region is obtained, the concentration of the absorptive constituent in each region is obtained using the known molar absorption coefficient of the absorptive constituent or the like. Since the error of the change amount distribution of the absorption coefficient obtained by the present invention is far smaller than that of the conventional method, the accuracy of the absolute value distribution of the absorption coefficient and of the concentration distribution of the absorptive constituent obtained based thereon becomes high.

When the above method of the present invention is applied to an object containing at least two absorptive constituents, the measurement light incident into said object in the light incidence step preferably has at least two wavelengths at which absorption coefficients for the absorptive constituents are different from each other.
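Once absolute absorption coefficients are available at two such wavelengths, the two concentrations follow from a 2x2 linear system, assuming the constituents' absorptions add linearly (a Beer-Lambert-style assumption of mine); the extinction coefficients below are invented, not the hemoglobin/myoglobin values of FIG. 6:

```python
import numpy as np

# Extinction (molar absorption) coefficients: rows are wavelengths,
# columns are the two absorptive constituents. Invented numbers.
E = np.array([[0.8, 0.2],     # wavelength 1
              [0.3, 0.9]])    # wavelength 2

# Absolute absorption coefficients obtained for one voxel at the two
# wavelengths; here synthesised from known concentrations for checking.
true_c = np.array([0.5, 1.0])
mua = E @ true_c              # mua at each wavelength

# Concentration of each constituent in that voxel:
c = np.linalg.solve(E, mua)
```

The wavelengths must be chosen so that the extinction matrix is well conditioned; if the two constituents absorb almost identically at both wavelengths, the solve amplifies measurement noise.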
In this case, it becomes possible that in said light detection step the measurement light having said at least two wavelengths is detected respectively; that in said step of obtaining the measured values said measured values are obtained for each of said measurement light having the at least two wavelengths; that in said step of obtaining the reference value said mean value is calculated for each of said measurement light having the at least two wavelengths; that in said step of obtaining the internal property change amount distribution the change amount of said absorption coefficient is calculated for each of said measurement light having the at least two wavelengths; that in said step of obtaining the absorption coefficient absolute value distribution said absolute value of the absorption coefficient is calculated for each of said measurement light having the at least two wavelengths; and that in said step of obtaining the absorptive constituent concentration distribution said concentration of the absorptive component is calculated for each of said measurement light having the at least two wavelengths, thereby obtaining a concentration distribution of said each absorptive constituent in said object at high accuracy. When the above apparatus of the present invention is used to measure the object containing at least two absorptive constituents, the measurement light incident into said object in said light incidence means preferably has at least two wavelengths at which absorption coefficients for the absorptive constituents are different from each other. 
In this case, it becomes possible that in said light detection means said measurement light having the at least two wavelengths is detected respectively; that in said measured value acquiring means said measured values are obtained for each of said measurement light having the at least two wavelengths; that in said reference value calculating means said mean value is calculated for each of said measurement light having the at least two wavelengths; that in said internal property change amount calculating means said change amount of the absorption coefficient is calculated for each of said measurement light having the at least two wavelengths; that in said absorption coefficient absolute value calculating means said absolute value of the absorption coefficient is calculated for each of said measurement light having the at least two wavelengths; and that in said absorptive constituent concentration calculating means said concentration of the absorptive constituent is calculated for each of said measurement light having the at least two wavelengths, thereby obtaining a concentration distribution of said each absorptive constituent in said object at high accuracy. Next described is the case wherein the internal property to be measured by the method and apparatus of the present invention is the reduced scattering coefficient. 
In this case, the method of the present invention preferably further comprises a step of obtaining a mean absorption coefficient and a mean reduced scattering coefficient of said object (preferably, obtaining them based on said reference value); and a step of selecting a spread function (a spread function for reduced scattering coefficient) corresponding to said mean absorption coefficient and mean reduced scattering coefficient, whereby in said step of obtaining the internal property change amount distribution, a change amount of the reduced scattering coefficient in said each region can be calculated using said plurality of measured values, said reference value, and said spread function. Also, the apparatus of the present invention preferably further comprises mean absorption and scattering coefficient detecting means for obtaining a mean absorption coefficient and a mean reduced scattering coefficient of said object (preferably, obtaining them based on said reference value); and spread function selecting means for selecting a spread function (a spread function for reduced scattering coefficient) corresponding to said mean absorption coefficient and mean reduced scattering coefficient; whereby in said internal property change amount calculating means, a change amount of the reduced scattering coefficient in said each region can be calculated using said plurality of measured values, said reference value, and said spread function. By such method and apparatus of the present invention, the change amount (difference) of reduced scattering coefficient in said each region is obtained based on the spread function selected in correspondence to the absorption coefficient and reduced scattering coefficient as mean values measured for the measured object with nonuniform inside.
Accordingly, when compared with the case of calculation based on the assumption that the absorption coefficient and/or reduced scattering coefficient is zero, the method and apparatus of the present invention can fully prevent occurrence of the errors based on the change of effective optical pathlength caused thereby, thus enhancing the measurement accuracy. Further, the above method of the present invention may further comprise a step of calculating an absolute value of the reduced scattering coefficient in said each region, using the change amount of said reduced scattering coefficient and said mean reduced scattering coefficient, and thereby obtaining a reduced scattering coefficient absolute value distribution in said object. Also, the above apparatus of the present invention may further comprise reduced scattering coefficient absolute value calculating means for calculating an absolute value of the reduced scattering coefficient in said each region, using said change amount of the reduced scattering coefficient and said mean reduced scattering coefficient, and thereby obtaining a reduced scattering coefficient absolute value distribution in said object. By such method and apparatus of the present invention, the absolute value of reduced scattering coefficient in each region is obtained from the change amount (difference) of reduced scattering coefficient in said each region, based on the reduced scattering coefficient as a mean value measured for the measured object with nonuniform inside. In this way the method and apparatus of the present invention obtain the absolute value of reduced scattering coefficient in each region without using the reference value obtained from the phantom having the uniform reduced scattering coefficient and having the same contour as the measured object.
When compared with the case by the conventional methods, the error of change amount distribution of reduced scattering coefficient obtained by the present invention becomes extremely smaller, thus enhancing the accuracy of absolute value distribution of reduced scattering coefficient obtained based thereon. Next described is the case wherein the internal property to be measured by the method and apparatus of the present invention is the refractive index. In this case the method of the present invention preferably further comprises a step of obtaining a mean absorption coefficient, a mean reduced scattering coefficient, and a mean refractive index of said object (preferably, obtaining them based on said reference value); and a step of selecting a spread function (a spread function for refractive index) corresponding to said mean absorption coefficient, mean reduced scattering coefficient, and mean refractive index; whereby in said step of obtaining the internal property change amount distribution, a change amount of the refractive index in said each region can be calculated using said plurality of measured values, said reference value, and said spread function. Also, the apparatus of the present invention preferably further comprises mean absorption and scattering coefficient detecting means for obtaining a mean absorption coefficient, a mean reduced scattering coefficient, and a mean refractive index of said object (preferably, obtaining them based on said reference value); and spread function selecting means for selecting a spread function (a spread function for refractive index) corresponding to said mean absorption coefficient, mean reduced scattering coefficient, and mean refractive index; whereby in said internal property change amount calculating means, a change amount of the refractive index in said each region can be calculated using said plurality of measured values, said reference value, and said spread function. 
By such method and apparatus of the present invention, the change amount (difference) of refractive index in said each region is obtained based on the spread function selected in correspondence with the absorption coefficient, reduced scattering coefficient, and refractive index as mean values measured for the measured object with nonuniform inside. Accordingly, as compared with the case of calculation under the assumption that the absorption coefficient and/or reduced scattering coefficient is zero, such method and apparatus of the present invention can fully prevent occurrence of the errors based on the change of effective optical pathlength caused thereby, thus enhancing the measurement accuracy. Further, the above method of the present invention may further comprise a step of calculating an absolute value of the refractive index in said each region, using said change amount of the refractive index and said mean refractive index, and thereby obtaining a refractive index absolute value distribution in said object. Also, the above apparatus of the present invention may further comprise refractive index absolute value calculating means for calculating an absolute value of the refractive index in said each region, using said change amount of the refractive index and said mean refractive index, and thereby obtaining a refractive index absolute value distribution in said object. By such method and apparatus of the present invention, the absolute value of refractive index in each region is obtained from the change amount (difference) of refractive index in said each region, based on the refractive index as a mean value measured for the measured object with nonuniform inside. In this way the method and apparatus of the present invention obtain the absolute value of refractive index in each region without using the reference value obtained from the phantom having the uniform refractive index and having the same contour as the measured object. 
The error in the change amount distribution of the refractive index obtained by the present invention is extremely small compared with that of the conventional methods, thereby enhancing the accuracy of the absolute value distribution of the refractive index obtained based thereon.

The above-stated method of the present invention may further comprise a step of displaying an image indicating the distribution inside said object, based on said distribution obtained. Also, the above-stated apparatus of the present invention may further comprise image display means for displaying an image indicating the distribution inside said object, based on said distribution obtained. Such method and apparatus of the present invention can display the image of the internal property distribution obtained at high accuracy. The present invention will be more fully understood from the detailed description given hereinbelow and the accompanying drawings, which are given by way of illustration only and are not to be considered as limiting the present invention. Further scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will be apparent to those skilled in the art from this detailed description.

FIG. 1 is an explanatory drawing of the operation principle of the present invention. FIG. 2 is a schematic drawing to show a model of the scattering medium with uniform absorption. FIG. 3 is a schematic drawing to show a model of the scattering medium with nonuniform absorption. FIG. 4 is a schematic drawing to show an example of the internal property distribution measuring apparatus of the present invention. FIGS.
5A and 5B are a perspective view and a schematic view, respectively, to show an example of the light incidence fiber. FIG. 6 is a graph to show absorption spectra of hemoglobin and myoglobin. FIGS. 7A and 7B are schematic drawings to show an example of the light incidence means. FIG. 8 is a flowchart to show an example of the internal property distribution measuring method of the present invention. FIG. 9 is a flowchart to show another example of the internal property distribution measuring method of the present invention. FIG. 10 is a flowchart to show still another example of the internal property distribution measuring method of the present invention. FIG. 11 is a schematic drawing to show an example of arrangement of light incidence and/or light detection positions according to the present invention. FIG. 12 is a schematic drawing to show another example of arrangement of light incidence and/or light detection positions according to the present invention. FIG. 13 is a flowchart to show still another example of the internal property distribution measuring method according to the present invention. FIG. 14 is a flowchart to show still another example of the internal property distribution measuring method according to the present invention. FIG. 15 is a flowchart to show still another example of the internal property distribution measuring method according to the present invention. FIGS. 16A, 16B, 16C and 16D are schematic drawings each to show an example of the light incidence method into the scattering medium. FIG. 17 is a schematic drawing to show an example of the light detecting means. FIGS. 18A, 18B and 18C are schematic drawings each to show an example of the light detecting method. FIGS. 19A and 19B are schematic drawings each to show an example of the low-noise amplifying method of detection signal. FIGS. 20A and 20B are a perspective view and a top plan view, respectively, of the phantom used in the example. FIG. 
21 is an auxiliary drawing for explaining the relation between the light incidence positions and light detection positions in the example. FIGS. 22A and 22B are photographs to show a half-tone image displayed on the display as a result of reconstruction of the image by the conventional method. FIGS. 23A and 23B are photographs to show a half-tone image displayed on the display as a result of reconstruction of the image by the method of the present invention.

The preferred embodiments of the present invention will be described in detail with reference to the drawings. In the drawings identical or equivalent portions will be denoted by the same reference numerals. The imaging principle of light CT used in the present embodiment will be first described referring to FIG. 2 and FIG. 3. Light propagating as scattered should properly be handled in 3-dimensional coordinates, but the following discussion employs 2-dimensional coordinates for simplicity of description. First, let us divide the inside of a scattering medium into N voxels and consider the relation between the quantity of incident light and the quantity of emergent light (quantity of detected light) for the scattering medium under a condition that an absorption coefficient exists. FIG. 2 shows a schematic diagram of the inside of the scattering medium with uniform reduced scattering coefficient μ'_s and absorption coefficient μ_a (N=25). Equation (1) below holds, where I_0 is the quantity of incident light, I_d0 the quantity of detected light, W_1 to W_N the effective optical pathlengths in the respective voxels when the reduced scattering coefficient μ'_s and absorption coefficient μ_a inside the scattering medium are uniform, and D_sr a damping factor indicating the rate of light emerging from the scattering medium relative to the incident light because of scattering, reflection, and the like.

I_d0 = D_sr · I_0 · exp{−μ_a(W_1 + W_2 + … + W_N)}   (1)

Next, FIG.
3 shows a schematic diagram of the inside of another scattering medium that is the same as the one shown in FIG. 2 except that a medium having the same reduced scattering coefficient but a different absorption coefficient is put in some voxels. The relation between the absorption coefficient μ_ai (i = 1, 2, …, N) of each medium used in the scattering medium shown in FIG. 3 and the absorption coefficient μ_a of the medium used in the scattering medium shown in FIG. 2 is as shown in Eq. (2) below.

μ_ai = μ_a + Δμ_ai (i = 1, 2, …, N)   (2)

Letting I_0 be the quantity of incident light at this time and I_d1 be the quantity of detected light, and supposing that the damping factor D_sr indicating the rate of light emerging from the scattering medium relative to the incident light because of scattering, reflection, and the like is equal to that when the absorption coefficient was uniform (FIG. 2), the quantity of detected light I_d1 can be expressed by Eq. (3) below.

I_d1 = D_sr · I_0 · exp{−[W_1(μ_a + Δμ_a1) + W_2(μ_a + Δμ_a2) + … + W_N(μ_a + Δμ_aN)]} = I_d0 · exp{−(W_1Δμ_a1 + W_2Δμ_a2 + … + W_NΔμ_aN)}   (3)

Accordingly, Eq. (4) below is derived from Eq. (3).

ln(I_d0/I_d1) = W_1Δμ_a1 + W_2Δμ_a2 + … + W_NΔμ_aN   (4)

In this way, use of the reference light quantity I_d0 permits a distribution of the absorption coefficient μ_a inside the scattering medium to be obtained from the relation between the absorption coefficient μ_a desired and the detected light quantity I_d1 measurable in an actual experiment system, once the effective optical pathlengths W_j are determined. Eq. (4) indicates the relation that holds for one pair of light incidence position and light detection position.
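The forward relation of Eqs. (1) through (4) can be sketched numerically as follows; the voxel count, pathlengths, coefficients, and light quantities used here are hypothetical values chosen only for illustration, not values from the disclosure.

```python
import numpy as np

# Hypothetical 5x5 voxel model (N = 25), illustrating Eqs. (1)-(4).
# The pathlengths W and all coefficients below are assumed values.
rng = np.random.default_rng(0)
N = 25
W = rng.uniform(0.5, 2.0, N)           # effective optical pathlengths
mu_a = 0.1                             # uniform background absorption
d_mu = np.zeros(N)
d_mu[12] = 0.05                        # absorption change in one voxel
D_sr, I0 = 0.8, 1.0                    # damping factor, incident light

# Eq. (1): reference detected light for uniform absorption
I_d0 = D_sr * I0 * np.exp(-mu_a * W.sum())
# Eq. (3): detected light with perturbed absorption
I_d1 = I_d0 * np.exp(-(W * d_mu).sum())
# Eq. (4): ln(I_d0/I_d1) equals the pathlength-weighted sum of changes
assert np.isclose(np.log(I_d0 / I_d1), (W * d_mu).sum())
```

The final assertion is exactly the content of Eq. (4): the logarithmic ratio of reference to measured light quantity is linear in the absorption changes.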
Accordingly, for example, for obtaining N absorption coefficients (unknowns), N combinations of light incidence position and light detection position are selected and the simultaneous equations of the N instances of Eq. (4) holding for the respective combinations are solved, thus obtaining the N absorption coefficients. Namely, when the simultaneous equations of the N instances of Eq. (4) that hold for the N combinations of light incidence position and light detection position are expressed in matrix form, Eq. (5) below is yielded.

[ΔI] = [W][Δμ_a]   (5)

Here, ΔI represents (ln I_d0 − ln I_d1) and W represents a spread function indicating the distribution of effective optical pathlength in each voxel. Letting X be the number of light incidence positions M (M_1 to M_X), x be the number of light detection positions m (m_1 to m_x), ΔI_Mm be the change amount in light quantity for light incidence position M and light detection position m, and W_Mm be the spread function of each voxel for light incidence position M and light detection position m, [ΔI_Mm] is a matrix of (X·x)×1, [W_Mm] a matrix of (X·x)×N, and [Δμ_an] a matrix of N×1. Therefore, the change amounts Δμ_an of the absorption coefficient can be obtained by solving the simultaneous equations of Eq. (6) below. The simultaneous equations of Eq. (6) are preferably solved by selecting values of X and x satisfying X·x = N.

[Δμ_an] = [W_Mm]^−1 [ΔI_Mm]   (6)

For quantifying the absorption coefficients inside the scattering medium by such an image reconstructing method, a reference state as shown in FIG. 2 is basically necessary; in the above case the absorption coefficient of each voxel was obtained from Eq. (2) and Eq. (4), because the uniform state of the absorption coefficient was assumed to be the reference.
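The inversion of Eqs. (5) and (6) can be sketched as below; the spread-function matrix W_Mm and the change amounts are hypothetical values for illustration, with X·x = N as the text prefers.

```python
import numpy as np

# Sketch of Eqs. (5)-(6): recover the N voxel absorption changes
# from N measurements. W_Mm here is a hypothetical (X*x) x N
# spread-function matrix with X*x = N = 9 (a 3x3 voxel grid).
rng = np.random.default_rng(1)
N = 9
W_Mm = rng.uniform(0.1, 1.0, (N, N))    # assumed spread functions
d_mu_true = rng.uniform(0.0, 0.05, N)   # assumed absorption changes
dI = W_Mm @ d_mu_true                   # Eq. (5): [dI] = [W][d_mu]

d_mu = np.linalg.solve(W_Mm, dI)        # Eq. (6): [d_mu] = [W]^-1 [dI]
assert np.allclose(d_mu, d_mu_true)
```

With a well-conditioned square spread-function matrix, solving the linear system recovers the assumed change amounts to numerical precision.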
However, such an imaging method requires only that the value of the internal absorption coefficient and the light quantity at each light detection position at that time be known in advance, and does not force any specific restriction on the reference state actually used. Specifically, for example, when an internal absorption coefficient is obtained with the reference taken at the value of the internal absorption coefficient under a certain condition and the light quantity at each light detection position at that time, a value of the absorption coefficient is obtained in the form of a difference from the reference value. Conventionally, such a reference value of the internal absorption coefficient and the light quantity at each light detection position at that time were obtained from a phantom model or a simulation model differing from the scattering medium as a measured object only in internal absorption coefficient. The present invention, however, employs a mean value of plural measured values obtained with a plurality of combinations of light incidence position and light detection position located on the surface of the measured object and in relatively identical positional relation with respect to a point in the object (the center of the object, for example), as a reference value for obtaining an internal property distribution. A method for producing the effective optical pathlength of each voxel is described in Japanese Patent Application No. 8-6619 of the present inventors entitled "Optical CT apparatus and image reconstructing method by optical CT" etc. In the present embodiment, according to this producing method, a distribution of effective optical pathlength (i.e., a spread function) of each voxel in a certain relation of light incidence position and light detection position is preliminarily prepared based on a mean value of the absorption coefficient, a mean value of the reduced scattering coefficient, and the like.
In this way, the present embodiment permits the reference value to be obtained directly from measured values of the measured object, without obtaining the reference value from the physical model or simulation model conventionally required, and permits the distribution of internal properties of the measured object to be measured at high accuracy based on the reference value.

The foregoing described the embodiment using CW measurement, but time-resolved measurement can also be applied in the present invention. Specifically, Eq. (3) described above expresses the detected light as an integral value of the detected light quantity received by the detector between times 0 and t (s), but the same relational expression also holds for time-resolved waveforms obtained by the detector when the light source emits pulsed light. Rewriting Eq. (3) for a certain time period t_1 to t_2, Eq. (7) below is obtained.

[I_d1]_(t1−t2) = [I_d0]_(t1−t2) · exp{−([W_1]_(t1−t2)Δμ_a1 + [W_2]_(t1−t2)Δμ_a2 + … + [W_N]_(t1−t2)Δμ_aN)}   (7)

Here, [I_d0]_(t1−t2) and [I_d1]_(t1−t2) represent the light quantities of the time-resolved waveforms of each detected light between times t_1 and t_2, and [W_j]_(t1−t2) represents the spread function between times t_1 and t_2 (where j indicates the number of each voxel). Also, 0 ≤ t_1 ≤ t_2. Therefore, Eq. (8) below is derived from Eq. (7).

ln([I_d0]_(t1−t2)/[I_d1]_(t1−t2)) = [W_1]_(t1−t2)Δμ_a1 + [W_2]_(t1−t2)Δμ_a2 + … + [W_N]_(t1−t2)Δμ_aN   (8)

As described, the method using time-resolved measurement permits the absorption coefficient of each voxel to be obtained by increasing the number of equations with various sections of the measurement period and solving N equations, the same number as the number of voxels.
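The way each time window of Eqs. (7) and (8) supplies one additional linear equation can be sketched as follows; the time-gated spread functions and light quantities below are assumed values for illustration only.

```python
import numpy as np

# Sketch of Eqs. (7)-(8): each time window t1-t2 yields its own
# gated spread functions [W_j]_(t1-t2) and therefore one more
# linear equation in the same unknowns d_mu. Values are assumed.
W_gate1 = np.array([1.0, 0.5, 0.2])     # gated pathlengths, window 1
W_gate2 = np.array([0.4, 1.2, 0.9])     # gated pathlengths, window 2
d_mu = np.array([0.02, 0.01, 0.03])     # assumed absorption changes

I0_g1, I0_g2 = 1.0, 0.7                 # gated reference light quantities
# Eq. (7) for each window
I1_g1 = I0_g1 * np.exp(-W_gate1 @ d_mu)
I1_g2 = I0_g2 * np.exp(-W_gate2 @ d_mu)

# Eq. (8): one linear relation per window, with shared unknowns
assert np.isclose(np.log(I0_g1 / I1_g1), W_gate1 @ d_mu)
assert np.isclose(np.log(I0_g2 / I1_g2), W_gate2 @ d_mu)
```

Stacking such relations over enough windows gives the N equations needed to solve for the N voxel unknowns, as the text describes.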
The above embodiment was described with the internal property to be measured being the absorption coefficient, but the present invention can be applied to other internal properties including the reduced scattering coefficient (or equivalent scattering coefficient) and the refractive index. Specifically, the light received after passing through the inside of the object is affected not only by the absorption coefficient and reduced scattering coefficient of the inside of the object, but by all the internal properties which the object has, and these act on the received light linearly and independently. For internal properties affecting each other, independence can be assured by regarding them as one internal property. From these relations, the values of all the internal properties which the object has are expressed by equations using the received light and functions indicating the contributions of the internal properties in each voxel to the received light (the spread functions), and use of these permits the distribution of the reduced scattering coefficient or the absorption coefficient to be obtained as described, for example, in "Forward and Inverse Calculations for 3-D Frequency-Domain Diffuse Optical Tomography" (Brian W. Pogue et al., SPIE vol. 2389, p. 328-p. 338). In such cases, the method of the present invention can also be applied as a method of deriving the reference value of a parameter such as the amplitude or the phase of the detected light. Accordingly, when the reference value deriving method according to the present invention is applied to the relational expressions using the received light and the functions indicating contributions to the received light (the spread functions), quantification of internal properties such as the absorption coefficient, the reduced scattering coefficient, and the refractive index becomes possible. Examples of such relational expressions include the equations below.
Namely, equations applicable to acquisition of the absorption coefficient and reduced scattering coefficient are Eq. (4') and Eq. (8') below, which are modifications of the above Eq. (4) and Eq. (8). ##EQU3## Further, equations applicable to acquisition of the absorption coefficient, reduced scattering coefficient, and refractive index are Eq. (4") and Eq. (8") below, which are modifications of the above Eq. (4) and Eq. (8). ##EQU4## Even in the case where only one internal property is obtained, the measurement accuracy tends to be enhanced with use of spread functions corresponding to all internal properties affecting the detected light. Accordingly, when the measured object has the absorption coefficient, reduced scattering coefficient, refractive index, and so on as internal properties, like a living body, there are cases wherein use of at least the spread functions corresponding to the mean value of the absorption coefficient and the mean value of the reduced scattering coefficient (in the case of time-resolved measurement, the mean values of the absorption coefficient, reduced scattering coefficient, and refractive index) is preferable even when imaging only the distribution of the absorption coefficient.

Next described is an internal property distribution measuring apparatus of the present invention. FIG. 4 shows a schematic diagram of an embodiment of the apparatus of the present invention. The apparatus shown in FIG. 4 is provided with twelve fiber-optic holders 1 to 12 (which will also be referred to collectively as an "optical fiber holder group", if necessary), and the fiber-optic holders 1 to 12 are arranged at equal intervals around a cross section of scattering medium SM (in the apparatus shown in FIG. 4, they are placed on lines each extending radially at intervals of 30 degrees from the center of scattering medium SM) and are denoted by numbers 1 to 12 in the clockwise direction.
Each fiber-optic holder 1 to 12 has a light incidence fiber 1a to 12a and a light detection fiber 1b to 12b. The light incidence fiber 1a to 12a and light detection fiber 1b to 12b may be constructed in such structure that they are bundled in parallel as shown in FIG. 4, but they may also be formed in such bundled structure that a plurality of light detection fibers 1b (bundle fibers) surround a light incidence fiber 1a as shown in FIG. 5A or in such structure that a light incidence fiber 1a and a light detection fiber 1b are coupled by an optical coupler 1c in the fiber-optic holder as shown in FIG. 5B. Employment of the structures as shown in FIG. 5A and FIG. 5B will result in a tendency to reduce errors, because only one fiber end face is in contact with the periphery of scattering medium SM whereby positional deviation can be suppressed between the end of light incidence fiber and the end of light detection fiber, as compared with the cases wherein the two fibers are arranged vertically in two steps or horizontally in two columns. A light source 30 is optically connected through wavelength selector 20 to the light incidence fibers 1a to 12a. Then, light emitted from the light source 30 is subjected to wavelength selection in the wavelength selector 20 to be incident through the optical fiber holder 1 to 12 to the surface of scattering medium SM being a measured object. The light source 30 may be selected from various sources including light emitting diodes, laser diodes, He-Ne laser, and so on. The light source 30 may be one for generating pulsed light or rectangular-wave light, or modulated light thereof. The light source 30 used in the present embodiment may be one for emitting light (measured light) of a single wavelength, but it is preferably one capable of emitting light (measured light) of two or more wavelengths. The wavelength of the light used for measurement is properly selected depending upon a measured object. 
In general, in the case of living bodies, it is preferable to use light of 700 nm or more, from the absorption characteristics of hemoglobin and the like, particularly preferably visible or near-infrared light. For example, when the object is oxygenated hemoglobin and deoxygenated hemoglobin, because their absorption coefficients are different from each other as shown in FIG. 6, use of properly selected wavelengths permits them to be measured separately. A photodetector 40 is optically connected to the light detection fibers 1b to 12b. Then, light (measurement light) transmitted as scattered in the scattering medium SM is guided through the light detection fibers 1b to 12b of the fiber-optic holders 1 to 12 to the photodetector 40, and the photodetector 40 converts the received light signal to an amplified detection signal (electric signal) and outputs the detection signal corresponding to each fiber. The photodetector 40 may be selected from all types of photodetectors including photomultiplier tubes, phototubes, photodiodes, avalanche photodiodes, PIN photodiodes, and so on. The point in selecting the photodetector 40 is that the detector must have spectral sensitivity characteristics capable of detecting light of the wavelength of the measurement light used. When the light signals are weak, it is preferable to use a photodetector with high sensitivity or high gain. It is desirable to construct the portions other than the light-receiving surfaces of the light detection fibers 1b to 12b and photodetector 40 so as to absorb or shield light. In the case wherein the light having propagated diffusely inside the scattering medium SM includes light of plural wavelengths, a wavelength selection filter (not illustrated) may be placed, if necessary, between the photodetector 40 and the scattering medium SM.
A control unit 50 is connected to the light source 30 and to the photodetector 40, and selection of the fiber-optic holder 1 to 12 used for light incidence or light detection is carried out by the control unit 50. Namely, the control unit 50 performs control such that the measurement light is incident into the scattering medium SM at constant time intervals successively (for example, 1a→2a→3a→ . . . →12a) from the light incidence fibers, and control such that in synchronism therewith the measurement light is detected from the light detection fibers located in the predetermined positional relation with respect to the light incidence fiber through which the measurement light was incident. In the present embodiment, the measurement light is detected from all the light detection fibers at locations different from the light incidence fiber through which the measurement light was incident (for example, from the light detection fibers 2b to 12b in the case of the light incidence fiber 1a), but the combination is not particularly limited to such a combination. When measurement light having a plurality of wavelengths is used, the wavelength of the measurement light to be launched is also controlled by the control unit 50. Specific techniques include a technique of launching light of different wavelengths in time division and a technique of using light simultaneously including components of different wavelengths, as described below. Specific wavelength selecting means include a light beam switching device using a mirror, a wavelength switching device using a filter, a light switching device using an optical switch, and so on (FIG. 7A). The above light incidence fibers 1a to 12a, wavelength selector 20, light source 30, and control unit 50 compose the light incidence means according to the present invention, while the above light detection fibers 1b to 12b, photodetector 40, and control unit 50 compose the light detection means according to the present invention.
A processing unit (for example, a CPU) 60 is electrically connected to the control unit 50, and a memory unit (for example, a hard disk or a flexible disk) 70 and a display unit (for example, a display or a printer) 80 are electrically connected to the processing unit 60. A detection signal output from the photodetector 40 is guided through the control unit 50 to the processing unit 60. The above processing unit 60 and memory unit 70 compose the measured value acquiring means, reference value calculating means, internal property change amount calculating means, mean absorption and scattering coefficient detecting means, spread function selecting means, absorption coefficient absolute value calculating means, absorptive constituent concentration calculating means, reduced scattering coefficient absolute value calculating means, and refractive index absolute value calculating means according to the present invention, while the above display unit 80 composes the image display means. Such means according to the present invention will be described in detail based on the flowchart of an embodiment of the method of the present invention shown in FIG. 8.

(1) In the method shown in FIG. 8, first, measurement data (I_d1{M, m}) by optical CT is acquired as described below (S100). Here, M represents the number of a light incidence fiber and m the number of a light detection fiber. Specifically, the measurement light is incident from the light incidence fibers 1a to 12a successively into the scattering medium SM, and each measurement light having transmitted as scattered in the scattering medium SM is detected successively or simultaneously from all the light detection fibers located at locations different from the light incidence fiber through which the measurement light was incident (for example, from the light detection fibers 2b to 12b in the case of the light incidence fiber 1a).
For simultaneously detecting the respective measurement light beams, photodetectors 40 need to be prepared in the number corresponding to the number of light detection fibers. Then, the photodetector 40 outputs a detection signal based on each measurement light detected through each light detection fiber. Each of these detection signals is processed in the processing unit 60 to be converted into a measured value proportional to the detected light quantity of the measurement light, and the measured values obtained are stored temporarily in the memory unit 70. Specifically, the processing unit 60 performs an integration arithmetic in the time domain on the detection signals, utilizing a signal synchronous with generation of light from the light source 30, and thus obtains measured values proportional to the quantities of detected light. However, the synchronous signal can be omitted if pulsed light or the like is utilized. An arithmetic process of this type can be executed at high speed by a microcomputer or the like incorporated in the processing means. Also, the processing unit 60 may be arranged to correct the measured values utilizing averaging, filtering, least square fitting, or the like.

(2) Next, the processing unit 60 extracts a plurality of measured values obtained by plural combinations of light incidence fibers and light detection fibers whose positional relation is relatively identical, and calculates a reference value (I_d0{M, m}) being a mean value of those measured values (S110). Namely, in order to obtain the reference value for gaining a change amount of the absorption coefficient or the like, the processing unit 60 obtains the mean value of the measured values for every set of light incidence and light detection positions whose positional relation of light incidence and light detection is relatively identical. Specifically describing it based on FIG.
4, for example, in the case of the positional relation wherein the angle made by the light incidence fiber, the center of scattering medium SM, and the light detection fiber is 180 degrees, the combinations of light incidence-light detection positions being relatively identical are (1, 7), (2, 8), (3, 9), (4, 10), (5, 11), and (6, 12), when expressed as (number of light incidence holder, number of light detection holder). When the reciprocity theorem of light does not hold, it is necessary to take the opposite combinations of light incidence and light detection positions into consideration as well. When the respective measured values are I(1, 7), I(2, 8), I(3, 9), I(4, 10), I(5, 11), and I(6, 12), the mean value of these is given by the following equation.

I(ave_180) = {I(1, 7) + I(2, 8) + I(3, 9) + I(4, 10) + I(5, 11) + I(6, 12)}/6

This I(ave_180) is defined as the reference value when the positional relation of light incidence-light detection is 180 degrees. Similarly, I(ave_150), I(ave_120), I(ave_90), I(ave_60), and I(ave_30) are also obtained, and these mean values are stored temporarily in the memory unit 70 as the reference values in the above respective positional relations.

(3) Next, in the present embodiment, the mean absorption coefficient μ_a0 and mean reduced scattering coefficient μ'_s0 are obtained utilizing the photon diffusion theory or the like, based on the reference values in the above respective positional relations and the like (S120). Specifically, internal absorption coefficients and reduced scattering coefficients are obtained from the reference values of every angle, mean values thereof are further calculated, and they are stored temporarily in the memory unit 70 as the mean absorption coefficient μ_a0 and mean reduced scattering coefficient μ'_s0 inside the scattering medium.
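The reference-value averaging of step (2), illustrated above for the 180-degree pairs of FIG. 4, can be sketched as follows; the intensity values assigned to each incidence-detection pair are hypothetical.

```python
import numpy as np

# Sketch of step S110: average the measured values over all
# incidence/detection pairs in relatively identical positional
# relation (the 180-degree pairs). Intensities are assumed values.
measured = {(1, 7): 0.91, (2, 8): 0.89, (3, 9): 0.93,
            (4, 10): 0.90, (5, 11): 0.88, (6, 12): 0.92}

pairs_180 = [(1, 7), (2, 8), (3, 9), (4, 10), (5, 11), (6, 12)]
I_ave_180 = np.mean([measured[p] for p in pairs_180])

# The mean of the six pair measurements is the 180-degree reference
assert np.isclose(I_ave_180, sum(measured.values()) / 6)
```

The same loop, run over the 150-, 120-, 90-, 60-, and 30-degree pair sets, yields the remaining reference values the text describes.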
No inconvenience arises even when the mean absorption coefficient μ_a0 and mean reduced scattering coefficient μ'_s0 inside the scattering medium SM are obtained from the reference value of only one angle, for example from only the value of I(ave_180). A method for obtaining the absorption coefficients and reduced scattering coefficients inside the scattering medium SM from the above reference values is, for example, the method described in "Imaging diffusive media using time-independent and time-harmonic sources; dependence of image quality on imaging algorithms, target volume weight matrix, and view angles" (Jenghwa Chang et al., SPIE vol. 2389).

(4) Next, in the present embodiment, a spread function (Wθ) corresponding to the above mean absorption coefficient μ_a0 and mean reduced scattering coefficient μ'_s0 is selected (S130). Namely, a spread function matching the mean absorption coefficient μ_a0 and mean reduced scattering coefficient μ'_s0 obtained above is selected out of the spread functions preliminarily prepared and stored in the memory unit 70. In this case, because the spread function is selected based on the absorption coefficients and reduced scattering coefficients obtained from actually measured values, error factors can be eliminated as compared with the case of using values merely assumed to be suitable. This "spread function" means a function indicating the way light (measurement light) spreads in each voxel, and is a notion embracing a so-called weight function as to the effective optical pathlength in each voxel and a so-called contribution function as to the degree of contribution of each voxel to the measurement light. The spread function according to the present invention may be either the above weight function or the contribution function. Such spread functions are described, for example, in "A Perturbation Model for Imaging in Dense Scattering Media: Derivation and Evaluation of Imaging Operation" (H. L.
Graber et al., SPIE vol. IS11), "Initial assessment of a simple system for frequency domain diffuse optical tomography" (B. W. Pogue et al., Phys. Med. Biol. 40 (1995) p. 1709-p. 1729), and Japanese Patent Application No. 8-6619 by the present inventors entitled "Optical CT apparatus and image reconstructing method by optical CT." In the present embodiment, the spread functions are preliminarily prepared using the photon diffusion equation without time terms shown below, according to the producing method described in Japanese Patent Application No. 8-6619.

∇²Φ − μ_a D^−1 Φ = 0

Here, D = 1/{3(1−g)μ_s} = 1/(3μ'_s), Φ: density of photons, D: photon diffusion constant, μ_a: absorption coefficient, μ'_s: reduced scattering coefficient, and g: mean cosine of the scattering angle of a photon due to the scattering medium. Further, the following shows the photon diffusion equation with time terms, which is preferably used in obtaining the refractive index distribution.

(1/C)∂Φ(r, t)/∂t − ∇·[D(r)∇Φ(r, t)] + μ_a(r)Φ(r, t) = S(r, t)

Here, D(r) = 1/{3(1−g)μ_s(r)} = 1/(3μ'_s(r)), Φ(r, t): density of photons at position r and time t, C: speed of light in the medium, D: photon diffusion constant, μ_a: absorption coefficient, S(r, t): light source, μ'_s: reduced scattering coefficient, t: time, r: position, and g: mean cosine of the scattering angle of a photon due to the scattering medium. Letting C' be the speed of light in a vacuum and n be the refractive index of the measured object, C = C'/n.
A spread function fitting the mean absorption coefficient μ.sub.a0 and mean reduced scattering coefficient μ'.sub.s0 is, specifically, a function indicating the way light would spread if the same relative relation of light incidence-light detection positions as in the actual measurement were set for an object having the same mean absorption coefficient, mean reduced scattering coefficient, and shape as the measured object; it is selected based on the mean absorption coefficient μ.sub.a0, the mean reduced scattering coefficient μ'.sub.s0, and the like. The memory unit 70 may be arranged to store a correction term for correcting distortion occurring when the object is divided into plural blocks (voxels), and in that case the aforementioned measured values and/or the aforementioned reference values can be corrected in the processing unit 60 (S140). This correction concerning the voxels is such that, for example, when the total distances over the voxels differ depending upon how the voxels are cut even though the distance between the light incidence position a and the light detection positions b, c is the same, the difference between them is utilized as a correction term for the measured values and/or reference values. 5) Subsequently, the processing unit 60 calculates a change amount Δμ.sub.a in absorption coefficient in each of the plural divided regions, using the plurality of measured values obtained by the aforementioned plurality of combinations, the aforementioned reference values, and the aforementioned spread function (S150), and outputs it (S160). Specifically, a change amount in absorption coefficient is obtained using the foregoing reference value of each angle, the foregoing measured values, and the foregoing spread function.
The relation holding on that occasion, considered in correspondence to Eq. (4) above, is that the reference value I.sub.d0, for example where the positional relation of light incidence-light detection is 180 degrees, is I(ave.sub.-- 180), and the measured values I.sub.d1 are I(1, 7), I(2, 8), I(3, 9), I(4, 10), I(5, 11), and I(6, 12). At this time, the absorption coefficient of the reference value I.sub.d0 is the mean absorption coefficient of the inside of the scattering medium SM. Further, letting Wθ be the spread function where the positional relation of light incidence-light detection is 180 degrees, Eq. (4-1) to Eq. (4-6) below hold; when these simultaneous equations (the simultaneous equations of Eq. (8) above) are established for every positional relation (i.e., simultaneous equations equal in number to the number of unknowns) and solved, the change amount Δμ.sub.a in absorption coefficient in each region is obtained. [Eq. (4-1) to Eq. (4-6): the relation of Eq. (4) written out for each of the six incidence-detection combinations I(1, 7) through I(6, 12) above.] For obtaining a spatial distribution of the change amount of absorption coefficient or a spatial distribution of concentration change inside the scattering medium SM, the relations holding in this way may be solved as simultaneous equations equal in number to the number of voxels (volume elements) into which the inside of the scattering medium SM is divided. In the present embodiment the conjugate gradient method was employed. Even if the number of equations is smaller or greater than the number of voxels, a distribution of the internal property can still be obtained by using the singular value decomposition method or the like, because it converts the singular problem into a non-singular one. An absorption distribution concerning absorption coefficient change amounts inside the measured object is obtained based on the change amount Δμ.sub.a in absorption coefficient in each region thus obtained, and an image indicating the distribution is displayed in the display unit 80 (S170).
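The solution step described above — simultaneous equations relating spread functions to absorption changes, solved by the conjugate gradient method or, for over- or under-determined systems, by singular value decomposition — can be sketched as follows. The spread-function matrix and measurement vector here are randomly generated placeholders, not data from the embodiment:

```python
import numpy as np

# Hypothetical sketch of solving the simultaneous equations: W @ dmu_a = b,
# where each row of W is the spread function for one incidence-detection pair
# and b holds the corresponding measured quantities. All values are invented.
rng = np.random.default_rng(0)
n_pairs, n_voxels = 6, 4
W = rng.random((n_pairs, n_voxels))          # spread-function (weight) matrix
dmu_true = np.array([0.0, 0.01, 0.0, 0.0])   # "true" absorption change per voxel
b = W @ dmu_true                             # simulated measurement vector

# An SVD-based least-squares solve handles the over/under-determined cases
# the text mentions; numpy's lstsq uses SVD internally.
dmu_est, *_ = np.linalg.lstsq(W, b, rcond=None)
```

With more equations than unknowns and a consistent right-hand side, the least-squares solution recovers the voxel-wise change amounts.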
There are a variety of other methods known for obtaining the absorption distribution from the Δμ.sub.a calculated in the processing unit 60 and displaying the image as described above. Such methods are described, for example, in "Optical Back Projection Tomography in Heterogeneous Diffusive Media" (S. B. Colak et al., in Advances in Optical Imaging and Photon Migration, 1996 Technical Digest; Optical Society of America, Washington D.C., 1996, pp. 147-149), "Back-projection image reconstruction using photon density wave in tissues" (S. A. Walker et al., SPIE vol. 2389, p. 350, 1995), "Optical tomography by the temporally extrapolated absorbance method" (Ichiro Oda et al., APPLIED OPTICS, vol. 35, No. 01, 1996), and so on. These methods are back projection methods or modifications thereof, which can be used for reconstructing images in place of the above algorithm. It is also possible to calculate an absolute value of a concentration difference of an absorptive constituent in each region from the above change amount Δμ.sub.a in absorption coefficient in each region using a known molar absorption coefficient of the absorptive constituent (S180); a distribution concerning concentration differences of the absorptive constituent inside the measured object is obtained based on the absolute value of concentration difference of the absorptive constituent in each region thus obtained, and an image indicating the distribution is displayed in the display unit 80 (S190). Further, it is possible to calculate an absolute value μ.sub.a of absorption coefficient in each region using the above change amount Δμ.sub.a in absorption coefficient in each region and the foregoing mean absorption coefficient μ.sub.a0 (S200); a distribution concerning absolute values of absorption coefficient inside the measured object is obtained based on the absolute value μ.sub.a of absorption coefficient in each region thus obtained, and an image indicating the distribution is
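The conversion in step S180 from an absorption coefficient change amount to a concentration difference via a known molar absorption coefficient follows the Lambert-Beer relation μ.sub.a = ε·c; a minimal sketch with invented numbers:

```python
# Sketch (values invented): Lambert-Beer relation mu_a = eps * c implies that
# a change d_mu_a corresponds to a concentration difference d_c = d_mu_a / eps.
eps = 0.05       # molar absorption coefficient of the constituent [mm^-1 M^-1] (assumed)
d_mu_a = 0.001   # absorption coefficient change in one region [mm^-1] (assumed)

d_c = d_mu_a / eps   # concentration difference in that region [M]
```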
displayed in the display unit 80 (S210). Yet further, it is possible to calculate a concentration of an absorptive constituent in each region from the above absolute value μ.sub.a of absorption coefficient in each region using the known molar absorption coefficient of the absorptive constituent (S220); a distribution concerning concentrations of the absorptive constituent inside the measured object is obtained based on the concentration of the absorptive constituent in each region thus obtained, and an image indicating the distribution is displayed in the display unit 80 (S230). When the scattering medium SM contains at least two absorptive constituents, for example oxygenated and deoxygenated hemoglobins, a concentration distribution of each absorptive constituent is obtained by using measurement light of at least two wavelengths at which the absorption coefficients of those absorptive constituents differ from each other, obtaining the foregoing measured values and foregoing reference values for the measurement light of each wavelength, and obtaining the absorption coefficient change amount and the absorption coefficient absolute value for the measurement light of each wavelength based thereon. Described below is a measurement of hemoglobin concentration using the above two-wavelength spectroscopy. The main absorptive constituents in a mammalian brain are water, cytochrome, and oxygenated and deoxygenated hemoglobins. Absorption by water and cytochrome in the near-infrared region is small enough to be almost negligible compared with that by oxygenated and deoxygenated hemoglobins. Oxygenated and deoxygenated hemoglobins have different absorption spectra, as shown in FIG. 6. Further, the skull may be regarded as a scattering medium with respect to near-infrared rays.
Supposing absorption coefficients μ.sub.a1 and μ.sub.a2 were obtained for light of two wavelengths λ.sub.1 and λ.sub.2 by the method described so far, the following equations hold in accordance with the Lambert-Beer law. μ.sub.a1 =ε.sub.Hb, 1 [Hb]+ε.sub.HbO, 1 [HbO] μ.sub.a2 =ε.sub.Hb, 2 [Hb]+ε.sub.HbO, 2 [HbO] Here, ε.sub.Hb, 1 : molar absorption coefficient [mm.sup.-1 M.sup.-1 ] of deoxygenated hemoglobin at wavelength λ.sub.1 ; ε.sub.HbO, 1 : molar absorption coefficient [mm.sup.-1 M.sup.-1 ] of oxygenated hemoglobin at wavelength λ.sub.1 ; ε.sub.Hb, 2 : molar absorption coefficient [mm.sup.-1 M.sup.-1 ] of deoxygenated hemoglobin at wavelength λ.sub.2 ; ε.sub.HbO, 2 : molar absorption coefficient [mm.sup.-1 M.sup.-1 ] of oxygenated hemoglobin at wavelength λ.sub.2 ; [Hb]: molar concentration [M] of deoxygenated hemoglobin; [HbO]: molar concentration [M] of oxygenated hemoglobin. Therefore, the molar concentration [Hb] of deoxygenated hemoglobin and the molar concentration [HbO] of oxygenated hemoglobin can be obtained from the known parameters ε.sub.Hb, 1, ε.sub.HbO, 1, ε.sub.Hb, 2, ε.sub.HbO, 2, and μ.sub.a1 and μ.sub.a2 calculated from the measured values. Quantification of the respective concentrations of three constituents whose absorption spectra are known, as when cytochrome is taken into consideration in addition to the above case, can be carried out using light of three or more wavelengths. In general, the concentrations of n constituents whose absorption spectra are known can be quantitatively measured in the same manner from measured values of absorption coefficient at n or (n+1) wavelengths. Further, since the degree of saturation Y satisfies μ.sub.a1 /μ.sub.a2 =[ε.sub.Hb, 1 +Y(ε.sub.HbO, 1 -ε.sub.Hb, 1)]/[ε.sub.Hb, 2 +Y(ε.sub.HbO, 2 -ε.sub.Hb, 2)], the degree of saturation Y can be calculated readily from the known parameters ε.sub.Hb, 1, ε.sub.HbO, 1, ε.sub.Hb, 2, ε.sub.HbO, 2, and μ.sub.a1 and μ.sub.a2 calculated from the measured values.
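The two-wavelength inversion above is a 2×2 linear solve. In the following sketch the molar absorption coefficients and concentrations are invented for illustration; only the structure of the computation follows the equations:

```python
import numpy as np

# Sketch of the two-wavelength Lambert-Beer inversion. The molar absorption
# coefficients and "true" concentrations below are invented, not real spectra.
eps = np.array([[0.5, 1.0],    # row 1: [eps_Hb,1, eps_HbO,1] at wavelength 1
                [1.2, 0.8]])   # row 2: [eps_Hb,2, eps_HbO,2] at wavelength 2
Hb_true, HbO_true = 0.02, 0.06               # molar concentrations [M] (assumed)
mu_a = eps @ np.array([Hb_true, HbO_true])   # simulated mu_a1, mu_a2

Hb, HbO = np.linalg.solve(eps, mu_a)  # invert the two Lambert-Beer equations
Y = HbO / (Hb + HbO)                  # degree of saturation
```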
In the above method, the present invention permits the absorption coefficients μ.sub.a1 and μ.sub.a2 for the light of each wavelength to be obtained with accuracy, so that each concentration can also be attained with accuracy. The above equations can be further simplified with use of the wavelength (800 nm, isosbestic wavelength) showing the same value of absorption for oxygenated and deoxygenated hemoglobins. The foregoing described the preferred embodiment of the present invention, but it is noted that the present invention is by no means limited to the above embodiment, of course. Specifically, the above embodiment was arranged to obtain the absorption coefficient as an internal property, but the present invention can also be applied to measurement of reduced scattering coefficient as described previously. FIG. 9 shows a flowchart of an embodiment to obtain the absorption coefficient and reduced scattering coefficient. In the method shown in FIG. 9, the measurement of absorption coefficient is carried out in the same manner as in the method shown in FIG. 8, but, in selecting the spread function (Wθ) associated with the mean absorption coefficient μ.sub.a0 and mean reduced scattering coefficient μ'.sub.s0, it is preferable to select the spread function (Wμ.sub.a, j) for absorption coefficient and the spread function (Wμ'.sub.s, j) for reduced scattering coefficient (S130). Then the processing unit 60 calculates the change amount Δμ'.sub.s of reduced scattering coefficient in each region of the plural regions divided into, using the plural measured values obtained by the plurality of combinations, the reference values, and the spread function (S240), and outputs it (S250). Specifically, based on Eq. (4') described previously, the change amount of reduced scattering coefficient is obtained using the reference value of each angle, the measured values, the spread functions, and the change amount of absorption coefficient. 
More specifically, simultaneous equations hold based on Eq. (4'), similarly as Eq. (4-1) to Eq. (4-6) described previously, and these simultaneous equations are established every positional relation and solved, thereby calculating the change amount Δμ'.sub.s of reduced scattering coefficient in each region. A reduced scattering coefficient change amount distribution inside the measured object is obtained based on the change amount Δμ'.sub.s of reduced scattering coefficient in each region thus obtained, and an image indicating the distribution is displayed in the display unit 80 (S260). Further, it is possible to calculate the absolute value μ'.sub.s of reduced scattering coefficient in each region, using the above change amount Δμ'.sub.s of reduced scattering coefficient in each region and the mean reduced scattering coefficient μ'.sub.s0 (S270), a distribution concerning absolute values of reduced scattering coefficient inside the measured object is obtained based on the absolute value μ'.sub.s of reduced scattering coefficient in each region thus obtained, and an image indicating the distribution is displayed in the display unit 80 (S280). Additionally, the present invention can also be applied to measurement of refractive index, and FIG. 10 shows a flowchart of an embodiment for obtaining the absorption coefficient, reduced scattering coefficient, and refractive index. In the method shown in FIG. 10, the measurement of absorption coefficient and reduced scattering coefficient is carried out in the same manner as in the methods shown in FIG. 8 and FIG. 9, but, in obtaining the mean absorption coefficient μ.sub.a0 and mean reduced scattering coefficient μ'.sub.s0, based on the reference value or the like in each positional relation, a mean refractive index n.sub.0 is also obtained (S120). However, the refractive index of water (1.33) may be used as the mean refractive index n.sub.0. 
In selecting the spread function (Wθ), it is preferable to select the spread function (Wμ.sub.a, j) for absorption coefficient, the spread function (Wμ'.sub.s, j) for reduced scattering coefficient, and the spread function (W.sub.n, j) for refractive index (S130). Then the processing unit 60 calculates a change amount Δn of refractive index in each region of the plural regions divided into as described above, using the plural measured values obtained by the plurality of combinations, the reference values, and the spread function (S290), and outputs it (S300). Specifically, based on Eq. (4") described previously, the change amount of refractive index is obtained using the reference value of each angle, the measured values, the spread functions, the change amount of absorption coefficient, and the change amount of reduced scattering coefficient. More specifically, simultaneous equations hold based on Eq. (4"), similarly as Eq. (4-1) to Eq. (4-6) described previously, and these simultaneous equations are established every positional relation and solved, thus calculating the change amount Δn of refractive index in each region. A refractive index change amount distribution inside the measured object is obtained based on the change amount Δn of refractive index in each region thus obtained and an image indicating the distribution is displayed in the display unit 80 (S310). Further, it is possible to calculate the absolute value n of refractive index in above each region, using the above change amount Δn of refractive index in each region and the mean refractive index n.sub.0 (S320), a distribution concerning absolute values of refractive index inside the measured object is obtained based on the absolute value n of refractive index in each region thus obtained, and an image indicating the distribution is displayed in the display unit 80 (S330). 
Once the distribution concerning the refractive indices is attained in this way, it becomes possible to obtain a distribution of blood glucose concentration. A method for detecting the blood glucose concentration through a change of refractive index is, for example, the method described in "Possible correlation between blood glucose concentration and the reduced scattering coefficient of tissues in the near infrared" (John S. Maier et al., OPTICS LETTERS vol. 19, No. 24, Dec. 15, 1994). The glucose concentration of organism tissue greatly affects the refractive index of the extracellular fluid, and the reduced scattering coefficient of tissue depends greatly on the refractive index difference between the extracellular fluid and the cells. Thus, a change in the refractive index of the extracellular fluid will affect the detected light. It thus becomes possible to obtain a distribution of blood glucose concentration inside tissue by obtaining a refractive index distribution based on the detected light. In the above embodiment the plural light incidence and light detection positions were positioned around one cross section of the scattering medium, but the light incidence and/or light detection positions (denoted by P) may be arranged stereoscopically as shown in FIG. 11 or FIG. 12. Namely, when the measured object is assumed to be a head or a mamma, the light incidence and/or light detection positions (P) may be placed as shown in FIG. 11; when the measured object is assumed to be an arm, a leg, a breast, or a mamma (under pressure), the light incidence and/or light detection positions (P) may be placed as shown in FIG. 12. The above embodiment employed measured values of light quantity by the time integration method, but the measured values applicable to the present invention are not limited to these. For example, they may be values of phase difference (or phase delay) or amplitude of the measurement light.
In addition, a specific technique for acquiring the measured values in the processing unit 60 may be properly selected depending upon desired measured values, and, for example, such means may be employed as phase difference and/or amplitude measurement by the phase modulation method or as time-resolved waveform measurement by the time-resolved spectroscopy. In the above embodiment the mean absorption coefficient and mean reduced scattering coefficient inside the scattering medium SM were obtained from the data obtained by the optical CT apparatus itself according to the present invention, but it is also possible to obtain the mean absorption coefficient and mean reduced scattering coefficient inside the scattering medium SM by another apparatus (S120a) and select the spread function based thereon (S130a) as shown in FIG. 13 to FIG. 15. The steps other than the above in FIG. 13 to FIG. 15 each correspond to the steps in FIG. 8 to FIG. 10. An advantage in this case is the simple configuration of the system of optical CT apparatus, because, for example, data obtained by the optical CT apparatus can be measured by CW (continuous-wave light) and the pulsed light or modulated light is used only in the apparatus for obtaining the mean absorption coefficient and mean reduced scattering coefficient. A technique for obtaining the mean absorption coefficient and mean reduced scattering coefficient by another apparatus may be the phase modulation method or the time-resolved spectroscopy. Methods for measuring the mean reduced scattering coefficient μ'.sub.s0 and mean absorption coefficient μ.sub.a0 as regarding the distribution of optical parameter as uniform inside the measured object in this way are described, for example, in "Development of Time Resolved Spectroscopy System for Quantitative Non-invasive Tissue Measurement" M. Miwa et al., SPIE vol. 2389 as to the time-resolved spectroscopy and, for example, in the bulletin of Japanese Laid-open Patent Application No. 
6-221913 as to the phase modulation method. If the optical CT apparatus incorporates the above technique, it will be able to perform the calculation at the same time as acquisition of the measured values. Means for launching light into the scattering medium such as a living body, other than the method using the optical fibers shown in FIG. 4 and FIG. 16B, may be any method utilizing a condenser lens (FIG. 16A) or a pinhole (FIG. 16C), any method for launching light from the inside of body as in a gastrocamera (FIG. 16D), and so on. Since the mean diffusion length is approximately 2 mm in the scattering medium such as a living sample, incident light is scattered before it propagates about 2 mm straight, thus losing directionality of light. Therefore, influence of mean diffusion length is negligible in the case of scattering media with thickness of several cm or more, and thus the light may be made incident in a spot shape. Also, a thick beam of light may be made incident into the scattering medium. In this case, the beam may be regarded as a plurality of spot light sources arranged. In the embodiment shown in FIG. 4 the space is fine between the light incidence fiber and light detection fiber and the surface of scattering medium SM. In practical applications, this may be increased and this space may be filled with a liquid or a jelly substance (hereinafter referred to as an interface material) having the refractive index and reduced scattering coefficient nearly equal to those of the scattering medium SM being a measured object. Namely, no problem will arise, because the light is incident into the measured object as diffusely propagating in the interface material. When reflection on the surface of scattering medium SM is not negligible, proper selection of the interface material can decrease influence of surface reflection or the like. 
Further, in a case where a space between the light incidence fiber and/or light detection fiber and the surface of scattering medium SM such as a living sample is filled with such an interface material, a plurality of combinations of the light incidence position and the light detection position the positional relation of which is relatively identical can be easily and surely attained. Further, the above embodiment was described in an aspect using the light of different wavelengths as being launched in time division, but it is also permissible to employ a method for coupling light components of different wavelengths into coaxial beams by an optical coupler 35, selecting a wavelength by a wavelength selecting filter 20 provided immediately before an incident point of light, and letting light of each wavelength enter the scattering medium, or a method for letting the beams in parallel into the scattering medium as they are (FIG. 7B). In the case of the latter, it is, however, necessary to subject the detected light to wavelength selection by wavelength selecting filters 25a to 25c disposed immediately before the photodetectors 40a to 40c, as shown in FIG. 17. Means for receiving and detecting the light having propagated diffusely inside the scattering medium, other than the method using the optical fibers as shown in FIG. 4 and FIG. 18B, may be a direct detecting method (FIG. 18A), a method using a lens (FIG. 18C), and so on. When signals obtained by the photodetector 40 need to be amplified with low noise, a narrow-band amplifier (FIG. 19A), a lock-in amplifier (FIG. 19B), or the like may be utilized. In the case of the lock-in amplifier being used, the aforementioned synchronous signal is used as a reference signal. This method is effective in performing measurement in a high dynamic range, using the rectangular-wave light or pulsed light. 
Further, in the above embodiment a plurality of light incidence fibers and light detection fibers were positioned around the scattering medium and the light incidence position and light detection position were moved by successively changing the fibers respectively used for light incidence and for light detection, but the light incidence position and light detection position to the scattering medium may be scanned in synchronism. This arrangement permits an image indicating a distribution of internal information to be obtained by obtaining the internal information of each part of the scattering medium, storing it in a frame memory, and reading it by a television system. Further, measurements at different times allow a temporal change of internal information to be measured. The aforementioned memory unit 70 has a function to store the internal information thus obtained and the display unit 80 displays an intermediate status or a result thereof. In this case, these arithmetic processes can be executed at high speed by the computer device 60 provided with the memory 70, the display 80, and so on. With using still another method, as described in the bulletin of Japanese Laid-open Patent Application No. 6-221913 or in the bulletin of Japanese Laid-open Patent Application No. 6-129984, wherein a regularly circular holder is set around the measured object and measurement is carried out therein, differences among individuals are absorbed therein even in measuring the human head, and the measurement can be conducted in the state of the actual measurement system being close to the model system, thus tending to improve the accuracy. At least two light incidence positions and at least two light detection positions may be previously settled so that the positional relations of the combinations of the light incidence positions and the light detection positions become relatively identical. 
In this case, it becomes easy to obtain measured values at the combinations the positional relation of which is relatively identical and to calculate a mean value of the measured values. The above embodiment was arranged to calculate the spatial distribution of absorption coefficient from the known reduced scattering coefficient, but the spatial distribution of reduced scattering coefficient can also be obtained from the known absorption coefficient by the same technique. It is thus possible to obtain the spatial distributions of the both absorption coefficient and reduced scattering coefficient by the technique proposed this time. The above embodiment used the photon diffusion equation as an equation for obtaining the internal property, but the equation does not always have to be limited to it. For example, it is also permissible to use an equation derived from the fact that absorption of light inside the scattering medium can be expressed as a function of propagation distance, to use a relational equation between detected light and internal property obtained empirically, or the like. If correlation is attained between a disease or a body condition and the detected light, the present invention will enable to acquire useful information directly from the detected light. For example, when there is the correlation between a change of detected light and a structural change of tissue, the structural change can be obtained from the detected light utilizing the correlation. The following example shows an example of imaging of absorption coefficient when only the absorption coefficient is changed among the all internal properties (absorption coefficient, reduced scattering coefficient, refractive index, etc.) which the object has. In order to prove effectiveness of the present invention, experiments were conducted in the following procedures using the same apparatus as the one shown in FIG. 
4 except that the intervals of the light incidence fibers were 20 degrees and the intervals of the light detection fibers were 10 degrees. Since the phantom used in this example has the same shape along the height direction (i.e., it is symmetric in the z-axis direction), the 3-D (stereoscopic) problem can be reduced dimensionally to a 2-D (cross-sectional) problem. Specifically, CW laser light having the wavelength 800 nm and the power 50 mW was made incident through the light incidence fibers into the phantom shown in FIGS. 20A and 20B, and the light transmitted or scatteredly reflected by the phantom was detected by the light detection fibers and guided to the detector. Specifications of the phantom used were as follows. (Material) Matrix=epoxy resin; Scattering substance=silica particles; Absorptive substance=dyes. (Shape) Cylindrical solid phantom; Diameter=8 cm; Height=9 cm. (Matrix) Absorption coefficient=0.01/mm; Reduced scattering coefficient=1.00/mm. (Absorptive substance) Diameter=1 cm; Absorption coefficient=0.02/mm; Reduced scattering coefficient=1.00/mm. Light signals detected were converted to detection signals by a photomultiplier tube, and the detection signals were accumulated for ten seconds as read by a counter. The sum was sent through GPIB (a measurement interface bus) to the computer. Defining the measurement up to this point as one measurement, data was acquired with clockwise movement of the light incidence fiber every 20 degrees from A to R shown in FIG. 21 and with clockwise movement of the light detection fiber every 10 degrees. Light detection positions per light incidence position were ten locations from 0 degrees to 90 degrees with respect to each light incidence position as reference. Experiment results are as follows. 1) The reference value I.sub.d0 was acquired with a separately prepared phantom without the absorptive substance, and an image was reconstructed. The result is shown in FIG.
22A (in which the background is black and the absorption coefficient scale is shown in FIG. 22B). In the image shown in FIG. 22A a plurality of images appear and positional deviation occurs. The circle illustrated in FIG. 22A indicates the expected position of the image. 2) According to the present invention, the mean value of measured values was obtained for every combination of light incidence-light detection positions whose positional relation was relatively identical, and an image was reconstructed using the mean value as the reference value I.sub.d0. The result is shown in FIG. 23A (in which the background is black and the absorption coefficient scale is shown in FIG. 23B). As seen in the image shown in FIG. 23A, the present invention reduced the positional deviation and improved the quantitative accuracy as compared with the image of FIG. 22A obtained by the conventional method. The circle illustrated in FIG. 23A indicates the expected position of the image. The present invention makes it possible to obtain the reference value directly from the measured values for the measured object, without obtaining the reference value from a physical model or a simulation model as previously required and without using light of a plurality of wavelengths for one constituent in the measured object, and to image an internal property distribution (for example, an absorption coefficient change amount distribution, an absorption coefficient distribution, an absorptive constituent concentration distribution, a reduced scattering coefficient change amount distribution, a reduced scattering coefficient distribution, a refractive index change amount distribution, or a refractive index distribution) in the measured object based on the reference value.
Therefore, the present invention can avoid errors due to differences between the actual measured object and a physical or simulation model, individual differences among measured objects, and the like, thus permitting measurement with high reliability, i.e., with high accuracy. In addition, the present invention eliminates the work of preliminarily obtaining the reference value with a physical model or the like, thus making it possible to shorten the measurement time. Further, the present invention can prevent errors resulting from the assumption that the mean optical pathlength distribution and the attenuated light quantity (the quantity of light attenuated due to the influence of scattering or the like) are constant among plural wavelengths, thus enhancing the measurement accuracy. From the invention thus described, it will be obvious that the invention may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are intended for inclusion within the scope of the following claims. The basic Japanese Applications No. 140711/1996 filed on May 10, 1996, and No. 334674/1996 filed on Nov. 29, 1996 are hereby incorporated by reference.
Mplus Discussion >> Multi-group and bias corrected bootstrap Scott R. Colwell posted on Friday, March 10, 2006 - 8:01 am Hello: I have a multigroup SEM model that has x->y which is partially mediated by two variables M1 and M2 such that x->y and x->M1->y and x->M2->y. The grouping variable is dichotomous 0 = low and 1 = high. I am using type=complex as it is a complex sample design. I would like to estimate the bias corrected bootstrapped SE, but you can't do that with type = complex. Is there a work around for this? For example if I were to create a phantom variable and fix its path (to x) to be the product of the x->m->y paths, then remove type = complex, would I get reliable bias-corrected SE for the phantom variable path since its fixed or would removing the type=complex create unreliable coefficients. Linda K. Muthen posted on Friday, March 10, 2006 - 9:53 am I don't see how the phantom variable approach would correct for nonindependence of observations due to clustering but perhaps I am missing something.
If the columns/rows of an nXn matrix A are linearly independent, what is the rank(A)?

October 14th 2012, 07:06 AM #1
If the columns of an nXn matrix A are linearly independent as vectors, what is the rank of A? If the rows of an nXn matrix A are linearly independent as vectors, what is the rank of A?

October 14th 2012, 07:57 AM #2
Re: If the columns/rows of an nXn matrix A are linearly independent, what is the rank
The rank of A and A transpose are the same. By definition, the rank of a matrix is the number of linearly independent column vectors. So it would have rank n.
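The reply's claim can be checked directly; the sketch below computes rank via plain Gaussian elimination, applied to an arbitrary 4x4 sample matrix chosen to have nonzero determinant (so its columns are independent):

```python
def rank(M, eps=1e-9):
    # Rank via Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = max(range(r, len(M)), key=lambda i: abs(M[i][c]), default=None)
        if piv is None or abs(M[piv][c]) < eps:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A 4x4 matrix with linearly independent columns (its determinant is 119):
A = [[2.0, 1.0, 0.0, 0.0],
     [0.0, 3.0, 1.0, 0.0],
     [0.0, 0.0, 4.0, 1.0],
     [1.0, 0.0, 0.0, 5.0]]
At = [list(col) for col in zip(*A)]       # transpose

print(rank(A), rank(At))  # both 4: full rank, and rank(A) = rank(A^T)
```

A rank-deficient matrix such as [[1, 2], [2, 4]] comes out with rank 1, as expected.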
Reaction Mechanisms - derive rate laws

Skills to develop
• Explain reaction mechanism.
• Derive a rate law from a given mechanism.

A reaction mechanism is a collection of elementary processes (also called elementary steps) that explains how the overall reaction proceeds. A mechanism is a proposal from which you can work out a rate law that agrees with the observed rate laws. The fact that a mechanism explains the experimental results is not a proof that the mechanism is correct. As a mechanism, no proof is required. Proposing a mechanism is an interesting academic exercise for a mature chemist. Students in general chemistry will not be required to propose a mechanism, but they are required to derive the rate law from a proposed mechanism.

Elementary Processes or Steps

A summary of the elementary processes or steps is given in the table below, in which A, B, and C represent reactants, intermediates, or products in the elementary process.

Molecularity | Elementary step          | Rate law
1            | A -> products            | rate = k [A]
2            | A + A -> products        | rate = k [A]^2
2            | A + B -> products        | rate = k [A] [B]
3            | A + A + A -> products    | rate = k [A]^3
3            | A + 2 B -> products      | rate = k [A] [B]^2
3            | A + B + C -> products    | rate = k [A] [B] [C]

Deriving Rate Laws from Mechanisms

You have all experienced that an accident or road construction on a freeway slows all the traffic on the road, because limitations imposed by the accident and construction apply to all cars on that road. The narrow passing stretch limits the speed of the traffic. If several steps are involved in an overall chemical reaction, the slowest step limits the rate of the reaction. Thus, a slow step is called a rate-determining step. The following examples illustrate the method of deriving rate laws from the proposed mechanism. Please learn the technique.
Example 1

If the reaction 2 NO[2] + F[2] = 2 NO[2]F follows the mechanism,
i. NO[2] + F[2] -> NO[2]F + F (slow)
ii. NO[2] + F -> NO[2]F (fast)
what is the rate law?

Since step i is the rate-determining step, the rate law is
-(1/2) d[NO[2]]/dt = k [NO[2]] [F[2]]
Since both NO[2] and F[2] are reactants, this is the rate law for the reaction. Addition of i and ii gives the overall reaction, but step ii does not affect the rate law. Note that the rate law is not derived from the overall equation either.

Example 2

For the reaction H[2] + Br[2] = 2 HBr, the following mechanism has been proposed:
i. Br[2] = 2 Br (both directions are fast; rate constants k[1] and k[-1])
ii. Br + H[2] -> HBr + H (slow; rate constant k[2])
iii. H + Br[2] -> HBr + Br (fast; rate constant k[3])
Derive the rate law that is consistent with this mechanism.

For a problem of this type, you should give the rate law according to the rate-determining (slow) elementary process. In this case, step ii is the rate-determining step, and the rate law is
(1/2) d[HBr]/dt = k[2] [H[2]] [Br]
The factor 1/2 results from the 2 HBr formed every time: one in step ii and one in step iii. Since [Br] is not one of the reactants, its relationship with the concentration of the reactants must be sought. The rapid reaction in both directions of step i implies the relationship
k[1] [Br[2]] = k[-1] [Br]^2
so that
[Br] = ((k[1]/k[-1]) [Br[2]])^(1/2)
Substituting this in the rate expression results in
rate = k[2] (k[1]/k[-1])^(1/2) [H[2]] [Br[2]]^(1/2)
The overall reaction order is 3/2: 1 with respect to [H[2]] and 1/2 with respect to [Br[2]]. The important point in this example is that the rapid equilibrium in step i allows you to express the concentration of an intermediate ([Br]) in terms of concentrations of reactants ([Br[2]]), so that the rate law can be expressed by concentrations of the reactants. The ratio k[1]/k[-1] is often written as K, and it is called the equilibrium constant for the reversible elementary steps.
Example 3

Derive the rate law that is consistent with the proposed mechanism in the formation of phosgene from Cl[2] and CO. (K[1] = k[1]/k[-1] and K[2] = k[2]/k[-2] may be considered as equilibrium constants of the elementary processes, and M is any inert molecule.)
i. Cl[2] + M = 2 Cl + M (fast equilibrium, K[1])
ii. Cl + CO + M = ClCO + M (fast equilibrium, K[2])
iii. ClCO + Cl[2] = Cl[2]CO + Cl (slow, k[3])
The overall reaction is Cl[2] + CO = Cl[2]CO.

From the rate-determining (slow) step,
d[Cl[2]CO]/dt = k[3] [ClCO] [Cl[2]] - - - (1)
You should express [ClCO] in terms of concentrations of Cl[2] and CO. This is done by considering step ii:
[ClCO] = K[2] [Cl] [CO] - - - (2)
You should express [Cl] in terms of [Cl[2]]. For this, you may use step i:
[Cl] = K[1]^(1/2) [Cl[2]]^(1/2) - - - (3)
Substituting (3) in (2) and then in (1) gives the rate,
Rate = k[3] K[1]^(1/2) K[2] [CO] [Cl[2]]^(3/2) = k [CO] [Cl[2]]^(3/2)
where k = k[3] K[1]^(1/2) K[2] is the observed rate constant. The overall order of the reaction is 5/2, strange but that is the observed rate law. This example shows how the concentrations of intermediates are related to those of the reactants in a two-step equilibrium.

If the third step is instead
iii. ClCO + Cl = Cl[2]CO (slow, k[3])
the rate law will be different from the result derived above. As an exercise, derive the rate law using this alternate step.

Example 4

In an acid solution, the mechanism for the reaction NH[4]^+ + HNO[2] -> N[2] + 2 H[2]O + H^+ is
i. HNO[2] + H^+ = H[2]O + NO^+ (equilibrium, K[1])
ii. NH[4]^+ = NH[3] + H^+ (equilibrium, K[2])
iii. NO^+ + NH[3] -> NH[3]NO^+ (slow, k[3])
iv. NH[3]NO^+ -> H[2]O + H^+ + N[2] (fast, k[4])
Derive the rate law. From the rate-determining step, you have
d[N[2]]/dt = k[3] [NO^+] [NH[3]] - - - - (4)
Neither NO^+ nor NH[3] is a reactant. You must express their concentrations in terms of [NH[4]^+] and [HNO[2]] from elementary processes i and ii.
From i, [NO^+] = K[1] [HNO[2]] [H^+] / [H[2]O] - - (5)
From ii, [NH[3]] = K[2] [NH[4]^+] / [H^+] - - - - (6)
Substituting (6) and (5) in (4) gives
Rate = k[3] K[1] K[2] [HNO[2]] [NH[4]^+] / [H[2]O] = k [HNO[2]] [NH[4]^+]
where k = k[3] K[1] K[2] / [H[2]O] is the overall rate constant.

Confidence Building Questions

• If the reaction 2 A + B[2] = 2 AB follows the mechanism,
i. A + B[2] = AB + B (slow)
ii. A + B = AB (fast)
what is the order with respect to [B[2]]?
Skill - Figure out the order from a given mechanism. What is the overall order?

• Kinetic studies of the reaction A[2] + B[2] -> 2 AB suggested the following mechanism:
i. A[2] = 2 A (fast equilibrium)
ii. A + B[2] -> AB + B (slow)
iii. B + A[2] -> AB + A (fast)
What is the order of the reaction with respect to [A[2]]?
Discussion - If you got rate = k [A[2]]^(1/2) [B[2]], congratulations.

• If the mechanism for the reaction 2 NO + O[2] -> 2 NO[2] is
i. NO + NO = N[2]O[2] (fast equilibrium)
ii. N[2]O[2] + O[2] -> 2 NO[2] (slow)
what is the power of [NO] in the differential rate law?
Skill - Figure out the order from a given mechanism. You should get: rate = k [NO]^2 [O[2]]

• If the reaction mechanism consists of these elementary processes,
i. A = 2 B (fast, equilibrium)
ii. B + 2 C -> E (slow)
iii. E -> F (fast)
choose the correct differential rate law for the reaction A + 4 C -> 2 F:
(a) (1/2)(d[F]/dt) = k[A] [C]^4
(b) rate = k[A] [C]^2
(c) -d[A]/dt = k[A]^(1/2) [C]
(d) d[F]/dt = k[A] [C]
(e) (d[F]/dt) = k[A]^(1/2)[C]^2
The method - The rate determining step is ii. Express [B] in terms of [A] from i.
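The substitution chain in Example 3 can be spot-checked numerically: plugging the equilibrium expressions for [Cl] and [ClCO] into the slow step should reproduce the closed-form 3/2-order rate law. The constants below are arbitrary sample values, not measured rate constants:

```python
# Numerical check of Example 3: the step-by-step substitution and the
# closed-form rate law k3 * K1^(1/2) * K2 * [CO] * [Cl2]^(3/2) agree.
k3, K1, K2 = 0.7, 2.5, 1.3     # arbitrary sample constants
Cl2, CO = 0.4, 0.9             # arbitrary sample concentrations

Cl = (K1 * Cl2) ** 0.5         # from step i  (fast equilibrium), eq. (3)
ClCO = K2 * Cl * CO            # from step ii (fast equilibrium), eq. (2)
rate = k3 * ClCO * Cl2         # slow step iii, eq. (1)

closed_form = k3 * K1 ** 0.5 * K2 * CO * Cl2 ** 1.5
print(rate, closed_form)       # the two expressions agree
```

The same check works for Example 4 with the expressions (5) and (6).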
Pseudo random number generation

Random number generator. The method is attributed to B.A. Wichmann and I.D. Hill, 'An efficient and portable pseudo-random number generator', Journal of Applied Statistics, AS183, 1982; also Byte, March 1987.

The current algorithm is a modification of the version attributed to Richard A. O'Keefe in the standard Prolog library.

Every time a random number is requested, a state is used to calculate it, and a new state is produced. The state can either be implicit (kept in the process dictionary) or be an explicit argument and return value. In this implementation, the state (the type ran()) consists of a tuple of three integers.

It should be noted that this random number generator is not cryptographically strong. If a strong cryptographic random number generator is needed, for example crypto:rand_bytes/1 could be used.

seed() -> ran()
  Seeds random number generation with default (fixed) values in the process dictionary, and returns the old state.

seed(A1, A2, A3) -> undefined | ran()
  Seeds random number generation with integer values in the process dictionary, and returns the old state. One way of obtaining a seed is to use the BIF now/0:
    {A1,A2,A3} = now(),
    random:seed(A1, A2, A3),

seed({A1, A2, A3}) -> undefined | ran()
  seed({A1, A2, A3}) is equivalent to seed(A1, A2, A3).

seed0() -> ran()
  Returns the default state.

uniform() -> float()
  Returns a random float uniformly distributed between 0.0 and 1.0, updating the state in the process dictionary.

uniform(N) -> integer()
  Given an integer N >= 1, uniform/1 returns a random integer uniformly distributed between 1 and N, updating the state in the process dictionary.

uniform_s(State0) -> {float(), State1}
  Given a state, uniform_s/1 returns a random float uniformly distributed between 0.0 and 1.0, and a new state.
uniform_s(N, State0) -> {integer(), State1}
  N = integer()
  State0 = State1 = ran()
  Given an integer N >= 1 and a state, uniform_s/2 returns a random integer uniformly distributed between 1 and N, and a new state.

Some of the functions use the process dictionary variable random_seed to remember the current seed. If a process calls uniform/0 or uniform/1 without setting a seed first, seed/0 is called automatically.
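The AS183 recurrence cited above fits in a few lines; here is a sketch in Python using the published Wichmann-Hill constants. Note the Erlang module is described as a modification of this scheme, so this sketch need not reproduce random:uniform/0 bit-for-bit:

```python
def seed(a1, a2, a3):
    # State is a triple of positive integers, mirroring the ran() type.
    return (a1 % 30269 or 1, a2 % 30307 or 1, a3 % 30323 or 1)

def uniform_s(state):
    # One step of the AS183 recurrence: three small linear congruential
    # generators whose scaled sum is taken modulo 1.0.
    s1, s2, s3 = state
    s1 = (171 * s1) % 30269
    s2 = (172 * s2) % 30307
    s3 = (170 * s3) % 30323
    u = (s1 / 30269 + s2 / 30307 + s3 / 30323) % 1.0
    return u, (s1, s2, s3)

state = seed(3172, 9814, 20125)
xs = []
for _ in range(5):
    x, state = uniform_s(state)
    xs.append(x)
print(xs)  # five floats in [0.0, 1.0)
```

As in the Erlang API, the state is threaded explicitly: the same seed always reproduces the same stream, and each call returns both a value and the next state.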
A Closer Look At Intel's 2013 Outlook

Last week I gave my initial thoughts on Intel's (INTC) Q4. Revenues for Intel are only expected to increase by a few percent and gross margins are actually expected to decline from 62% to 60%. Yet operating expenses are also projected to increase 4%. What really caught my eye, and has provoked me to take a closer look, was the massive $13B of capital spending that Intel has planned for 2013. I know that Intel has massive operating cash flow, but can Intel really afford to do this much capital spending, pay its dividend and buy back stock? The thing we need to look at is the FY 2013 outlook provided by Intel:

Full-Year 2013
□ Revenue: low single-digit percentage increase.
□ Gross margin percentage: 60 percent, plus or minus a few percentage points.
□ R&D plus MG&A spending: $18.9 billion, plus or minus $200 million.
□ Amortization of acquisition-related intangibles: approximately $300 million.
□ Depreciation: $6.8 billion, plus or minus $100 million.
□ Impact of equity investments and interest and other: net gain of approximately $100 million.
□ Tax Rate: approximately 25 percent.
□ Full-year capital spending: $13.0 billion, plus or minus $500 million.

It may not look like much, but there is a great deal of information in this outlook. Now, in this analysis, I will be using the "best case", "worst case" and "midpoint" numbers:

Revenue: FY 2012 revenue came in at $53.3B. A "low single-digit" increase could be any number from 1-5. I will use 5 for the best case, 1 for the worst case and 3 for the midpoint. Multiplying $53.3B by 1.05 gives us an estimated $55.96B in 2013 best case revenue. Multiplying $53.3B by 1.01 gives us an estimated $53.83B in 2013 worst case revenue. Multiplying $53.3B by 1.03 gives us an estimated $54.90B in 2013 midpoint revenue.

Gross margin percentages: The 2013 outlook for gross margin is 60 percent, plus or minus a few percentage points.
A "few" percentage points could be almost any single digit number. For my analysis, I will be using 58% for the worst case gross margin and 62% for the best case margin and 60% for the midpoint. Using the best case margin and revenue, we arrive at $34.70B in gross margin. Using the worst case margin and revenue, we arrive at $31.22B in gross margin. Using the midpoint margin and revenue, we arrive at $32.94B in gross margin. Operating expenses: The 2013 outlook for R&D plus MG&A spending is $18.9 billion, plus or minus $200 million. Amortization of acquisition-related intangibles for 2013 are estimated to be approximately $300 million. For the worst case operating expenses, I will be using $19.1B in R&D plus MG&A and $0.3B in amortization, for a total of $19.4B in operating expenses. For the best case operating expenses, I will be using $18.7B in R&D plus MG&A and $0.3B in amortization, for a total of $19.0B in operating expenses. For the midpoint operating expenses, I will be using $18.9B in R&D plus MG&A and $0.3B in amortization, for a total of $19.2B in operating expenses. Depreciation: The 2013 outlook for depreciation is $6.8 billion, plus or minus $100 million. For the best case number, I will be using the $6.9B in depreciation. For the worst case number, I will be using the $6.7B in depreciation. For the midpoint number, I will be using the $6.8B in depreciation. Impact of equity investments and interest and other: I will be using the 2013 outlook of a net gain of approximately $100 million for all three projections. Tax Rate: I will be using the 2013 outlook of 25 percent for all three projections. Capital spending: The 2013 outlook for capital spending is $13.0 billion, plus or minus $500 million. For the best case number, I will be using the $12.5B in capital spending. For the worst case number, I will be using the $13.5B in capital spending. For the midpoint number, I will be using the $13B in capital spending. 
Now that we have these numbers, let us figure out some other key metrics:

Operating Income: Using the best case gross margin of $34.70B and subtracting the best case operating expenses of $19.0B, we arrive at the best case operating income of $15.7B. Using the worst case gross margin of $31.22B and subtracting the worst case operating expenses of $19.4B, we arrive at the worst case operating income of $11.82B. Using the midpoint gross margin of $32.94B and subtracting the midpoint operating expenses of $19.2B, we arrive at the midpoint operating income of $13.74B.

Taxable income: For this, we only need to add $100 million to the operating income for all three projections. For the best case, this leads to $15.8B. For the worst case, this leads to $11.92B. For the midpoint, this leads to $13.84B.

Taxes: For all three projections, we are using the 25% tax rate. For the best case, this leads to $3.95B in taxes. For the worst case, this leads to $2.98B in taxes. For the midpoint, this leads to $3.46B in taxes.

Net income: We arrive at net income by subtracting taxes from operating income (with the taxes themselves computed on taxable income). Using the best case operating income of $15.7B and subtracting the best case taxes of $3.95B, we arrive at the best case net income of $11.75B. Using the worst case operating income of $11.82B and subtracting the worst case taxes of $2.98B, we arrive at the worst case net income of $8.84B. Using the midpoint operating income of $13.74B and subtracting the midpoint taxes of $3.46B, we arrive at the midpoint net income of $10.28B.

Operating Cash Flow: We arrive at operating cash flow by adding net income and depreciation. Using the best case net income of $11.75B and adding the best case depreciation of $6.9B, we arrive at the best case operating cash flow of $18.65B. Using the worst case net income of $8.84B and adding the worst case depreciation of $6.7B, we arrive at the worst case operating cash flow of $15.54B.
Using the midpoint net income of $10.28B and adding the midpoint depreciation of $6.8B, we arrive at the midpoint operating cash flow of $17.08B. Operating cash flow is a key figure, as it is used to pay dividends, capital spending and share buybacks.

Earnings per share: Using the Q4 shares outstanding of 5.095B, we can also estimate Intel's EPS. The best case EPS would be $2.31. The worst case EPS would be $1.74. The midpoint EPS would be $2.02. Note that announced share buybacks will inflate this number.

Free cash flow: We arrive at free cash flow by subtracting capital spending from operating cash flow. Using the best case operating cash flow of $18.65B and subtracting the best case capital spending of $12.5B, we arrive at the best case free cash flow of $6.15B. Using the worst case operating cash flow of $15.54B and subtracting the worst case capital spending of $13.5B, we arrive at the worst case free cash flow of $2.04B. Using the midpoint operating cash flow of $17.08B and subtracting the midpoint capital spending of $13B, we arrive at the midpoint free cash flow of $4.08B.

Free cash flow is the cash that is left over to make dividend payments and share buybacks. As of Q4, Intel had 5.095 billion shares outstanding. Intel's current annual dividend is $0.90 per share. This means Intel needs to make about $4.58B per year in dividend payments. Only the "best case" FCF numbers have Intel cover the dividend, and only then barely. It is no wonder Intel needed to issue debt to pay for the previously announced share buyback program. It is very likely that Intel will be spending more cash than it generates in 2013 and will therefore be FCF negative. I can now see why Intel's stock dropped so sharply on Friday. Using the midpoint of Intel's own 2013 estimates, we can see that EPS would fall 5%. Using the worst case scenario, EPS would fall 19% from 2012 levels. Only the best case scenario would see an EPS increase of 8%.
The average analyst estimate for FY 2013 EPS is actually $1.93. The prospect of negative free cash flow is also not appealing. However, I do not think Intel's dividend is in danger. As of Q4, Intel had $8.5B in cash and $4B in short term assets. That being said, I do think that a large dividend increase for 2013 is now unlikely. Intel will, in my opinion, give instead a token increase of the dividend (less than 5%).
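The whole chain above fits in a short script. The sketch below follows the article's arithmetic step by step, including its choice of subtracting taxes from operating income, so the hard-coded percentages and dollar amounts are the article's own assumptions, not fresh data:

```python
def scenario(growth, margin, opex, capex, depreciation=6.8,
             other_gain=0.1, tax_rate=0.25, shares=5.095, rev_2012=53.3):
    # All dollar figures in billions; inputs are the article's assumptions.
    revenue = rev_2012 * (1 + growth)
    gross = revenue * margin
    op_income = gross - opex
    taxes = (op_income + other_gain) * tax_rate   # taxes on taxable income
    net = op_income - taxes                       # as computed in the article
    ocf = net + depreciation                      # operating cash flow
    return {"net": net, "eps": net / shares, "fcf": ocf - capex}

best = scenario(0.05, 0.62, 19.0, 12.5, depreciation=6.9)
worst = scenario(0.01, 0.58, 19.4, 13.5, depreciation=6.7)
midpoint = scenario(0.03, 0.60, 19.2, 13.0)

print(round(midpoint["fcf"], 2))  # 4.08, below the ~$4.58B annual dividend bill
```

Running all three scenarios reproduces the article's FCF range of roughly $2.0B to $6.2B and EPS range of $1.74 to $2.31.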
computation in a sentence

Example sentences for computation
These days it seems theories of mind and theories of computation can't help but have deep implications for each other. As computation power has increased and data sets have grown, computers can now uncover more and more arbitrage opportunities. The computation may be complex, but it is not impossible. The resources of this country are almost beyond computation. The decision problem asks, in essence, whether reasoning can be reduced to computation. Computation, in any case, is impossible to dramatize. There is much more computation needed for the same problem done by clustering than if it were done the standard way. The trouble is this computation contains so many imponderables. Number two, our contract doesn't say anything about such a computation. Simply grade the e-mail, and include it in the student's final grade computation. Costs of communication and computation have tumbled. But keeping track of who to groom, and why, demands quite a bit of mental computation. The computation itself is performed by a further series of laser pulses. The computation involved must be done fast, since the mirror has to respond much more quickly than the blink of an eye. By reducing the amount of computation done on board the device, caching speeds things up and saves battery life in the process. If computation is the same thing as cognition then these results can be accepted. It often, indeed usually, changes with time, thereby invalidating even an accurate initial computation. Seventh-graders plunge into a math program that combines straight computation with sophisticated problems in symbolic logic. Some of us must wish that ubiquitous computation would simply go away and leave us alone. So far, however, quantum computation has not been tested in the laboratory. The limitations of data collection and computation made precise predictions and good decisions difficult to make.
And you may not have any conscious access to how the computation was made. On it is an intricate computation in a neat, squarish hand. But the jacket-less book is far from boring: it's a computation book, complete with stains and handwriting. Such computers would harness the physical properties of quantum bits, or qubits, to expand the reach of computation. One specialized in engineering, one in chemistry and one in computation. Work probably needs to be done to scale up computation on the one hand, and to increase accuracy of the computation on the other. Her research made advances in symbolic computation and algebraic algorithms, including ideas that can be used in cryptography. The primary component in any valuation computation is cash flow. These days, that answer is becoming less popular all the time, because of a seemingly unrelated field: quantum computation. It has long seemed to me the easiest solution to why our reasoning is flawed is that computation is costly. While still in his early thirties he created the theoretical framework for an entirely new discipline called quantum computation. If you think of the brain as a computer, all of a sudden computation takes on a mysterious quality. The remaining critical component they could not replace was computation. However, the point of the computation is to understand why it prevails. It's a window into an ungodly amount of computation and engineering innovation and talent. If you have a powerful enough computation system, then you really don't need anything else. The theory of computation has had a profound influence on philosophical thinking. When a maze is created, the answer is already embedded in its structure, well before any computation begins. One place where this looming problem is particularly acute is in the ultrafast clocks used to pace computation. These longer response times were a reflection of heavier computation requirements. Computation goes as the cube of the processing gain. 
Famous quotes containing the word computation
Human beings are distinguished by a capacity for experience as well as by their behavior, and homosexuality is as much a...
Computers are good at swift, accurate computation and at storing great masses of information. The brain, on...
I suppose that Paderewski can play superbly, if not quite at his best, while his thoughts wander to the other end of the...
FOM: P=NP
Harvey Friedman friedman at math.ohio-state.edu
Sat Dec 20 00:19:44 EST 1997

This is a reply to Martin Davis, 11:08AM 12/19/97.

>OK. Let me stick my neck out. I think the betting is 50-50 on P=NP.

Not to the people I talk to. Has there been any survey of opinion on this matter? It would be quite interesting to conduct such a survey in the computer science community. And it would be a lot easier to conduct such a survey on P=NP than on the continuum hypothesis - as Neil surely knows by now. I'll stick my neck out. The survey will show at least 95% notP=NP among theoretical computer scientists. And at least 90% among academic computer scientists as a whole. Why don't you embarrass me by citing an existing survey or conducting one (off the fom of course)?

>heuristic evidence on the basis of which P \ne NP is so widely believed is
>based on what I call the Cook-Karp thesis identifying P-time computability
>with "feasible" computability. And the evidence for this is very weak

From the asymptotic viewpoint, identifying P-time with "feasible" is a very sensible first pass. Since our ignorance is so extreme, we don't really need any second, third, etcetera passes at the moment.

>I am quite prepared to believe that there is no very good algorithm
>for any of the myriad of NP-complete problems that have been produced, but I
>know of no good reason to believe that there are no P-time algorithms for
>them (say with asymptotic running time 10000(n^1000)).

This illustrates what is so great about this problem. It's the flexibility. Suppose you are right - that there is no "very good algorithm" for the myriad of NP-complete problems. And someone shows that there is a polynomial time algorithm for them with ridiculous exponents and coefficients. Then the problem takes on the altered form of, e.g.: is there a quadratic algorithm for (most of) the myriad of NP-complete problems? Then this problem assumes the same importance as the original problem.
Of course, one can get prematurely fussy at this stage, and worry about the realism surrounding all asymptotic algorithms. But then finite model complexity kicks in. The problem survives practically no matter what happens!! Every time there is a surprise in the = direction - there hasn't been any real surprise yet - there is a corresponding adjustment to the problem that takes over. Surely from this point of view, Martin, you will agree that there will be crucially important negative results - eventually?

By the way, Lou - as you might have heard, Michael Freedman of Poincare conjecture fame has decided to move to Microsoft and work on P=NP. Have you considered contacting him to find out some of the considerations in his change of research direction, in order to test out some of your ideas (that I have pretty much called plain old stubborn)?

>>P=NP is a deep conceptual problem as well as a technical problem.

>If indeed P \ne NP, then one would have to agree that deep conceptual issues
>are involved. If it turns out the other way, many a dissertation in CS will
>collapse along with the P-time hierarchy, but the problem itself will in
>this case be seen as of little conceptual interest in itself.

In light of the point that I am making - the inexhaustible adjustments of the P=NP question - your conclusion would be unwarranted. Agreed?

Thank you for your help in keeping the fom at an interesting, timely, useful, and professional level.
1. Use of Elimination Theory for the Computation of Minimal Polynomials of Quartic Imprimitive Extensions of the Rational Number Field. Konstantinos Draziotis (Technological and Educational Institute of Kavala, Greece)
2. Algebraic closure models of stratified turbulent boundary layers. Oleksii Rudenko (Weizmann Institute of Science, Rehovot, Israel)
3. Symbolic Computational Methods in Nonequilibrium Thermodynamics and in Constitutive Theory. Heiko Herrmann (Institute of Cybernetics, Tallinn University of Technology, Estonia)
4. Symbolic Computation in Studies of Dynamical Systems. Valerij Romanovskij (Center of Applied Mathematics and Theoretical Physics, University of Maribor, Slovenia)
5. Two Topics in Theorem Proving and Symbolic Computation. Andrei Voronkov (School of Computer Science, University of Manchester, UK)
6. Computing with Underspecified Matrices. Mathematical Theory Exploration. Volker Sorge (School of Computer Science, University of Birmingham, UK)
7. Learning from Theorema's expertise in formalizing mathematics. Manuel Maarek (School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK)
8. Dynamics of mesoscopic systems. Victor Lvov (Weizmann Institute of Science, Rehovot, Israel)
9. Analytical solution to the model of developing boundary layers. Oleksii Rudenko (Weizmann Institute of Science, Rehovot, Israel)
10. Cross-over from Regular to Chaotic behavior in discrete wave systems. Sergey Nazarenko (Mathematics Institute, University of Warwick, UK)
11. Scheme-based theory exploration in the Theorema system. Adrian Craciun (Institute e-Austria, West University of Timisoara, Romania)
12. Algebraic Methods in Control Design. Enrique Pico Parco (Dept. of Systems Engineering and Control, Technical University of Valencia, Spain)
13. Symbolic Computation in Program Verification. Laura Kovács (École Polytechnique Fédérale de Lausanne (EPFL), Switzerland)
14.
Applications of the symbolic computation to 1D and 2D finite element electromagnetic field problems. Dumitru Cazacu (Dept. for electronics, communications and computers, University of Pitesti, Romania)
15. Development of Fast "Rational" Noncommutative Groebner Bases in Ore-Localized Algebras of Operators and Applications to Special Functions. Viktor Levandovskyy (RWTH Aachen University, Germany), Hans Schoenemann (Technical University of Kaiserslautern, Germany) and Oleksandr Motsak (Technical University of Kaiserslautern, Germany)
16. Cluster Dynamics of Planetary Waves. Victor Lvov (Weizmann Institute of Science, Rehovot, Israel)
17. Toward stochastic description of large triad-clusters. Anna Pomyalov (Weizmann Institute of Science, Rehovot, Israel)
18. Cluster computations in laminated wave turbulence. Miguel Bustamante (University of Warwick, UK)
19. Factorization of linear partial differential operators. Wilhelm Plesken (RWTH Aachen University, Germany) and Arne Lorenz (RWTH Aachen University, Germany)
20. Differential invariants of Lie groups. Evelyne Hubert (INRIA Sophia Antipolis, France)
21. Ideal Intersections in Rings of Partial Differential Operators. Fritz Schwarz (Fraunhofer Institute for Algorithms and Scientific Computing SCAI, Germany)
22. Applications of Moving Frames. Elizabeth L. Mansfield (University of Kent, UK)
23. From functions to numeric interpretations: mechanizing proofs of termination using numeric algebras. Salvador Lucas (Universidad Politecnica de Valencia, Spain)
24. Implementing the DPLL Method within Multi-Domain Logic to Solve the SAT Problem. Gabor Kusper (Eszterhazy Karoly College, Hungary)
25. Implementations and applications of univariate real solving. Elias Tsigaridas (INRIA Sophia-Antipolis, France)
26. Automated Theory Exploration. Roy McCasland, Lucas Dixon, Lilia Georgieva, Markus Guhe, Fiona McNeill, Omar Montano-Rivas, and Alison Pease (University of Edinburgh, UK)
27.
Methods for Creative Computer Supported Mathematical Theory Exploration. Adrian Craciun (Institute e-Austria, West University of Timisoara, Romania) 28. Differential Equations with Nonlocal Boundary Conditions. Sigita Peciulyte (Vytautas Magnus University, Kaunas, Lithuania) and Justina Jachimaviciene (Institute of Mathematics and Informatics, Vilnius, Lithuania). 29. Stability Analysis of the Optical Resonators toward Feedback-controlled Radiation Pressure Cooling. Mark Vilensky (Weizmann Institute of Science, Israel). 30. Symbolic Computation in Mathematics and Control. Jenny Santoso (University of Stuttgart, Germany). 31. Algebraic Analysis and Computer Algebra. Jean-Francois Pommaret (Ecole Nationale des Ponts et Chaussées, France) and Alban Quadrat (INRIA Sophia Antipolis, France). 32. Differential-Difference Equations Satisfied by Higher Hypergeometric Functions. Diego Dominici (Technische Universitaet Berlin, Germany) 33. Differential implicitization and differential parametrization. Sonia Rueda (Universidad Politecnica de Madrid, Spain) 34. Mathematical General Relativity. Juan Antonio Valiente Kroon (Queen Mary, University of London, UK) 35. Towards A Framework for Practical Computer-Supported Mathematical Theory Exploration. Adrian Craciun (West University of Timisoara, Romania) 36. Efficient Algorithms for Geometric Optimization Problems. Pedro Ramos (Universidad de Alcala, Spain) 37. Towards Weighted Linear Temporal Logic. George Rahonis (Aristotle University of Thessaloniki, Greece) 38. Mathematical Modelling of Thermal Processes in Laser and Electrothermal Technologies. Gerda Jankeviciute (Vytautas Magnus University, Kaunas, Lithuania) 39. Reconstruction of Geometry of Complex 3D Objects for CAD Systems, with Low Numbers of Measurements Data. Michal Rychlik (Poznan University of Technology, Poland) 40. Differential Equations with Different Nonlocal Conditions. Zivile Jeseviciute (Institute of Mathematics and Informatics, Vilnius, Lithuania) 41.
Learning Symbolic Computation Software. Markos Farao (National and Kapodistrian University of Athens, Greece) 42. Learning Symbolic Computation Techniques for Polynomial Algebra. Maria Sofouli (National and Kapodistrian University of Athens, Greece) 43. Algorithmic Methods for Algebraic Curves and Applications. Juana Sendra Pons (Universidad Politecnica de Madrid, Spain) 44. Algebraic Analysis of Stability and Bifurcation for Biological Systems. Geometric Reasoning and Knowledge Management. Dongming Wang (CNRS, France) 45. Efficient Algorithms for Combinatorial Designs using Computer Algebra Tools and Symbolic Computation. Dimitris Simos (National Technical University of Athens, Greece) 46. Debugging of Declarative Programs. David Insa Cabrera (Universidad Politecnica de Valencia, Spain) 47. Euler Sums of Hyperharmonic Numbers and Applications of Euler-Seidel Matrices. Aihan Dil (University of Akdeniz, Turkey) 48. Computation of pi-Flat Outputs of Linear Time-Varying Control Systems with Delays. Felix Antritter (Universitaet der Bundeswehr Muenchen, Germany) 49. Characterization of Polynomials Satisfying a Four Term Recurrence. Diego Dominici (Technische Universitaet Berlin, Germany) 50. Sage. Simon King (National University of Ireland, Galway), Sebastian Pancratz (University of Oxford, UK) and Richard Kreckel (Erlangen, Germany) 51. Singular. Alexander Dreyer (Fraunhofer ITWM, Germany) and Oleksandr Motsak (University of Kaiserslautern, Germany) 52. Special Functions. Nico Temme (CWI, Netherlands), Fredrik Johansson (Chalmers University of Technology, Sweden), Stephen Buckley (University of Oxford, UK). 53. Differential Algebra. Felix Ulmer (Universite de Rennes 1, France), Alban Quadrat (INRIA Sophia Antipolis, France), and Thomas Bächler (RWTH, Aachen, Germany) 54. Cryptography and Sage.
Martin Albrecht (Royal Holloway, University of London, UK), Ciaran Mullan (Royal Holloway, University of London, UK), and Sedat Akleylek (Middle East Technical University, Ankara, Turkey) 55. Number Theory/Modular Forms. Sever Achimescu (Institute of Mathematics, Romanian Academy) 56. Algebraic Ordinary Differential Equations of Order 1. Rafael Sendra (University of Alcalá, Spain) 57. Weyl Algebras and D-Modules. Daniel Andres, Albert Heinle, and Viktor Levandovskyy (RWTH Aachen, Germany) 58. Desingularization and Simplification Methods for Linear Differential Systems. Carole El Bacha, Moulay A. Barkatou, and Thomas Cluzeau (University of Limoges, France) 59. Integro-Differential Operators. Georg Regensburger (INRIA, France) and Markus Rosenkranz (University of Kent, UK) 60. Differential Elimination for Analytic Functions. Wilhelm Plesken and Daniel Robertz (RWTH Aachen, Germany) 61. Algebraic Limit Cycles of Polynomial Vector Fields in R^2. Jaume Llibre (Universitat Autonoma de Barcelona, Spain) 62. Pseudogroups, Their Invariants, and Noether's Second Theorem. Elizabeth Mansfield (University of Kent, UK) 63. Spencer Operator and Macaulay Inverse System. Jean-Francois Pommaret (Ecole Nationale des Ponts et Chaussées, France) 64. Triangularization of General Linear Systems of Partial Differential Equations Based on Pure Differential Modules. Alban Quadrat (INRIA, France) 65. Solving Linear Inhomogeneous Differential Equations. Fritz Schwarz (Fraunhofer Institute for Algorithms and Scientific Computing SCAI, Germany) 66. Polynomial Equations. Lorenzo Robbiano (University of Genoa, Italy) 67. Special Functions and Applications in Combinatorics. Clemente Cesarano (International Telematic University, Rome, Italy) 68. Variable Tree Automata. Irini Eleftheria Mens (Aristotle University of Thessaloniki, Greece) 69. Weighted LTL with Discounting. Eleni Mandrali and Marilena Vretta (Aristotle University of Thessaloniki, Greece) 70. Decomposition of Varieties.
Michael Möller (TU Dortmund, Germany) 71. Symmetries, Fine Grading of sl(n,C) and Mutually Unbiased Bases. Miroslav Korbelar (Masaryk University, Brno, Czech Republic) 72. Grammars, Automata, Algebras, Coalgebras. Peter Padawitz (TU Dortmund, Germany) 73. Orthonomic Differential Systems. Michal Marvan (Silesian University in Opava, Czech Republic) 74. Combinatorial Applications of Gröbner Bases. Lajos Ronyai (Computer and Automation Institute, Hungarian Academy of Sciences) 75. Multi-Domain Logic as a Tool for Solving Bounded Model Checking Problems. Gabor Kusper and Gergely Kovásznai (Eszterhazy Karoly College, Hungary) 76. Hadamard Matrices, Designs, Secret-Sharing Schemes. Zlatko Varbanov (University of Veliko Tarnovo, Bulgaria) 77. Equational Theories for Automata. Zoltan Esik (University of Szeged, Hungary) 78. Diophantine Equations. Dimitrios Poulakis (Aristotle University of Thessaloniki, Greece) 79. Conditional Rule-Based Transformations for Unranked Trees. Besik Dundua (University of Porto, Portugal) 80. String Rewriting and Step by Step Decoding in Group Metrics. Emilio Suarez Canedo (University of Valladolid, Spain) 1. Symbolic Computation in Nanocomposites Research. Mario Stiavnicky (Academy of Armed Forces of General Milan Rastislav Stefanik, Slovakia) 2. Symbolic Computational Methods in Nonequilibrium Thermodynamics and in Constitutive Theory. Heiko Herrmann (Institute Of Cybernetics, Tallinn University of Technology, Estonia) 3. Symbolic Computation in Studies of Polynomial Dynamical Systems. Valerij Romanovskij (Center of Applied Mathematics and Theoretical Physics, University of Maribor, Slovenia) 4. Algebraic Error-Correcting Codes and their Applications. Emanuele Betti (Dept. of Mathematics, University of Florence, Italy) 5. Towards Automatic Proofs of Inequalities Involving Elementary Functions. Behzad Akbarpour (Computer Laboratory, University of Cambridge, UK) 6. Integrable PDE's: Algebraic Structures, Classification Problems, Solution Methods.
Sara Lombardo (Dept. of Mathematics, Vrije Universiteit Amsterdam, The Netherlands) 7. Research Topics in Nonlinear Intact Ship Dynamics Where Symbolic Computation Tools could be Applied. Gabriele Bulian (Dept. of Naval Architecture, Ocean and Environmental Engineering, University of Trieste, Italy) 8. Connecting Dynamical Clients with Computer Algebra Systems using a Web- or a Grid service. Marc Frincu (Dept. of Computer Science and Mathematics, West University of Timisoara, Romania) 9. Commutative and Computational Algebra in Cryptography. Anna Rimoldi (Dept. of Mathematics, University of Trento, Italy) 10. Determining Minimal Graded Free Resolutions of Special Classes of Homogeneous Ideals in a Polynomial Ring. Oscar Fernandez Ramos (Faculty of Sciences, Dept. of Algebra, Geometry, and Topology. University of Valladolid, Spain) 11. Investigation on the Structure of Reduced Configuration and Phase Space of Gauge Field Theory. Szymon Charzynski (Center for Theoretical Physics, Polish Academy of Sciences, Warsaw, Poland) 12. Implementation of root isolation techniques. Multihomogeneous resultant matrices. Training in Axiom and CoCoA. Angelos Mantzaflaris (Department of Informatics and Telecommunications, University of Athens, Greece) 13. Counting the number of RNA structures. Mohammad Ganjtabesh (Laboratoire d'Informatique (LIX), Ecole Polytechnique, France) 14. Absolute factorization of polynomials with several variables and irreducible decomposition of curves in C^n. Cristina Bertone (Department of Mathematics, Universita di Torino, Italy) 15. Exact resonances in three- and four-waves interaction processes of waves of different nature. Oleksii Rudenko (Weizmann Institute of Science, Rehovot, Israel) 16. Symbolic Computations in Studies of Polynomial Systems of Differential Equations. Valerij Romanovskij (Center of Applied Mathematics and Theoretical Physics, University of Maribor, Slovenia) 17. Algebraic Methods in Control Design. Enrique Pico Parco (Dept.
of Systems Engineering and Control, Technical University of Valencia, Spain) 18. Integral solutions of curves of genus 0 over arbitrary number fields. Paraskevas Alvanos (Aristotle University of Thessaloniki, Greece) 19. Solving Polynomial Systems of Differential Equations. Valerij Romanovskij (Center of Applied Mathematics and Theoretical Physics, University of Maribor, Slovenia) 20. Efficient Algorithms for Combinatorial Designs using Computer Algebra Tools and Symbolic Computation. Dimitris Simos (National Technical University of Athens, Greece) 21. Symbolic Computations in Studies of Polynomial Systems of Differential Equations. Valerij Romanovskij (Center of Applied Mathematics and Theoretical Physics, University of Maribor, Slovenia) 22. Algorithms for diagonalization of matrices over Euclidean Ore domains and Jacobson form. Viktor Levandovskyy (RWTH, Aachen, Germany)
MathGroup Archive: March 2003

RE: a little more complex list operation

• To: mathgroup at smc.vnet.net
• Subject: [mg40256] RE: [mg40237] a little more complex list operation
• From: "David Park" <djmp at earthlink.net>
• Date: Fri, 28 Mar 2003 04:31:09 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com

Mathematica is great at this stuff, and if you expect to do a lot of it you should read up on Map and pure functions.

data = {{x1, y1}, {x2, y2}, {x3, y3}};

{f[First[#]], g[Last[#]]} & /@ data
{{f[x1], g[y1]}, {f[x2], g[y2]}, {f[x3], g[y3]}}

Another method that has a pure function with another pure function inside:

MapThread[#1[#2] &, {{f, g}, #}] & /@ data
{{f[x1], g[y1]}, {f[x2], g[y2]}, {f[x3], g[y3]}}

Here is an example with specific functions, squaring for f and Cos for g. We have more pure functions embedded. Remember that a pure function stands in the place of a name of a function.

(MapThread[#1[#2] & , {{#1^2 & , Cos}, #1}] & ) /@ data
{{x1^2, Cos[y1]}, {x2^2, Cos[y2]}, {x3^2, Cos[y3]}}

Probably not as efficient for long data lists is

data /. {x_, y_} :> {f[x], g[y]}
{{f[x1], g[y1]}, {f[x2], g[y2]}, {f[x3], g[y3]}}

I don't know exactly what you mean by "applying only one transform". It is probably not f[{{x1, y1}, {x2, y2}, {x3, y3}}] or

f /@ data
{f[{x1, y1}], f[{x2, y2}], f[{x3, y3}]}

but more likely

Map[f, data, {2}]
{{f[x1], f[y1]}, {f[x2], f[y2]}, {f[x3], f[y3]}}

f @@ # & /@ data
{f[x1, y1], f[x2, y2], f[x3, y3]}

You can see that there are many ways of manipulating data lists with Mathematica. The ability of functional programming to manipulate data structures is one of Mathematica's neatest features.

David Park
djmp at earthlink.net

From: Nathan Moore [mailto:nmoore at physics.umn.edu]
To: mathgroup at smc.vnet.net

Thanks very much for all the previous replies. Another simple question - though I guess this will require more complex syntax. Suppose I have the data

data = {{x1,y1},{x2,y2},{x3,y3}}

and I want to produce the result (for fitting/plotting)

data2 = {{f[x1],g[y1]},{f[x2],g[y2]},{f[x3],g[y3]}}

I assume that if there was only one transform I wanted to apply, f[x], then I could say data2 = f[data], but how about this more complicated transform with functions f and g?

again thanks! Your replies have been very helpful!

Nathan Moore, University of Minnesota Physics
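For readers coming from other languages, the same per-pair transform can be sketched in Python; the concrete functions below (squaring and adding one) are arbitrary stand-ins for f and g, not part of the original thread:

```python
def transform(data, f, g):
    """Apply f to the first and g to the second element of each pair."""
    return [(f(x), g(y)) for x, y in data]

data = [(1.0, 2.0), (3.0, 4.0)]
result = transform(data, lambda x: x**2, lambda y: y + 1)
# result == [(1.0, 3.0), (9.0, 5.0)]
```

This mirrors David Park's `{f[First[#]], g[Last[#]]} & /@ data` idiom: one pass over the pairs, with a different function applied to each column.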
Frank Morgan's Math Chat - Numbers on your Computer Screen

July 19, 2001

Old Challenge (Branislav Kisacanin). There are some positive numbers on my computer screen, and each of them is half the sum of the others. What could the numbers be?

Answer. There must be exactly three numbers, and they must all be the same. If there were more than three, the smallest would be less than half the sum of the others. If there were just two, the larger would be more than half the other. If there were just one, it would be more than zero (half the sum of the others). Furthermore, if each of the three equals the average of the other two, all three must be equal. (Awards this week to first-time winners: Laurence Draper, Escantidu, Bappaditya Das, Todd Culbertson, and Sonny Kunnakkat.)

Al Zimmermann gives two other technically correct solutions: (1) there could be zero numbers on the screen, for which the condition holds vacuously; (2) there could be infinitely many infinities on the screen.

New Challenge (inspired by a walk across the campus of the University of California at Berkeley). If two houses on the side of a hill are at the same height, is there a path between them at that height, without going uphill or downhill?

A Mathematician at Heaven's Gate: a play by Frank Morgan, continued from last column.

Scene III

He finds himself surrounded by the most wonderful toys you could imagine. One is a small counterexample to the Poincare conjecture. You could understand it at a glance, but playing with it gives the deepest intellectual satisfaction and joy. There is a model of the Weaire-Phelan partitioning of three-space, which includes a most marvelous proof at a glance. There is a multidimensional model in which all of the primes are visible from infinitely many sides, each exhibiting a beautiful property, from Goldbach's Conjecture to the Riemann hypothesis, all somehow visible at once.
Most of the toys are beyond description. He is soon approached by an enthusiastic welcoming party. Incredibly enough, he really is the first mathematician, and everyone is very interested. A woman speaks up: "Before I came here, I was afraid of math. But now I'm not afraid of anything." She shows him an infinite dimensional box with a theorem on every side, surrounded by theorems implying it or implied by it. He is entranced: "Amazing. It makes the whole structure of geometry apparent in an instant." Conclusion in the next Math Chat. Copyright 2001, Frank Morgan. Send answers, comments, and new questions by email to Frank.Morgan@williams.edu, to be eligible for Flatland and other book awards. Winning answers will appear in the next Math Chat. Math Chat appears on the first and third Thursdays of each month. Prof. Morgan's homepage is at www.williams.edu/Mathematics/fmorgan. THE MATH CHAT BOOK, including a $1000 Math Chat Book QUEST, questions and answers, and a list of past challenge winners, is now available from the MAA (800-331-1622).
Logical Paradoxes

Hilbert’s Hotel

Hilbert’s Hotel is a (hypothetical) hotel with an infinite number of rooms, each one of which is occupied. The hotel gives rise to a paradox: the hotel is full, and yet it has vacancies.

That the hotel is full is obvious. It has an infinite number of rooms, and an infinite number of guests; every room is occupied.

That the hotel has vacancies is a little more difficult to demonstrate. Suppose that a new visitor arrives; can he be accommodated? At first it seems that he cannot, but then the hotel clerk has an idea: He moves the guest in Room 1 to Room 2, and the guest in Room 2 to Room 3, and so on. Every guest is moved to the next room along. For every guest, in every room, there is another room into which they can be moved. This leaves Room 1 vacant for the new visitor. Although the hotel is full, then, the new guest can be accommodated in Room 1.

It is not only one new guest that can be accommodated; in fact, Hilbert’s Hotel has an infinite number of vacancies. By moving every guest to the room the number of which is double the number of their current room, all of the odd numbered rooms can be vacated for new guests. There are, of course, an infinite number of odd numbered rooms, and so an infinite number of new guests can be accommodated.
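The two room-shuffles (shift every guest up by one; double every guest's room number) can be sanity-checked on a finite block of rooms. A small sketch, with the range 1..1000 standing in for the infinite room list:

```python
# Shift map: guest in room n moves to room n + 1, freeing room 1.
shift = lambda n: n + 1
# Doubling map: guest in room n moves to room 2n, freeing all odd rooms.
double = lambda n: 2 * n

rooms = range(1, 1001)  # finite stand-in for the infinitely many rooms

# Both maps are injective: no two guests are sent to the same room.
assert len({shift(n) for n in rooms}) == len(rooms)
assert len({double(n) for n in rooms}) == len(rooms)

# Room 1 is freed by the shift; every room occupied after doubling is even,
# so every odd room is freed for a new guest.
assert 1 not in {shift(n) for n in rooms}
assert all(m % 2 == 0 for m in {double(n) for n in rooms})
```

Of course, on a finite hotel the shifted guests would run off the end; the paradox rests precisely on there being no last room.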
give a number x and y coordinates

Is your question how to convert a number in the range 0-63 into the row and column values to index a 2D array? To go the other way, given row and column: row*row_length+column. Use some algebra on that equation to get the other one. Look at using % and /.
If you don't understand my response, don't ignore it, ask a question.

hi sorry about the delay, it was late last night when i posted and i just checked in the morning to see if i had any replies. yeah, so my question, to make it clearer: just say i have columns and rows 4 * 4 and you populate it with the numbers 1-16. in the first row and column is the number 1 and in the last row and column is the number 16. so just say the x y coordinates for the number 1 are 0,0 and the x y coordinates for the number 16 are 4,4. i was just wondering, if i take a random number, how could i automatically give it x y coordinates using java?

Are the numbers in that matrix in proper ascending order?
kind regards, cenosillicaphobia: the fear for an empty beer glass

Using the formula I suggested you compute, you'd subtract 1 from the numbers to make them 0 based.

The problem is easy then (as a matter of fact, you don't need a matrix at all). For any number i in an n*n matrix, that number is stored in row #(i-1)/n and column #(i-1)%n. That's all there is to it.

great guys, thanks for the help, i appreciate it. sorry if the question was a bit dumb, i'm just new to java.

Your y axis is the opposite direction of how it is used in java programming (java is top down). You will have to convert the java coordinates to your coordinates.

my code to assign x y coordinates to positions on the grid is not working properly, any ideas why this would be?

x = (number-1)%9;
y = (number-1)/9;

Can you show some examples? Have several values of number and show the x,y values that are generated by your code. Also show how the squares are numbered. Where are square 1 and square 2?

my code is below. basically i want to take 2 numbers from the user to create a grid, then take a random number from this grid and give it corresponding x y coordinates.

import java.util.Scanner;
import java.util.Random;

public class baby {
    public static void main(String args[]) {
        System.out.println("enter first number for room length: ");
        Scanner scan = new Scanner(System.in);
        int num1 = scan.nextInt();
        System.out.println("enter second number for room length: ");
        int num2 = scan.nextInt();
        int room_length = num1 * num2;
        System.out.println("The room length is :" + num1 + " x " + num2 + " = " + room_length);
        Random furniture = new Random();
        int table = furniture.nextInt(room_length);
        int chair = furniture.nextInt(room_length);
        System.out.println("table random number is: " + table);
        System.out.println("chair random number is: " + chair);
        int x_coordinate = (table - 1) % num1;
        int y_coordinate = (table - 1) / num2;
        System.out.println("x coordinate is: " + x_coordinate + " y coordinate is :" + y_coordinate);
    }
}

Please execute the program and copy and post here the output showing the value of the number and the values of x and y for several values. Add comments to the output that show what the x and y values should be. How are the squares in the grid numbered? Where is square #1 and square #2?

i want the grid to be populated as so: 1 xy coordinates to be 0,3; 2 xy coordinates to be 1,3; 3 xy coordinates to be 2,3; and so on.

examples of program run:

enter first number for room length:
enter second number for room length:
The room length is :4 x 4 = 16
table random number is: 4
x coordinate is: 3 y coordinate is :0

enter first number for room length:
enter second number for room length:
The room length is :4 x 4 = 16
table random number is: 9
x coordinate is: 0 y coordinate is :2

enter first number for room length:
enter second number for room length:
The room length is :4 x 4 = 16
table random number is: 7
x coordinate is: 2 y coordinate is :1

You want square 1 (top left corner) xy coordinates to be 0,3? That is not what you said in post #3. In your post you needed to add comments to the output to show what the values of x,y should be. 7 gives x coordinate: 2 y coordinate: 1 >>> What are the correct values? Take a piece of paper and draw a grid with the squares. In each square write the square number and the x,y values it should have. Then look at the drawing and find the formulas that will generate the desired values from the square number. Post the square contents for a couple of squares, say the top left and the bottom right squares.

1 = 0,3
2 = 1,3
3 = 2,3
4 = 3,3
5 = 0,2
6 = 1,2
7 = 2,2
8 = 3,2
9 = 0,1
10 = 1,1
11 = 2,1
12 = 3,1
13 = 0,0
14 = 1,0
15 = 2,0
16 = 3,0

thats all i can post for tonight, i'll reply to any further posts tomorrow

Now you need to find the formulas to compute those values. Your x,y values appear to be in column, row order, which is the reverse of what I have seen used before. Normally x is the row number and y is the column number. Are you reversing their common usage also? You want x as the column and y as the row?
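Putting the thread's formulas together for the poster's bottom-up y axis: for a grid numbered 1..width*height left-to-right and top-to-bottom, the conversion can be sketched as follows (Python used for brevity; the function and variable names are mine, not from the thread):

```python
def coords(number, width, height):
    """Map a 1-based cell number (numbered left-to-right, top-to-bottom)
    to (x, y), where x is the column and y counts upward from the bottom row."""
    x = (number - 1) % width          # column, as suggested in the thread
    y = height - 1 - (number - 1) // width  # row, flipped so y grows upward
    return x, y

# Matches the poster's 4x4 table: 1 -> (0, 3), 7 -> (2, 2), 16 -> (3, 0).
```

The `height - 1 - ...` flip is exactly the top-down-to-bottom-up conversion the forum replies describe; without it the formula yields Java's usual row index.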
How to Calculate the Kinetic Energy of an Object

You can use physics to calculate the kinetic energy of an object. When you start pushing or pulling a stationary object with a constant force, it starts to move if the force you exert is greater than the net forces resisting the movement, such as friction and gravity. If the object starts to move at some speed, it will acquire kinetic energy. Kinetic energy is the energy an object has because of its motion. Energy is the ability to do work.

So how do you calculate kinetic energy? A force acting on an object that undergoes a displacement does work on the object. If this force is a net force that accelerates the object (according to Newton's second law), then the velocity changes due to the acceleration. The change in velocity means that there is a change in the kinetic energy of the object. The change in kinetic energy of the object is equal to the work done by the net force acting on it. This is a very important principle called the work-energy theorem.

After you know how work relates to kinetic energy, you're ready to take a look at how kinetic energy relates to the speed and mass of the object. The equation to find kinetic energy, KE, is the following, where m is mass and v is velocity:

KE = (1/2)mv^2

Using a little math, you can show that work is also equal to (1/2)mv^2. Say, for example, that you apply a force to a model airplane in order to get it flying and that the plane is accelerating. Here's the equation for net force:

F = ma

The work done on the plane, which becomes its kinetic energy, equals the following:

W = Fs cos θ

Net force F equals mass times acceleration. Assume that you're pushing in the same direction that the plane is going; in this case, cos 0 degrees = 1, so

W = Fs = mas

Assuming constant acceleration, you can tie this equation to the final and original velocity of the object. Use the equation

v_f^2 = v_i^2 + 2as

where v_f equals final velocity and v_i equals initial velocity. Solving for a gives you

a = (v_f^2 - v_i^2) / (2s)

If you plug this value of a into the equation for work, W = mas, you get the following:

W = mas = (1/2)m(v_f^2 - v_i^2)

If the initial velocity is zero, you get

W = (1/2)mv_f^2

This is the work that you put into accelerating the model plane — that is, into the plane's motion — and that work becomes the plane's kinetic energy, KE:

KE = (1/2)mv^2

This is just the work-energy theorem stated as an equation.

You normally use the kinetic energy equation to find the kinetic energy of an object when you know its mass and velocity. Say, for example, that you're at a firing range and you fire a 10-gram bullet with a velocity of 600 meters/second at a target. What's the bullet's kinetic energy? Using the equation to find kinetic energy, you simply plug in the numbers, remembering to convert from grams to kilograms first to keep the system of units consistent throughout the equation:

KE = (1/2)(0.010 kg)(600 m/s)^2 = 1,800 J

The bullet has 1,800 joules of energy, which is a lot of energy to pack into a 10-gram bullet.
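The bullet calculation above takes only a couple of lines to reproduce; here is a sketch in Python (the function name is mine):

```python
def kinetic_energy(mass_kg, speed_m_per_s):
    """KE = (1/2) m v^2, in joules, for mass in kg and speed in m/s."""
    return 0.5 * mass_kg * speed_m_per_s ** 2

# 10-gram bullet at 600 m/s: note the grams-to-kilograms conversion.
ke = kinetic_energy(0.010, 600.0)
# ke == 1800.0 joules
```

Keeping the unit conversion inside the call site (0.010 kg, not 10 g) is the same consistency-of-units point the article makes.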
Albion College Mathematics and Computer Science Colloquium

Title: Unusual Behavior in Rubber Cubes
Speaker: Darren E. Mason, Associate Professor, Mathematics and Computer Science, Albion College

Abstract: In this talk we will consider the mathematical problem associated with special linear deformations of an incompressible and nonlinear elastic cube. We will discover that the problem admits a wide variety of different solutions, depending on the magnitude and direction of external isotropic forces. To understand why certain solutions are preferred by nature, we will then study an associated energy minimization problem that leads to a selection criterion to determine the optimal deformed state of the cube. Finally, we will connect the mathematical appearances of these multiple solutions, natural and mathematical stability, and the fundamentals of bifurcation theory.

Location: Palenske 227
Date: 4/7/2011
Time: 3:10 PM

author = "{Darren E. Mason}",
title = "{Unusual Behavior in Rubber Cubes}",
address = "{Albion College Mathematics and Computer Science Colloquium}",
month = "{7 April}",
year = "{2011}"
Communication theory has been formulated best for symbolic-valued signals. Claude Shannon published in 1948 The Mathematical Theory of Communication, which became the cornerstone of digital communication. He showed the power of probabilistic models for symbolic-valued signals, which allowed him to quantify the information present in a signal. In the simplest signal model, each symbol can occur at index $n$ with a probability $\Pr[a_k]$, $k = 1, \dots, K$. What this model says is that for each signal value a $K$-sided coin is flipped (note that the coin need not be fair). For this model to make sense, the probabilities must be numbers between zero and one and must sum to one.

$$\sum_{k=1}^{K} \Pr[a_k] = 1$$

This coin-flipping model assumes that symbols occur without regard to what preceding or succeeding symbols were, a false assumption for typed text. Despite this probabilistic model's over-simplicity, the ideas we develop here also work when more accurate, but still probabilistic, models are used. The key quantity that characterizes a symbolic-valued signal is the entropy of its alphabet.

$$H(A) = -\sum_{k} \Pr[a_k] \log_2 \Pr[a_k]$$

Because we use the base-2 logarithm, entropy has units of bits. For this definition to make sense, we must take special note of symbols having probability zero of occurring. A zero-probability symbol never occurs; thus, we define $0 \log_2 0 = 0$ so that such symbols do not affect the entropy. The maximum value attainable by an alphabet's entropy occurs when the symbols are equally likely ($\Pr[a_k] = \Pr[a_l]$). In this case, the entropy equals $\log_2 K$. The minimum value occurs when only one symbol occurs; it has probability one of occurring and the rest have probability zero.

Derive the maximum-entropy results, both the numeric aspect (entropy equals $\log_2 K$) and the theoretical one (equally likely symbols maximize entropy). Derive the value of the minimum entropy.

Equally likely symbols each have a probability of $\frac{1}{K}$. Thus, $H(A) = -\sum_{k} \frac{1}{K} \log_2 \frac{1}{K} = \log_2 K$. To prove that this is the maximum-entropy probability assignment, we must explicitly take into account that probabilities sum to one. Focus on a particular symbol, say the first. $\Pr[a_0]$ appears twice in the entropy formula: the terms $\Pr[a_0] \log_2 \Pr[a_0]$ and $\left(1 - (\Pr[a_0] + \dots + \Pr[a_{K-2}])\right) \log_2 \left(1 - (\Pr[a_0] + \dots + \Pr[a_{K-2}])\right)$. The derivative with respect to this probability (and all the others) must be zero. The derivative equals $\log_2 \Pr[a_0] - \log_2 \left(1 - (\Pr[a_0] + \dots + \Pr[a_{K-2}])\right)$, and all other derivatives have the same form (just substitute your letter's index). Thus, each probability must equal the others, and we are done. For the minimum entropy answer, one term is $1 \log_2 1 = 0$, and the others are $0 \log_2 0$, which we define to be zero also. The minimum value of entropy is zero.

A four-symbol alphabet has the following probabilities.

$\Pr[a_0] = \frac{1}{2}$, $\Pr[a_1] = \frac{1}{4}$, $\Pr[a_2] = \frac{1}{8}$, $\Pr[a_3] = \frac{1}{8}$

Note that these probabilities sum to one as they should. As $\frac{1}{2} = 2^{-1}$, $\log_2 \frac{1}{2} = -1$. The entropy of this alphabet equals

$$H(A) = -\left(\tfrac{1}{2}\log_2\tfrac{1}{2} + \tfrac{1}{4}\log_2\tfrac{1}{4} + \tfrac{1}{8}\log_2\tfrac{1}{8} + \tfrac{1}{8}\log_2\tfrac{1}{8}\right) = -\left(\tfrac{1}{2}(-1) + \tfrac{1}{4}(-2) + \tfrac{1}{8}(-3) + \tfrac{1}{8}(-3)\right) = 1.75 \text{ bits}$$
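The four-symbol example can be checked numerically; a short sketch (the `0 log 0 = 0` convention is handled by skipping zero-probability terms):

```python
from math import log2

def entropy(probs):
    """H(A) = -sum p * log2(p) over the alphabet, with 0*log2(0) taken as 0."""
    return -sum(p * log2(p) for p in probs if p > 0)

h = entropy([1/2, 1/4, 1/8, 1/8])
# h == 1.75 bits
```

The two extremes from the text also fall out: a degenerate alphabet `[1.0]` gives entropy 0, and four equally likely symbols `[1/4]*4` give log2(4) = 2 bits.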
CE 200 - Data Analysis and Parameter Estimation - Duke University
CE 200: Engineering Data Analysis
Department of Civil and Environmental Engineering
Edmund T. Pratt School of Engineering
Duke University - Box 90287, Durham, NC 27708-0287
Henri Gavin, Ph.D., P.E., Associate Professor
Spring 2003

Syllabus
Academic Integrity
Numerical Recipes in C ... On Line

CRC Engineering Handbooks On-Line: http://www.ENGnetBASE.com/
The Engineering Handbook, CRC Press, Richard C. Dorf, Editor
The Measurement, Instrumentation, and Sensors Handbook, CRC Press, John G. Webster, Editor

Matlab Tutorials On-Line
Linear Systems Lab from Johns Hopkins University
Engineering Statistics Handbook from NIST
On-Line Statistics Lab from Duke University
On-Line Statistics Lab from Rice University
Weibull.com On-Line Statistical Reliability Resources
StatisticalEngineeirng.com On-Line Statistics Resources
Error Analysis Tutorials from the University of Michigan
The Matlab Statistics Toolbox
The Matlab Optimization Toolbox
The Wavelet Transform from the European Southern Observatory
The Wavelet Digest from E.P.F.L.
More Links to Wavelet Resources from surveillance-video
Introduction to Singular Value Decomposition by Todd Will
Iterative Methods for Optimization, by C.T. Kelley, N.C.S.U.
Math 3016, Optimization, Dr. Huifu Xu, University of Southampton
Least Squares, Modeling, and Signal Processing by James A. Cadzow
Least Squares Optimization by Eero P. Simoncelli
Math 5630, Numerical Optimization, Michigan Tech., Prof. Mark S. Gokenbach
Methods for Nonlinear Least Squares Problems, 2nd ed., Technical University of Denmark, K. Madsen, H.B. Nielsen, O. Tingleff
An Analysis of the Total Least Squares Problem by Gene H. Golub and Charles F. Van Loan
Total Least Squares by Yves Nievergelt
An Introduction to Total Least Squares by P. de Groen
The Extended Least Squares Criterion by Arie Yeredor
A Regularized Total Least Squares Algorithm by Hongbin Guo and Rosemary A. Reneaut
Efficient Algorithms for Solution of Regularized Total Least Squares by Rosemary A. Reneaut and Hongbin Guo
Independent Component Analysis - A Tutorial by Aapo Hyvarinen and Erkki Oja, 1999

Other Courses
Home Page
© 2002-2004 Henri P. Gavin; Updated: 8-14-2003, 9-14-2004
Constructing an injective reduction of equivalence relations

[Metastuff: I asked this question in a slightly different way on mathSE last week, and it didn't go anywhere, which is why I am asking here. I added the DST tag because it's basically a problem about Borel equivalence relations stripped of all the Borelness constraints. I do need help, so helpful redirection is appreciated.]

I am trying to give a somewhat constructive definition of a function. It's somewhat constructive because I'll freely assume that I can well-order any set. Aside from that, I want to say what the function looks like. I have two equivalence relations $E$ and $F$ on spaces $X$ and $Y$, respectively. There are no restrictions on the sizes of anything. I want to define a function $f : X \to Y$ such that $$ x E y \Leftrightarrow f(x) F f(y)\;\;\;\text{ and }\;\;\;f(x) = f(y) \Rightarrow x = y $$ for all $x,y \in X$. This makes $f$ send all points in an $E$-class to the same $F$-class and also be injective on equivalence classes (i.e., injective as $X/E \to Y/F$) and on the underlying space.

Let $I$ be the class of nonzero cardinals. For every $i \in I$, the number of $F$-classes of size at least $i$ is greater than or equal to the number of $E$-classes of size at least $i$. I want to give a mostly-constructive proof that this is sufficient for there to be a function as described above (from $E$ to $F$), i.e., I want to describe the function. I have been struggling with this on and off for several weeks. Below are some possible time-savers for you guys. If you already have a solution, you can skip it.

The problem is extremely easy in the slightly nicer situation where, for every $i \in I$, the number of $F$-classes of size exactly $i$ is greater than or equal to the number of $E$-classes of size exactly $i$. Just partition the set of $E$-classes by size and put a well-order on each set in the partition. Do the same for $F$-classes.
Then send the $n$th $E$-class of size $i$ to the $n$th $F$-class of size $i$. The complication for the original case is that you might have to send an $E$-class of size $i$ to an $F$-class of size $j$ with $i < j$. Two problems arise this way. First, you can't use the larger classes wastefully by sending relatively small classes to them. E.g., if $E$ has solely two classes, one of size $2$ and one of size $5$, and $F$ has solely two classes, one of size $4$ and one of size $6$, you cannot send the class of size $2$ to the class of size $6$. The only way that I can think to avoid this problem is inductively: (i) well-order the classes in some way, (ii) send the least $E$-class to the least $F$-class that is big enough, (iii) remove these, and (iv) repeat from step (ii). This creates the second problem: how to choose the well-order for step (i). If you try, e.g., to order the classes by increasing size with an arbitrary order among classes of the same size, you run into the following problem (as Brian Scott pointed out to me on mathSE a week ago). Suppose $E$ has $\omega$ many classes of size $1$ and one class of size $2$. Suppose $F$ has one class each of every finite size. Then the above won't work because $F$ has order-type $\omega$, but $E$ has order-type $\omega+1$. You can fix this case with the same trick that you use to well-order the rationals. Put the $E$-classes of size $i$ into a column and well-order each column. Then move along the diagonals like so. But it's not clear to me what this looks like when you have any number of columns and rows rather than just countably many. Edit to explain potential solution: It sounds plausible to me that sending an $E$-class to an $F$-class of the smallest available size that is large enough will avoid fatally wasteful assignments regardless of the order in which you make assignments. E.g., given an $E$-class of size $5$, if $F$-classes of sizes $4,7,$ and $9$ are available, choose one of size $7$. 
The problem then is just how to iterate through the $E$-classes. This sounds problematic generally, but my knowledge of ordinals is weak. E.g., is there always some sense in which you can iterate through all the members of an initial ordinal? Put the $E$-classes into an array like this one so that an $E$-class has an index (column,row). Let the index $(s,p)$ mean that $s$ is the size of the $E$-class and $p$ is its arbitrarily-assigned position in the column. Consider the case where you have at most countably many $E$-classes of each size and only countably many possible infinite sizes. That is, $p \in \omega$ and $s \in \omega \times \lbrace 0,1\rbrace$. That is, you have countably many finite sizes (tagged with $0$) and countably many infinite sizes (tagged with $1$). Then you just have two copies of the above array; one for finite sizes and one for infinite sizes. Separately snake through each of them in the way depicted in the linked picture. For the array of finite sizes, this hits indices in this order: $((1,0),1), ((1,0),2), ((2,0),1), ((1,0),3), \ldots$, where the $0$ indicates that you're in the "finite" array. For the array of infinite sizes, this hits the indices in this (analogous) order: $((1,1),1), ((1,1),2), ((2,1),1), ((1,1),3), \ldots$. Here, $(1,1)$ denotes some infinite size such as $\omega$; $(2,1)$ might be $2^\omega$, and so on. Finally, interleave the two orders that you got from snaking through each array. Most simply, you can take one member from each order at a time. This gives: $$((1,0),1), ((1,1),1), ((1,0),2), ((1,1),2), ((2,0),1), ((2,1),1), ((1,0),3), \ldots$$ I apologize for the somewhat cumbersome notation, but I hope that the pattern becomes clear. Incidentally, you won't necessarily have an $E$-class for all of the points in the above arrays. E.g., you might not have any $E$-classes of size $2$. I am assuming the fullest possible case for simplicity, as it still defines a well-order when you remove some of the points. 
Tags: lo.logic descriptive-set-theory co.combinatorics

It might help to make it explicit that under the Axiom of Choice the problem is equivalent to the following one: Given two sets $A,B$ and two ordinal-valued functions $s,t$ on $A,B$ respectively such that for every ordinal $\alpha$ we have $|\lbrace x \in A : s(x) \ge \alpha \rbrace| \le |\lbrace y \in B : t(y) \ge \alpha \rbrace|$, construct an injection $F:A \to B$ such that $t(F(x)) \ge s(x)$ for all $x \in A$. – Trevor Wilson Aug 29 '12 at 13:43

This is a nice problem! – Joel David Hamkins Aug 29 '12 at 14:26

@Joel: Uh-oh. Does that mean it's not getting an answer soon? :^) This is just for an expository paper where I am trying to establish what happens with reductions of equivalence relations generally before I confine myself to Borel reductions of countable Borel equivalence relations. I want to be as constructive as possible now to inform later proofs. I might have made it too difficult by removing the cardinality constraints, esp. because I don't think I appreciate the vastness of the ordinals or even cardinals, which leaves me unsure of whether I am considering all that can happen. – Rachel Basse Aug 29 '12 at 20:44

@Trevor: I edited some further explanation into my question. – Rachel Basse Aug 30 '12 at 0:28

Welcome to MO, Rachel! When JDH compliments your set-theoretical question you know you're on to a good thing. – David Roberts Aug 30 '12 at 0:29

1 Answer

This is a very nice problem, which I like very much. The answer is that yes, indeed, there is such an injective reduction of $E$ to $F$. And one can give a recursive construction. (This answer now incorporates several simplifications to my original construction.) Specifically, let $\delta_\kappa^E$ be the number of $E$ equivalence classes of size at least $\kappa$, and similarly $\delta_\kappa^F$ for $F$. Your assumption is that $\delta_\kappa^E\leq\delta_\kappa^F$ for any cardinal $\kappa$.
As $\kappa$ increases, this number is non-increasing, and since it can drop only finitely many times, because there is no infinite descending sequence of ordinals, it follows that there are only finitely many values for $\delta_\kappa^E$. Let $\kappa_1$ be the least cardinal such that $E$ has no class of size $\kappa_1$ or larger, and so $\delta_{\kappa_1}^E=0$. Below this, there is some minimal $\kappa_0\lt\kappa_1$ where $\delta_\kappa^E$ has constant nonzero value $\delta$ for all $\kappa\in[\kappa_0,\kappa_1)$. Consider the largest classes of $E$, those of size at least $\kappa_0$. We may enumerate them in a sequence of length $\delta$. We shall map them to corresponding $F$ classes of equal or larger size in a recursive procedure. Specifically, at stage $\alpha<\delta$, consider the $\alpha^{\rm th}$ class of $E$ of size at least $\kappa_0$; it has some size $\kappa$; we've used up only $|\alpha|$ many $F$ classes of size at least $\kappa$; since this is less than $\delta$, we still have $F$ classes of size at least $\kappa$ remaining, and so we may map the $\alpha^{\rm th}$ class to any unused $F$ class of equal or larger size (and there is no need to be optimal or to minimize size here). Thus, we are able to map all the $E$ classes of size at least $\kappa_0$ to distinct corresponding $F$ classes of equal or larger size. Consider the $E$ and $F$ classes that remain, a smaller instance of the problem. Notice that this smaller instance of the problem still satisfies the size hypothesis, using the minimality of $\kappa_0$, since for $\kappa<\kappa_0$ we have that $\delta^E_\kappa$ is strictly larger than $\delta$, and hence also $\delta^F_\kappa$ is that large. Since we used up exactly $\delta$ many $F$ classes of size at least $\kappa_0$, there are still sufficient $F$ classes of size at least each $\kappa\lt\kappa_0$.
(To illustrate with the example from your question, where you had infinitely many $E$ classes of size $1$ and only one of size $2$, my algorithm proceeds here by mapping the size $2$ class first, and then realizing that the hypothesis is still true for what remains.) Furthermore, this smaller instance of the problem now has a strictly smaller version of $\kappa_0$, and it can therefore be handled by induction. So the proof is complete.

To understand the reduction constructively, without induction, one should imagine it working like this. The reduction breaks into finitely many pieces. The first piece consists of the largest classes, on a maximal interval of cardinals $\kappa$ where $\delta_\kappa^E$ is constant value $\delta$, whether finite or infinite. Since there are $\delta$ classes here, we enumerate them in a $\delta$ sequence, and so we never run out of classes on the $F$ side during the course of this $\delta$ process. Then, we move to the preceding maximal interval of cardinals $\kappa$ on which $\delta_\kappa^E$ has a new strictly larger constant value. Our previous part of the reduction does not interfere with this part, precisely because the new $\delta$ value is now strictly larger than on the first piece, and so there are plenty of $F$ classes of the desired size. And so on for finitely many steps, thereby completing the entire reduction.

I find some similarity between the posted problem and a lopsided version of Hall's marriage theorem. Do you see it also? If so, there are some concerns in the infinite case, and I do not know of any constructive versions of proofs of Hall's theorem. I would be interested in your thoughts on the matter. Gerhard "Ask Me About System Design" Paseman, 2012.08.30 – Gerhard Paseman Aug 30 '12 at 15:36

Gerhard, that is an interesting idea. Here, the reductions need not be bijective, which makes it different from the marriage problem, but I agree that there is a family resemblance.
– Joel David Hamkins Aug 30 '12 at 15:43

I think a bit more argument is required to show that you can remove $\delta$ many $F$-classes of size at least $\kappa_0$ while maintaining the desired inequalities for the remaining part of $F$. You can remove half of the classes of any given size above $\kappa_0$ from $F$, provided that there are infinitely many, but what if there are only finitely many classes of that size? For example, if for any given size above $\kappa_0$ there is only one $F$-class of that size, you can still do it but you have to use another method. – Trevor Wilson Aug 30 '12 at 17:14

Ok, your last post addresses my quibble. Thanks. – Trevor Wilson Aug 30 '12 at 17:46

@Joel: I'll read your simplified argument now. So that you don't feel ignored in the meantime: I thought your original argument was very nice because (i) I was mildly perturbed that I hadn't used the non-increasing property of these sequences yet and (ii) I think that handling the largest classes first makes it clearest that you avoid problems. So you gave me those two bonuses also. I am currently thinking about how to say more about the assignments within each constant segment. Since $\delta_{\kappa_0} = \delta$ is the number of classes in the segment, it at least gives me a limit to work with. – Rachel Basse Aug 30 '12 at 23:27
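For finite instances, the largest-first procedure described in the answer above can be sketched in a few lines. This is only an illustration (the function `greedy_assignment` and its interface are mine, not from the thread); it happens to pick the smallest sufficient $F$-class, though as the answer notes, in decreasing order any sufficiently large one works:

```python
import bisect

def greedy_assignment(e_sizes, f_sizes):
    """Pair each E-class (represented by its size) with a distinct
    F-class of equal or larger size, handling the largest E-classes
    first.  Returns a list of (e_size, f_size) pairs, or None when no
    sufficiently large F-class remains (the size hypothesis fails)."""
    remaining = sorted(f_sizes)              # unused F-class sizes, ascending
    pairs = []
    for e in sorted(e_sizes, reverse=True):
        i = bisect.bisect_left(remaining, e) # smallest unused size >= e
        if i == len(remaining):
            return None
        pairs.append((e, remaining.pop(i)))
    return pairs

# The example from the question: E has classes of sizes 2 and 5, F has
# classes of sizes 4 and 6.  Handling the size-5 class first prevents
# the size-6 class from being wasted on the size-2 class.
print(greedy_assignment([2, 5], [4, 6]))  # [(5, 6), (2, 4)]
```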
Gelfand Correspondence Program in Mathematics -> Books and Assignments

Books and Assignments used in GCPM

The educational materials for GCPM consist of books and assignments written for the program by I. Gelfand and his colleagues. Three books are currently being used in GCPM:
Method of Coordinates
Functions and Graphs
Algebra

Two of these books, Functions and Graphs and Method of Coordinates, were written for the Moscow School by Correspondence. They were and remain popular, and not only among students of this School. It is common to find these books in the home library of parents who want to enhance their children's curiosity and intellectual interest. These books present basic mathematical concepts to students in a clear and simple form. For this reason they are well suited for independent study, allowing children to work on them without their parents' guidance. These books have been translated into many languages and are used in schools throughout the world. The English translations, published in 1990 by Birkhauser, are now used in GCPM. The third book, Algebra (published by Birkhauser in 1993), was written in English by I. Gelfand and A. Shen for use in GCPM. Two additional books, Trigonometry (by I. Gelfand, M. Saul) and Geometry (by T. Alekseyevskaya, I. Gelfand), will be published soon.

The essential part of participation in the GCPM is the work with the assignments. There are 5 or 6 assignments, written by T. Alekseyevskaya and I. Gelfand, for each of the three levels of GCPM. The assignments consist of detailed explanations of important topics. Some of these topics are chosen from the books; others supplement the books. The assignments also contain samples of important problems together with their solutions, problems for students to solve, and instructions for how to proceed. Material from the new books, Trigonometry and Geometry, is already included in the assignments.

Method of Coordinates
The Method of Coordinates is the method of transferring a geometrical image into formulas, a method for describing pictures by numbers and letters denoting constants and variables. It is fundamental to the study of calculus and other mathematical topics. The systematic development of this method was proposed by the outstanding French philosopher and mathematician Rene Descartes about 350 years ago. It was a great discovery and very much influenced the development not only of mathematics but of other sciences as well. Even today you cannot avoid the method of coordinates. Any image on the computer or TV, every transmission of a picture from one place to another, uses the transformation of visual information into numbers, and vice versa.

Functions and Graphs
Functions and Graphs provides instruction in transferring formulas and data into geometrical form. Thus, drawing graphs is one of the ways to "see" formulas and functions and to observe the way in which a function changes. This skill, to see simultaneously the formula and its geometrical representation, is very important not only for studies in mathematics but for studies of other subjects as well. It will be a skill that will remain with you for the rest of your life, like riding a bicycle, typing, or driving a car. Graphs are widely used in economics, engineering, physics, biology, applied mathematics, and of course, in business.

Back To Main Page
West Easton, PA Algebra 1 Tutor
Find a West Easton, PA Algebra 1 Tutor

...While English is my specialty, math is dear to my heart. I enjoy bringing real life examples and common sense to help connect math to a young student's life. I love the creative thoughts science creates in children's minds.
25 Subjects: including algebra 1, reading, English, grammar

...One of the things I emphasize is learning how to do basic math without the use of calculators. Calculators are great for some problems, like say 23.98342 x .00498, or for graphing multiple equations, but they were not designed for everything. For instance every student should be able to compute 12 x 40 by hand.
11 Subjects: including algebra 1, Spanish, calculus, geometry

...I am also available to tutor for the quantitative section of the GRE. As the entrance exam for graduate school, I scored in the 96th percentile. I have several resources which allowed me to perform to a high level in this exam and I will share these resources with my students.
19 Subjects: including algebra 1, calculus, precalculus, statistics

...I communicate in French every week with my family both verbally and through E-mail (writing). I taught French to the adult community for 8 weeks four years ago and feel very confident that I can help students who would like to practice conversation. I have a strong math background from college ...
7 Subjects: including algebra 1, chemistry, French, geometry

...I've had many students struggle in those areas so I came up with fun ways to help out using games, story telling, etc. I also have many hands on materials to help out with reading and comprehension as well. I have taught kindergarten, first, second, fifth and sixth grade for 17 years.
19 Subjects: including algebra 1, reading, English, writing
Find x and y so A^2 = A

February 20th 2010, 08:10 PM #1
If $A = \begin{pmatrix} x & 4 \\ y & 7 \end{pmatrix}$ (a 2x2 matrix), determine the values of x and y for which $A^2 = A$.

No idea how to start this one. I didn't think it was possible, because I got $A^2 = \begin{pmatrix} x^2 & 16 \\ y^2 & 49 \end{pmatrix}$ while $A = \begin{pmatrix} x & 4 \\ y & 7 \end{pmatrix}$, so how can $A^2$ ever equal $A$? No matter what the x and y values are, 4 and 7 will never equal 16 and 49. Help please. Thank you.

February 20th 2010, 08:39 PM #2
The first thing you should do is review how to multiply matrices, because $A^2$ is NOT equal to what you got. Note: $A^2 = A \times A$.

February 20th 2010, 08:45 PM #3
Duh, haha, thank you. It's getting late here and I shouldn't be doing math when I'm tired. I always forget little things like that. Thank you again.
Results 11 - 20 of 589 - The Computer Journal , 2002 "... ..." - In STACS , 2003 "... We generalize Cuckoo Hashing [23] to d-ary Cuckoo Hashing and show how this yields a simple hash table data structure that stores n elements in (1 + ffl) n memory cells, for any constant ffl ? 0. Assuming uniform hashing, accessing or deleting table entries takes at most d = O(ln ffl ) probes ..." Cited by 47 (4 self) Add to MetaCart We generalize Cuckoo Hashing [23] to d-ary Cuckoo Hashing and show how this yields a simple hash table data structure that stores n elements in (1 + ffl) n memory cells, for any constant ffl ? 0. Assuming uniform hashing, accessing or deleting table entries takes at most d = O(ln ffl ) probes and the expected amortized insertion time is constant. This is the first dictionary that has worst case constant access time and expected constant update time, works with (1 + ffl) n space, and supports satellite information. Experiments indicate that d = 4 choices suffice for ffl 0:03. We also describe variants of the data structure that allow the use of hash functions that can be evaluted in constant time. , 2002 "... In the survivable network design problem (SNDP), the goal is to find a minimum-cost spanning subgraph satisfying certain connectivity requirements. We study the vertex-connectivity variant of SNDP in which the input specifies, for each pair of vertices, a required number of vertex-disjoint paths con ..." Cited by 44 (5 self) Add to MetaCart In the survivable network design problem (SNDP), the goal is to find a minimum-cost spanning subgraph satisfying certain connectivity requirements. We study the vertex-connectivity variant of SNDP in which the input specifies, for each pair of vertices, a required number of vertex-disjoint paths connecting them. - Machine Learning , 2003 "... Relational reinforcement learning is a Q-learning technique for relational state-action spaces. 
It aims to enable agents to learn how to act in an environment that has no natural representation as a tuple of constants. In this case, the learning algorithm used to approximate the mapping between stat ..." Cited by 40 (9 self) Add to MetaCart Relational reinforcement learning is a Q-learning technique for relational state-action spaces. It aims to enable agents to learn how to act in an environment that has no natural representation as a tuple of constants. In this case, the learning algorithm used to approximate the mapping between state-action pairs and their so called Q(uality)-value has to be not only very reliable, but it also has to be able to handle the relational representation of state-action pairs. In this paper we investigate... - Proceedings of the First International Workshop on Mining Graphs, Trees and Sequences , 2003 "... Abstract. Recently, kernel methods have become a popular tool for machine learning and data mining. As most ‘real-world ’ data is structured, research in kernel methods has begun investigating kernels for various kinds of structured data. One of the most widely used tools for modeling structured dat ..." Cited by 35 (0 self) Add to MetaCart Abstract. Recently, kernel methods have become a popular tool for machine learning and data mining. As most ‘real-world ’ data is structured, research in kernel methods has begun investigating kernels for various kinds of structured data. One of the most widely used tools for modeling structured data are graphs. In this paper we study the trade-off between expressivity and efficiency of graph kernels. First, we motivate the need for this discussion by showing that fully general graph kernels can not even be approximated efficiently. We also discuss generalizations of graph kernels defined in literature and show that they are either not positive definite or not very useful. Finally, we propose a new graph kernel based on subtree patterns. 
We argue that while a little more computationally expensive, this kernel is more expressive than kernels based on walks. 1 - IN: WWW 2008. REFEREED TRACK: RICH MEDIA , 2008 "... In this paper, we cast the image-ranking problem into the task of identifying “authority” nodes on an inferred visual similarity graph and propose an algorithm to analyze the visual link structure that can be created among a group of images. Through an iterative procedure based on the PageRank compu ..." Cited by 34 (0 self) Add to MetaCart In this paper, we cast the image-ranking problem into the task of identifying “authority” nodes on an inferred visual similarity graph and propose an algorithm to analyze the visual link structure that can be created among a group of images. Through an iterative procedure based on the PageRank computation, a numerical weight is assigned to each image; this measures its relative importance to the other images being considered. The incorporation of visual signals in this process differs from the majority of largescale commercial-search engines in use today. Commercial search-engines often solely rely on the text clues of the pages in which images are embedded to rank images, and often entirely ignore the content of the images themselves as a ranking signal. To quantify the performance of our approach in a real-world system, we conducted a series of experiments based on the task of retrieving images for 2000 of the most popular products queries. Our experimental results show significant improvement, in terms of user satisfaction and relevancy, in comparison to the most recent Google Image Search results. - IN PROCEEDINGS OF THE 13TH IEEE SYMPOSIUM ON LOGIC IN COMPUTER SCIENCE , 1998 "... We study the expressive power of inflationary fixed-point logic IFP and inflationary fixed-point logic with counting IFP+C on planar graphs. 
We prove the following results: (1) IFP captures polynomial time on 3-connected planar graphs, and IFP+C captures polynomial time on arbitrary planar graphs. ..." Cited by 34 (12 self) Add to MetaCart We study the expressive power of inflationary fixed-point logic IFP and inflationary fixed-point logic with counting IFP+C on planar graphs. We prove the following results: (1) IFP captures polynomial time on 3-connected planar graphs, and IFP+C captures polynomial time on arbitrary planar graphs. (2) Planar graphs can be characterized up to isomorphism in a logic with finitely many variables and counting. This answers a question of Immerman [7]. (3) The class of planar graphs is definable in IFP. This answers a question of Dawar and Grädel [16]. - IEEE Transactions on Pattern Analysis and Machine Intelligence , 2003 "... We introduce a novel optimization method based on semidefinite programming relaxations to the field of computer vision and apply it to the combinatorial problem of minimizing quadratic functionals in binary decision variables subject to linear constraints. ..." Cited by 30 (6 self) Add to MetaCart We introduce a novel optimization method based on semidefinite programming relaxations to the field of computer vision and apply it to the combinatorial problem of minimizing quadratic functionals in binary decision variables subject to linear constraints.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=70837&sort=cite&start=10","timestamp":"2014-04-16T05:58:31Z","content_type":null,"content_length":"33989","record_id":"<urn:uuid:14d9a20f-4cd2-4dae-9faa-b7c216f71f68>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Silver Spring, MD Prealgebra Tutor Find a Silver Spring, MD Prealgebra Tutor ...More specifically, I have successfully completed algebra 1 & 2 classes, calculus 1, 2 & 3, general chemistry, physical chemistry, analytical chemistry, organic chemistry and college physics 1 & 2. I spend some of my free time tutoring classmates to help them understand concepts they are having p... 18 Subjects: including prealgebra, chemistry, calculus, physics Hi, my name is Kay and I earned a bachelor of science degree in Human Development from University of California at Davis. I also have a master of arts degree in Education from California State University, Sacramento. I have experience in tutoring Algebra, Geometry, and science to elementary and high school students, and currently teaching freshman seminar at University of Maryland. 4 Subjects: including prealgebra, algebra 1, elementary (k-6th), study skills ...I am very excited to begin my journey on WyzAnt as a tutor! I am currently a student at University of Maryland with a 3.6 GPA. I am majoring in Middle School Education in Math and Science. 16 Subjects: including prealgebra, calculus, geometry, biology ...I minored in economics and went on to study it further in graduate school. My graduate work was completed at the University of Maryland College Park, where I specialized in international development and quantitative analysis. I currently work as a professional economist. 16 Subjects: including prealgebra, calculus, geometry, statistics ...I also have a minor in mathematics. I have a Masters in Chemistry from the University of Maryland. I have also served as a graduate TA for Organic Chemistry at the college level. 6 Subjects: including prealgebra, chemistry, algebra 1, algebra 2
On the Set of Fixed Points and Periodic Points of Continuously Differentiable Functions International Journal of Mathematics and Mathematical Sciences Volume 2013 (2013), Article ID 929475, 5 pages Research Article On the Set of Fixed Points and Periodic Points of Continuously Differentiable Functions Department of Mathematics, Berks College, Pennsylvania State University, Tulpehocken Road, P.O. Box 7009, Reading, PA 19610-6009, USA Received 6 December 2012; Accepted 16 February 2013 Academic Editor: Paolo Ricci Copyright © 2013 Aliasghar Alikhani-Koopaei. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In recent years, researchers have studied the size of different sets related to the dynamics of self-maps of an interval. In this note we investigate the sets of fixed points and periodic points of continuously differentiable functions and show that typically such functions have a finite set of fixed points and a countable set of periodic points. 1. Introduction and Notation The set of periodic points of self-maps of intervals has been studied for different reasons. The functions with smaller sets of periodic points are more likely not to share a periodic point. Of course, one has to decide what “big” or “small” means and how to describe this notion. In this direction one would be interested in studying the size of the sets of periodic points of self-maps of an interval, in particular, and other sets arising in dynamical systems in general (see [1–5]). For example, typically continuous functions have a first category set of periodic points (see [1, 5]). This result was generalized in [2] for the set of chain recurrent points. At times, even the smallness of these sets in some sense could be useful. 
For example, in [6] we showed that two commuting continuous self-maps of an interval share a periodic point if one has a countable set of periodic points. Schwartz (see [7]) was able to show that if one of the two commuting continuous functions is also continuously differentiable, then it would necessarily follow that the functions share a periodic point. Schwartz's result along with the results given in [6] may suggest that continuously differentiable functions have a countable set of periodic points. This is not true in general. However, in this note we show that typically such functions have a finite set of fixed points and a countable set of periodic points. Here denotes the set of fixed points of . For and we define by induction: The orbit of under is given by the sequence . For , let and be the set of periodic points of order ; that is, Two functions on a given interval are said to be of the same monotone type if both are either strictly increasing or strictly decreasing on that interval. Here, for a partition of the interval , is the length of the largest subinterval of , is the open ball about with radius , denotes the set of interior points of , and is the length of the interval . 2. Continuously Differentiable Functions For , consider and to be the family of all continuous maps and continuously differentiable maps from into itself, respectively. Recall that the usual metrics and on and , respectively, are given by It is well known that the metric spaces and are complete and hence Baire's category theorem holds in these spaces. We say that a typical function in or has a certain property if the set of those functions which does not have this property is of first category in or in . (Some authors prefer using the term generic instead of typical.) It is known that typically continuous self-maps of an interval have -perfect, measure zero sets of periodic points (see [2]). 
Here we show that typically members of have a finite set of fixed points and a countable set of periodic points. Lemma 1. Let and so that is not finite. Then there exists so that and . Proof. Suppose is not finite, and then we can choose a strictly monotone sequence in . Without loss of generality, assume so that for each . Let , and then, for each , . Thus by Rolle’s Theorem we have , so that . Let , then , and , and from the continuity of and it follows that and . Lemma 2. For each and there is a polynomial with . Proof. Let where . Take , , and . It is easy to see that , , and . Let and be a polynomial with . Then is a polynomial and . Thus we have hence . From and it follows that . Lemma 3. Let be a positive integer and . If is a periodic point of order with , then . Proof. We show that . The case follows similarly. Let with for . Since , there exist and so that . Since and , there exists so that and is strictly increasing or strictly decreasing on . Let be strictly increasing on , and then for we have ; hence, , implying that , a contradiction to . On the other hand when is strictly decreasing on , for we have ; hence, implying that , a contradiction to . Lemma 4. For , the set is closed in . Proof. Let and . Then there exists such that and . Without loss of generality, we may assume that , then . Let be arbitrary. Due to the uniform continuity of on there exists a positive integer such that, for , . Since , there exists a positive integer such that, for ,. Choose the positive integer so that for , and let . Then for we have Implying that . We also have Thus we have and . Theorem 5. There exists a residual subset of such that, for every , is finite. Proof. If has infinitely many elements, then from Lemma 1 it follows that there exists a point so that . Let From Lemma 4 the set is closed in . To show that has no interior point, let and , and then, by Lemma 2, there exists a polynomial such that and for some . Let Choose so that for all and take . 
Then ,,, and on , implying that is of first category and is residual. Theorem 6. The set of functions with , , and is a nowhere dense subset of . Proof. Let . It is easy to see that is a closed subset of . To show that is nowhere dense, let and . Choose and such that for . Let , where It is clear that ,,, and . It is easy to see that , , and . Theorem 7. Let , , and be a finite set, and for . Then there exists a function with such that . Proof. Let . Let be the elements of with distinct orbits, that is, for and . It is clear that each is a periodic point of with some period where is a factor of . This suggests that if one can construct a function that is sufficiently close to , and either or , then By repeating this process for each in a finite number of steps we can construct the desired function . Thus for convenience, we assume that, for ,. Consider the partition of obtained by , , and choose a positive less than such that, for each , and each ,, and if , then . Given that, for , , is continuous, and on , we may choose the nondegenerate closed intervals , of length less than and the positive numbers such that(i) is the midpoint of and for ,(ii) is strictly monotone on each for , , where ,(iii)the intervals are mutually disjoint for , ,(iv), for ,(v), where . Let for some , , and be the associated and , respectively. Define We have for , , , , and . By considering four different cases, we construct a function so that, for Cases A and B, and for Cases C and D. From conditions (iv) and (v) we have , and from condition (v) it follows that is of the same monotone type as on each ,. Case A. Let for and .(i)If is strictly increasing on , then by taking we have on and on and the function is also strictly increasing on , and as a result on . Thus .(ii)If is strictly decreasing on , then by taking we have on and on and the function is also strictly decreasing on and as a result, on . Thus . Case B. Let for and . 
If is strictly increasing on , take , and if is strictly decreasing on , take . Then similar to Case A, we may show that for ; hence, . Case C. Let for , for , and .(i)If is strictly increasing on , then by taking we have for and for , and the function is also strictly increasing on . Thus for and for ; hence, .(ii)If is strictly decreasing on , then by taking we have for and for , and the function is also decreasing on . Thus for and for ; hence, . Case D. Let for , for , and . If is strictly increasing on , take , and if is strictly decreasing on , take . Then similar to Case C, we may show that for and for . Thus . Note that is strictly increasing on , so we have Thus for either or , of which in such case , , and for . Thus We have Also Hence . Doing this recursively for each , , we get a function such that . Theorem 8. Typically continuously differentiable self-maps of intervals have a countable set of periodic points. Proof. Take The sets and are first category sets (see Theorems 5 and 6). For from Lemmas 4 and 1 we have that is closed and . Without loss of generality we may assume that is neither a constant function nor a polynomial of first degree, and then by Lemma 2 we may choose a polynomial of degree such that and . Using Theorem 7 we can construct so that and the set , thus and , hence . This implies that for each the set is nowhere dense. Thus is a first category set, so is a residual set, and, for , is countable. The author would like to thank professor David Preiss for his valuable comments and suggestions as well as the anonymous referees for their comments.
1. S. J. Agronsky, A. M. Bruckner, and M. Laczkovich, “Dynamics of typical continuous functions,” Journal of the London Mathematical Society, vol. 40, no. 2, pp. 227–243, 1989.
2. A. A. Alikhani-Koopaei, “On the size of the sets of chain recurrent points, periodic points, and fixed points of functions,” Global Journal of Pure and Applied Mathematics, vol. 4, no. 2, pp. 113–121, 2008.
3. A. A. Alikhani-Koopaei, “On periodic and recurrent points of continuous functions,” in Proceedings of the 28th Summer Symposium Conference, Real Analysis Exchange, pp. 37–40, 2004.
4. A. Alikhani-Koopaei, “On common fixed points, periodic points, and recurrent points of continuous functions,” International Journal of Mathematics and Mathematical Sciences, vol. 2003, no. 39, pp. 2465–2473, 2003.
5. T. H. Steele, “A note on periodic points and commuting functions,” Real Analysis Exchange, vol. 24, no. 2, pp. 781–789, 1998-1999.
6. A. Alikhani-Koopaei, “On common fixed and periodic points of commuting functions,” International Journal of Mathematics and Mathematical Sciences, vol. 21, no. 2, pp. 269–276, 1998.
7. A. J. Schwartz, “Common periodic points of commuting functions,” The Michigan Mathematical Journal, vol. 12, pp. 353–355, 1965.
Percentage of Solution by Mass Concentration: Percentage of Solution by Mass Basic Concept The concentration of a solution is often expressed as the percentage of solute in the total amount of solution. For extremely dilute solutions, the concentration unit parts per million (ppm) is often used. Since the amounts of solute and solution present can be stated in terms of either mass or volume, different types of percent and ppm exist: 1. mass-mass 2. volume-volume 3. mass-volume Feature Overview This module computes the percentage of solute and parts per million (ppm) by mass, or calculates the solute or solvent from a known concentration. concentration = mass solute / mass solution concentration in mass % = (mass solute / mass solution) x 100 concentration in ppm (m/m) = (mass solute / mass solution) x 10^6 1. Calculate the concentration by inputting the solute and solution amounts into the above formulae. The mass of the solution is the sum of the mass of solute and the mass of solvent. 2. Knowing the concentration and one of the three quantities (solute, solvent or solution), calculate the other two: a. knowing solute • solution = solute / concentration • solvent = solution - solute b. knowing solvent • solute = concentration x solvent / (1 - concentration) • solution = solute + solvent c. knowing solution • solute = concentration x solution • solvent = solution - solute User Instructions This is a one-step process: enter the known data and press Calculate to output the unknowns. 1. Select the Percent by Mass link from the front page or the Percent by Mass tab from the Solution module. The Input and Output screen appears. 2. In the Input area, enter the two known quantities with the proper number of significant figures. Select the units associated with the input. 3. Click Calculate to output the answer. 4. The Show Work area on the right shows you step-by-step how your problem has been solved. To start a new problem, click Reset. All Input fields will be cleared. Follow Steps 1-3 again.
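The relations above take only a few lines of code to check. The sketch below is illustrative (function and variable names are ours, not from the module):

```python
# Percent-by-mass and ppm (m/m), following the formulas above.
# All masses must be in the same unit (e.g. grams).

def concentration(mass_solute, mass_solvent):
    """Return (mass fraction, mass %, ppm m/m) of a solution."""
    mass_solution = mass_solute + mass_solvent  # solution = solute + solvent
    frac = mass_solute / mass_solution
    return frac, frac * 100, frac * 1e6

def solute_from_solvent(frac, mass_solvent):
    """Case (b) above: solute = concentration x solvent / (1 - concentration)."""
    return frac * mass_solvent / (1 - frac)

# 5 g of solute in 95 g of solvent: 5% by mass, 50,000 ppm (m/m).
frac, pct, ppm = concentration(5.0, 95.0)
print(pct, ppm)

# Inverting case (b): a 5% solution over 95 g of solvent needs 5 g of solute.
print(solute_from_solvent(0.05, 95.0))
```

Note that the concentration is defined against the mass of the *solution* (solute plus solvent), which is why case (b) carries the 1 - concentration factor in the denominator.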
DA on WSs? So after looking through the wiki I found that Double Attack can proc on the first two hits of a multi-hit WS. My question is: how does this affect damage? Is is a +0%/50%/100% extra damage on a 2-hit, for example? I can't say for sure, but I'm pretty certain it is not % based. For Each Double Attack, I usually see a simple increase of 1 normal attack. So, depending on the mob, on average between 150~200 dmg per Dbl Attack. What Enil said, pretty much. OK, thanks. This seems to makes DA great for one-handed weapons with high damage, and kinda sucky for lower damage weapons. This seems to makes DA great for one-handed weapons with high damage, and kinda sucky for lower damage weapons. Double-Attacking look like a regular extra hit, as is obvious when the damage on a SAM's one-hit WS doesn't nearly double when you get an extra attack. But, of course, if you get an extra hit for free, the higher damage the better. For tp, of course, the puny weapons swing faster and so get more chances to DA, so it all balances out. like all bonuses, DA on WS is best expressed as a % bonus if you're trying to compare X% DA to Y something-else. however, you have to look at the WS to see what % WS DoT you'll get from DA. abstracting away from everyone's favor topic of 2008/2009 (the diminishing returns of DA)... like you said, you can only DA on the first 2 hits of a WS. DAs are like additional hits in multihit WSs, in that a) they have fTP = 1. fTP is what TP affects in some WS; for example, SAM WS have a large fTP modifier, which is what makes hagun good. WS gorgets also affect fTP, by adding .1 to it. fTP only ever applies to the first hit of a WS (edit: which is another way of saying that DAs and other additional hits in WSs have fTP=1. fTP is a multiplier; it not applying is the same as it being 1, since X times 1 is X). b) they are affected by so-called secondary mods, or WSC. 
so, for example, if you double attack on tachi: gekko, the 75% STR mod does affect the additional hit, but hagun/TP do not. in order to see what % bonus you'll get from this, for a 1-hit WS you can multiply the first hit by fTP and then divide it into 1 (the 1 extra hit). so for tachi: gekko w/ 200%TP (100% TP with hagun has the fTP of 200%TP), it's 1 divided by 1.875 equals 53.3%. so, 5% DA would be 2.6%~ WS DoT ...but it's a bit more complicated, because the first hit of many WSs gets both an accuracy bonus (true of all WS, and basically autocaps the ACC for that hit) and a pDIF (what ATT does) bonus. so taking that same tachi: gekko example, we get that 2.6%~ WS DoT only if ATT is capped relative to your target, as is ACC. otherwise, the DA hit is going to hit more weakly than the first (b/c it doesn't have the pDIF bonus) and miss more often (b/c it doesn't get the ACC bonus). i've never seen extensive testing showing that DAs of the first hit don't get these bonuses, but any SAM who knows the numbers will tell you that they've hit 1st hit miss + DA connecting WSs for much lower damage than they would get if the DA'd hit had a pDIF bonus. for a multihit WS, say king's justice, you just add in the additional hits to see the % gain. for example, a 100%TP king's justice has fTP=1 on the first hit, so a DA will give you 1 divided by 3 = 33.3%~ damage. to find the % increase from 5% DA, you'd then take into account that you can DA on the first two hits and then multiply the chances of DAing by the % increase you get from DAing to find your damage. it might be because i'm drunk, but at the moment, factoring in the 2 times to DA part is beyond my effort capacity. as for only being able to DA twice, that's been well tested on ffxionline. some guy did a ton of multihit WS, and figured out how many 0/1/2/3/4/etc hits he should get if you can DA on all hits, 2 hits, 1 hit, etc. after doing enough, you could see that it must be the case that you can only DA twice. 
the goodness of DA does not depend on whether your weapon has high or low base damage... it's X% increase, whether it's increasing a number like 100 or 1000. it is true that 1hand weapons benefit more from base damage increases, so in that sense DA's value might go down a bit (because you may be comparing some amount of DA to, say, some amount of STR, which might raise base damage; if base damage is more powerful for little weapons, DA is ipso facto less valuable relative to STR, though of course a given amount of DA could still beat an insufficient amount of STR). however, it's fTP, pDIF/ACC bonuses on first hit, and number of hits in a WS that mainly influence DA's value. basically, DA does less with huge fTP or lots of native hits in the WS. edit: typos and such. Edited, Apr 4th 2010 1:53am by milich
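The arithmetic in the post can be sketched in a few lines (Python, purely illustrative). It bakes in the post's simplifying assumptions: ATT/ACC capped, DA hits landing with fTP = 1, and the two proc chances treated as independent and additive — the part the post left unfinished:

```python
# Expected % WS damage added by Double Attack, per the post's model.

def da_gain_one_hit_ws(ftp, da_rate):
    """1-hit WS: one DA chance; the extra hit is worth 1/fTP of the first hit."""
    return da_rate * (1.0 / ftp)

def da_gain_multi_hit_ws(native_hits, da_rate):
    """Multi-hit WS with fTP = 1 on the first hit: each DA adds
    1/native_hits of the WS, and DA can proc on the first two hits only."""
    return 2 * da_rate * (1.0 / native_hits)

# Tachi: Gekko at 200% TP (or 100% TP with Hagun): fTP = 1.875.
print(f"{da_gain_one_hit_ws(1.875, 0.05):.1%}")   # ~2.7% WS DoT from 5% DA

# King's Justice at 100% TP: 3 native hits, fTP = 1 on the first.
print(f"{da_gain_multi_hit_ws(3, 0.05):.1%}")     # ~3.3%
```

This reproduces the post's numbers: 1/1.875 ≈ 53.3% per proc on Gekko, so 5% DA is worth roughly 2.7% WS damage there, versus roughly 3.3% on a 3-hit WS with two proc chances.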
give a number x and y coordinates Join Date Mar 2012 Rep Power Is your question how to convert a number in the range 0-63 into the row and column values to index a 2D array? To go the other way, given row and column: row*row_length+column Use some algebra on that equation to get the other one. Look at using % and / If you don't understand my response, don't ignore it, ask a question. Join Date Mar 2012 Rep Power hi sorry about the delay it was late last night when i posted and just checked in the morning to see if i had any replies, yeah so my question to make it clearer. just say i have colums and rows 4 * 4 and you populate it with the numbers 1-16 . in the first row and colum is the number 1 and the last row and colum is the number 16. so just say the x y coordinates for the number 1 are 0,0 . and the the x y coordinate for the number 16 are 4, 4. i was just wondering if i take a random number how could i automatically give it x y coordinates using java Join Date Sep 2008 Voorschoten, the Netherlands Blog Entries Rep Power Are the numbers in that matrix in proper ascending order? kind regards, cenosillicaphobia: the fear for an empty beer glass Join Date Mar 2012 Rep Power Using the formula I suggested you compute, you'd subtract 1 from the numbers to make them 0 based. If you don't understand my response, don't ignore it, ask a question. Join Date Sep 2008 Voorschoten, the Netherlands Blog Entries Rep Power The problem is easy then (aamof, you don't need a matrix at all). For any number i in an n*n matrix, that number is stored in row #(i-1)/n and column #(i-1)%n. That's all there is to it. kind regards, cenosillicaphobia: the fear for an empty beer glass Join Date Mar 2012 Rep Power great guys thanks for help i appreciate, sorry if question was a bit dumb i just new to java Join Date Mar 2012 Rep Power Your y axis is the opposite direction of how it is used in java programming (java is top down). 
You will have to convert the java coordinates to your coordinates. If you don't understand my response, don't ignore it, ask a question. Join Date Mar 2012 Rep Power my code to assign x y coordinates to positions on the grid is not working properly any ideas why this would be ? x = (number-1)%9; y = (number-1)/9; Can you show some examples? Have several values of number and show the x,y values that are generated by your code. Also show how the squares are numbered. Where are square 1 and square 2? If you don't understand my response, don't ignore it, ask a question. Join Date Mar 2012 Rep Power my code is below, basically i want to take 2 numbers from the user to create a grid, then take a random number from this grid and give it corresponding x y coordinates.

import java.util.Scanner;
import java.util.Random;

public class baby {
    public static void main(String args[]) {
        System.out.println("enter first number for room length: ");
        Scanner scan = new Scanner(System.in);
        int num1 = scan.nextInt();
        System.out.println("enter second number for room length: ");
        int num2 = scan.nextInt();
        int room_length = num1 * num2;
        System.out.println("The room length is :" + num1 + " x " + num2 + " = " + room_length);
        Random furniture = new Random();
        int table = furniture.nextInt(room_length);
        int chair = furniture.nextInt(room_length);
        System.out.println("table random number is: " + table);
        System.out.println("chair random number is: " + chair);
        int x_coordinate = (table - 1) % num1;
        int y_coordinate = (table - 1) / num2;
        System.out.println("x coordinate is: " + x_coordinate + " y coordinate is :" + y_coordinate);
    }
}

Please execute the program and copy and post here the output showing the value of the number and the values of x and y for several values. Add comments to the output that show what the x, and y values should be. How are the squares in the grid numbered? Where is square # 1 and square #2? If you don't understand my response, don't ignore it, ask a question.
Join Date Mar 2012 Rep Power i want grid to be populated as so so 1 xy coordinates to be 0,3 2 xy coordinates to 1,3 3 xy coordinates to be 2,3 and so on examples of program run enter first number for room length: enter second number for room length: The room length is :4 x 4 = 16 table random number is: 4 x coordinate is: 3 y coordinate is :0 enter first number for room length: enter second number for room length: The room length is :4 x 4 = 16 table random number is: 9 x coordinate is: 0 y coordinate is :2 enter first number for room length: enter second number for room length: The room length is :4 x 4 = 16 table random number is: 7 x coordinate is: 2 y coordinate is :1 You want> square 1 (top left corner) xy coordinates to be 0,3 That is not what you said in post#3??? In your post you needed to add comments to the output to show what the values of x,y should be. 7 gives x coordinate: 2 y coordinate: 1 >>> What are the correct values? Take a piece od paper and draw a grid with the squares. In each square write the square number and the x,y values it should have. Then look at the drawing and find the formulas that will generate the desired values from the square number. Post the square contents for a couple of squares. Say the top left and the bottom right squares. If you don't understand my response, don't ignore it, ask a question. Join Date Mar 2012 Rep Power 1 = 0,3 2= 1,3 3 = 2,3 4 = 3,3 5 = 0,2 6 = 1,2 7 = 2,2 8 = 3,2 9 = 0,1 10 = 1,1 11 = 2,1 12 = 3,1 13 = 0,0 14 = 1,0 15= 2,0 16 = 3,0 Join Date Mar 2012 Rep Power thats all i can post for tonight, i'll reply to any futher post tomorrow Now you need to find the formulas to compute those values. Your x,y values appear to be in column, row order which is the reverse of what I have seen used before. Normally x is the row number and y is the column number. Are you reversing their common usage also? You want> x is the column and y is the row? If you don't understand my response, don't ignore it, ask a question.
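Putting the thread's formulas together with the required flip (Java counts rows top-down, while the asker's y grows upward from the bottom), a short sketch — written in Python here just for brevity — reproduces the asker's table:

```python
# Map a 1-based square number n in a cols-by-rows grid to (x, y), where
# x is the 0-based column and y is the row counted from the BOTTOM,
# matching the table posted above: 1 -> (0,3), 5 -> (0,2), 16 -> (3,0).

def to_xy(n, cols, rows):
    x = (n - 1) % cols                # column: the (i-1) % n from the thread
    row_top_down = (n - 1) // cols    # row: the (i-1) / n from the thread
    y = rows - 1 - row_top_down       # flip, since Java counts rows top-down
    return x, y

for n in (1, 2, 4, 5, 16):
    print(n, to_xy(n, 4, 4))
# 1 (0, 3)
# 2 (1, 3)
# 4 (3, 3)
# 5 (0, 2)
# 16 (3, 0)
```

Note the posted Java code divides by num2 where it takes the remainder by num1; in a square 4 x 4 room the mismatch is masked, but for a non-square grid both the remainder and the division must use the number of columns.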
Westside, GA Geometry Tutor Find a Westside, GA Geometry Tutor I have a BS and MS in Physics from Georgia Tech and a Ph.D. in Mathematics from Carnegie Mellon University. I worked for 30+ years as an applied mathematician for Westinghouse in Pittsburgh. During that time I also taught as an adjunct professor at CMU and at Duquesne University in the Mathematics Departments. 10 Subjects: including geometry, calculus, physics, algebra 1 I am a junior Mathematics major at LaGrange College. I will be graduating in 2015. I have been doing private math tutoring since I was a sophomore in high school. 9 Subjects: including geometry, algebra 1, algebra 2, precalculus ...I speak fluent Spanish, too. However, most pupils have problems in maths, so this is what I tutored most. If anybody may be interested in learning German - you are right here. 29 Subjects: including geometry, English, Spanish, reading ...In classes, I am often a student that others look to for advice. My success as a student speaks for itself- I graduated as valedictorian of my high school class, and scored a perfect 36 on my ACT. I am heavily invested in being successful in my endeavors, and tutoring is no different. 28 Subjects: including geometry, chemistry, physics, calculus ...In 8 or less hourly sessions, I will teach your child the skills needed to excel in math throughout their academic life and beyond. I can teach the fundamentals of elementary math which consist of: Add, Subtract, Multiply, Divide Numbers Basic Geometry Basic Probability Collect, Analyze & ... 
18 Subjects: including geometry, accounting, ASVAB, finance
Wittgenstein Prizes 2000 awarded to the Anthropologist Gingrich and the Mathematician Markowich 5 START Prizes for outstanding young researchers Excerpts from the Press Release of the FWF, July 3, 2000 On Monday (3 July) Austrian Science Minister Gehrer will award this year's START and Wittgenstein Prizes. The anthropologist and ethnologist Andre Gingrich and the mathematician Peter Markowich, both from the University of Vienna, will each receive 20 million ATS for their future research. The Wittgenstein Prize is the most prestigious and financially the highest recognition for Austrian scientists. In addition, five young researchers will be awarded START prizes, receiving financial support for six years at a level of between 2.0 and 2.5 million ATS per year. Peter Markowich (University of Vienna) researches applied mathematics. The equations he is studying, which use the "language" of the fundamental work of Leibniz, Newton and Maxwell, describe dynamic physical processes from atomic to galactic levels. Markowich is working on basic methods as well as on concrete modelling problems and computer simulation of physical phenomena. He has taught and researched abroad for many years and returned to Austria two years ago. He hopes to use his prize money to establish Vienna as an internationally recognized centre for applied mathematics. The research area of Peter Markowich is Nonlinear Analysis and Partial Differential Equations. These equations, which use the language of the fundamental works of Leibniz, Newton and Maxwell, describe dynamical physical processes ranging from atomistic to galactic dimensions. Important examples are the Boltzmann equation of gas kinetics, the Schroedinger equation of quantum physics, Einstein's field equations of the theory of relativity and the Navier-Stokes equations of fluid dynamics. Partial differential equations are a central research area in modern mathematical analysis as well as in modern mathematical physics.
Peter Markowich is working on the methodological basis, on concrete modelling problems and on issues of the numerical computer simulation of physical phenomena employing differential equation models. For example, he contributed to practical design problems for highly integrated semiconductor devices, to the understanding of basic questions on entropy techniques for kinetic equations and diffusion processes and to the rigorous analysis of the connection of classical and quantum mechanics. Markowich spent many years at universities and research centers abroad. Two years ago he returned to Austria and is now more than ever active in international research projects. His dream is to establish Vienna as internationally known center for Applied Mathematics. ┃ O.Univ.Prof.Dr. Peter MARKOWICH (*December 16, 1956) ┃ ┃ Institut für Mathematik, Universität Wien ┃ ┃ Strudelhofgasse 4, A-1090 Wien ┃ ┃ Tel.: 01/4277/50611 / Fax: 01/4277/9506 ┃ ┃ e-mail: peter.markowich@univie.ac.at ┃ ┃ Wittgenstein Prize ┃ ┃ Area of Research: Applied Mathematics ┃
Calculus: Derivatives of Inverse Functions

About this Lesson
• Type: Video Tutorial
• Length: 12:13
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 131 MB
• Posted: 11/18/2008

This lesson is part of the following series:
Calculus (279 lessons, $198.00)
Calculus Review (48 lessons, $95.04)
Calculus: Inverse and Hyperbolic Functions (14 lessons, $19.80)
Calculus: Inverse Functions & Logarithmic Diff (6 lessons, $10.89)

In this lesson, we will review the properties of inverse functions and learn about finding the derivative of an inverse function using a formula. For invertible functions, the inverse of f(x) is continuous if f(x) is continuous, is differentiable if f(x) is differentiable, is increasing if f(x) is monotonically increasing, and is decreasing if f(x) is monotonically decreasing. There is a technique that you can use to find the derivative of the inverse of a function without even having to find the inverse itself. Once you have the derivative of a function's inverse via this formula, you can evaluate it at a point without ever writing the inverse down explicitly. Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, Calculus. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/product/calculus. The full course covers limits, derivatives, implicit differentiation, integration or antidifferentiation, L'Hôpital's Rule, functions and their inverses, improper integrals, integral calculus, differential calculus, sequences, series, differential equations, parametric equations, polar coordinates, vector calculus and a variety of other AP Calculus, College Calculus and Calculus II topics. Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D.
at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut College. He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of America. In 2006, Reader's Digest named him in the "100 Best of America". Prof. Burger is the author of over 50 articles, videos, and books, including the trade book Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and the textbook The Heart of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals, including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the theory of continued fractions. Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.

About this Author
Thinkwell (2174 lessons)
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...

Recent Reviews

derivative of the inverse function ~ james95
the video never played.when i clicked the mouse to play the lesson nothing happended.After paying $2.97 for the lesson I received nothing in return.I'm very disapointed! jim.

~ LizzK31415
This was well explained, but I would need at least one more example.

Elementary Functions and Their Inverses
Calculus of Inverse Functions
Derivatives of Inverse Functions
Page [1 of 2]

Okay, so now let’s take a look at the calculus of inverse functions. What do we tend to look at with calculus, taking derivatives and so forth and so on? So if I give you a function, a basic question is how can you find the derivative of its inverse? So suppose you had a function and you know you can take the inverse of it, it’s invertible, how can you take the derivative of its inverse? Here are some basic facts about a function that has an inverse. So suppose you have a function and it actually has an inverse. Well then, if the function is continuous, then the inverse will be continuous. If the function is very, very smooth, there’s no bumps in it, then, in fact, the inverse will be smooth. Because remember all we’re doing, when you think of the graph, is just flipping it over the y = x line. If a function is increasing, then its inverse will be increasing. If a function is decreasing, its inverse will be decreasing. So these are all properties that are sort of easy to see just by thinking about that flip along the diagonal. But what about taking derivatives? Let’s think about this.
What I really want to find out here, my mission, is what is the derivative? So given an invertible function, f(x), I want to find the derivative of its inverse. So I’ll write it this way: d/dx [f⁻¹(x)]. Now, why am I writing it that way? Well, if you use the prime notation, how would you do that? So you’d have (f⁻¹)′(x). So, yes, you could do that. So if you like that, that’s good, but there’s so many little things dangling there. It looks like maybe f⁻¹¹, and I was a little bit naughty. So I’ll just use the d, dx to mean to take the derivative of the inverse function. If you like the prime, use the prime, but I won’t. So how would you do that? So you’re given a function, you know it has an inverse and you want to find the derivative of its inverse. I don’t know. There’s only one thing I know, and that is the connection that links a function to its inverse, and that’s this: f(f⁻¹(x)) = x. That’s the only relationship I know that has f⁻¹ in it, for sure, because that’s the definition. That’s one of the properties that’s required to be an inverse. So what if I took this whole thing and differentiated it? So let’s differentiate this whole thing with respect to x. So now, if I differentiate this with respect to x, what’s going to happen? I have to differentiate this side with respect to x, and then differentiate this side with respect to x. Well, this side with respect to x is pretty easy. The derivative of x is 1, so that’s not a problem. But what about this? This looks so complicated. Well, actually, it’s nothing more than a chain rule, because I have an inside, namely, that, and then I have an outside, namely, f of stuff. So what I’ve got to do is use a chain rule to take the derivative of this side, and then the derivative of this side is just going to be 1, taking the derivative with respect to x. So let’s see what we get. If we try this, what I see is – well, what’s the derivative of f of junk? It’s f′ of junk, using the chain rule now, so the junk is f⁻¹(x).
But I’m not done yet. I’ve got to multiply that by the derivative of the inside, and that is the derivative of the inverse function. So what I have so far is f′(f⁻¹(x)) times d/dx [f⁻¹(x)]. Now this may look a little bit weird, so let’s just pause and make sure that we’re all okay on this. I’m taking the derivative of this thing with respect to x. But we have to realize that, in fact, this has an inside and outside. There’s all this junk inside here, all this blob, and then we have the outside, the f. So I've got an inside and then an outside, so I use the chain rule. The chain rule says to take the derivative of something like this, I take the derivative of the outside with respect to the blob, and then multiply that by the derivative of the inside with respect to x. So if I’ve got f of junk, the derivative of that is f′ of junk and I just copy the stuff there. Multiply that by the derivative of the inside, the derivative of the junk, with respect to x. So that’s just the derivative of the inside. And what does that equal? Well, it equals the derivative of this, which is 1. And this is fantastic, because now I can actually solve for this. Remember I was asked to find this, that was my mission in life, and now I see it. It’s dangling right in there. All I’ve got to do, in fact, is divide by this. So if I divide by f′(f⁻¹(x)), I can find out what the derivative of an inverse is. So what do I see? What I see is d/dx [f⁻¹(x)] = 1 / f′(f⁻¹(x)). And there’s the formula. So how do you find the derivative of the inverse function? It’s just the reciprocal, one over, the derivative of the original function. This is f′, so the derivative of the good old-fashioned function, and then you plug in the inverse function. Just take the inverse function and plug it in. And that gives you the derivative of the inverse function. Now, there’s something that you should be cautious of here. When is this thing actually undefined? It’s undefined whenever this thing is zero.
So I have to make sure that this thing is never zero; otherwise, that derivative doesn’t make sense. So I have to make sure that f′(f⁻¹(x)) ≠ 0, because I can’t divide by 0, but that’s clear. So as long as we make sure that the bottom is not zero, then this is the derivative. So let’s actually try a little example so you can see this thing in action. So here’s a good little example. Let’s find the derivative of f⁻¹, and let’s find the derivative at this point: x = π, where – and I’ll tell you a few things. First of all, I’ll tell you what f is. I’ve got to tell you something. But I’ll tell you what f is. f(x) = 2x + cos x. And I’ll even tell you something else. I’ll tell you what f⁻¹ is at π. I’ll tell you it’s π/2. So this is also given. So let’s make sure we understand the question. Here’s this function and what I’d like for you to do is I’d like for you to find the derivative not of the function, but I want you to find the derivative of the inverse, evaluated at x = π. And I’ll help you out and tell you that the value of the inverse at π is π/2. So I’ll give you that much, but I want to now find the derivative, I want to find the slope of the tangent at that point. Well now, how can we proceed? Well, first of all, there’s one thing we should check. Does this function have an inverse? Is this an invertible function? Well, to check that, we’ve got to take a derivative. So let’s do a little sidebar. So we take the derivative of the function – I see 2, and then what’s the derivative of cosine? It’s –sine. And so I see that f′(x) = 2 – sin x. Now, what are the values for sin x? Well, sin x wiggles. It wiggles between 1 and –1. So the biggest this thing could be is 1 and the smallest it could be is –1. If I make this thing as big as it can be, namely, 1, this is 2 – 1, which is 1. If I make this thing as small as it can be, then it’s –1 and I see 2 minus –1, which is 3. No matter what, this number is going to be between 1 and 3, namely, it’s always positive.
This is an increasing function. If it’s an increasing function, we know it has an inverse. And, in fact, we could even do more. We could actually check to make sure that, in fact, this point makes sense. How could we check to see if this really is a true statement? Well, all we have to do is reverse the roles, because if f⁻¹(π) = π/2, then f(π/2) = π. They must undo each other. So what is f(π/2)? If I plug in π/2, this would be 2(π/2), that’s just π, and I’d see cos(π/2). And cos(π/2) = 0. So all I see is just π, and that’s what I have here. So everything is okay. Now, how could I actually now find the derivative? Well, I’ll just use this fact here. So if I use that fact, I could immediately report the news. I see that the derivative of f⁻¹(x) equals one over – and now I've got to take the derivative, which I already – oh, I don’t have it anymore, but I can do it for you live. Here we go, here’s the derivative. It’s 2 +, and now the derivative of cosine is –sine, but I don’t plug in x, I plug in f⁻¹. So the derivative is 1/(2 – sin(f⁻¹(x))), and I want to know the value at this point. So let’s plug in π. And if I plug in π, what do I see? Well, I see that at π, this equals 1/(2 – sin(f⁻¹(π))). But what is f⁻¹(π)? We know that’s π/2. So we’re given that fact, so let’s put that in right now. So this equals 1/(2 – sin(π/2)), and what’s sin(π/2)? You think back to the graph. That’s actually a high point, so that’s actually 1, so what I see is 1/(2 – 1). So, in fact, this equals 1. So the derivative of the inverse function of this at π turns out to equal 1. So the slope of the tangent line of the inverse function at π turns out to be 1. So, using this formula, you can actually find inverses of functions and the derivative of the inverse of the function. Okay, I’ll see you at the next lecture.
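The spoken derivation above can be summarized compactly; this summary is an editorial addition, not part of the lecture transcript:

```latex
% Derivative of an inverse function, plus the lecture's worked example.
\[
  \frac{d}{dx}\, f^{-1}(x) \;=\; \frac{1}{f'\!\left(f^{-1}(x)\right)},
  \qquad \text{provided } f'\!\left(f^{-1}(x)\right) \neq 0 .
\]
\[
  f(x) = 2x + \cos x,\qquad f'(x) = 2 - \sin x,\qquad f^{-1}(\pi) = \tfrac{\pi}{2}
\]
\[
  \left.\frac{d}{dx}\, f^{-1}(x)\right|_{x=\pi}
  \;=\; \frac{1}{2 - \sin(\pi/2)}
  \;=\; \frac{1}{2 - 1}
  \;=\; 1 .
\]
```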
Homework Help

Posted by Robel Taame on Saturday, November 23, 2013 at 4:40pm.

A garden will be made up of a parallelogram, a rectangle and a triangle. The garden must have an area of 500 m². Draw two possible gardens. Determine the dimensions of each part of the garden, and justify your choice of these dimensions.

• Grade 8 Math - Steve, Saturday, November 23, 2013 at 7:11pm
a rectangle 10x10
a right triangle with legs 20 and 20
a parallelogram with base 20 and height 10
area: 100+200+200
I'm sure you can come up with other scenarios.

• Grade 8 Math - Aryan, Tuesday, December 10, 2013 at 7:27am
A rectangle: Length = 20 m, Width = 10 m, Area = 200 m²
An isosceles triangle: Base = 25 m, Height = 16 m, Area = 200 m²
A parallelogram: Base = 25 m, Height = 4 m, Area = 100 m²
Total Area = 200 m² + 200 m² + 100 m² = 500 m².

• Grade 8 Math - Tiffany, Tuesday, December 10, 2013 at 7:30am
Oh, so all we need to do is make combinations so that all the areas add up to 500 m². Thank you Steve and Aryan!

• Grade 8 Math - Steve, Tuesday, December 10, 2013 at 7:31am
Yes, thank you!

• Grade 8 Math - Aryan, Tuesday, December 10, 2013 at 7:32am
Yes, no problem at all. You're very welcome, and thank you!
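The arithmetic in the thread is easy to check mechanically. A small sketch (not part of the thread; it takes the rectangle as 20 × 10 so that its area matches the stated 200 m²):

```cpp
// Area formulas for the three garden pieces, checked against the
// dimensions given in the thread: rectangle 20 x 10, isosceles
// triangle base 25 height 16, parallelogram base 25 height 4.
double rect_area(double length, double width) { return length * width; }
double tri_area(double base, double height)   { return base * height / 2.0; }
double para_area(double base, double height)  { return base * height; }

// Total of the three parts; should be exactly 500.
double garden_total() {
    return rect_area(20, 10) + tri_area(25, 16) + para_area(25, 4);
}
```

Each part contributes 200 + 200 + 100 = 500, matching the required area.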
CodeF00 [ Coding ]

Fixed Point Class
A fixed point math class for C++. It supports all combinations which add up to a native data type (8.8/16.16/24.8/etc.). The template parameters are the number of bits to use for the integer and fractional portions; invalid combinations will yield a compiler error, and the current implementation makes use of a Boost static assert to make this more readable. It should be a nice drop-in replacement for native float types. Here's an example usage:

typedef numeric::Fixed<16, 16> fixed;
fixed f;

This will declare a 16.16 fixed point number. Operators are provided through the use of boost::operators. Multiplication and division are implemented in free functions named numeric::multiply and numeric::divide, which use boost::enable_if to choose the best option. If a larger type is available, it will use the accurate and fast scaled math version. If there is not a larger type, then it will fall back on the slower multiply and emulated divide (which unfortunately has less precision). This system allows the user to specialize the multiplication and division as needed. With this design, on usual x86/amd64 systems, fixed types as large as numeric::Fixed<32, 32> are supported (though the 64-bit fixed types have a lower precision divide).

Option Parser Class
Here is a C++ option parser. It was written using standard C++, so it should compile on any standards compliant C++ compiler. It supports GNU style "double dash" options which may have 0 or 1 operand. Any option may or may not be required. It is fairly easy to use and pretty robust overall. It also has nice --help support.

Rotation Operators
A coworker of mine complained that C++ doesn't define any ROL or ROR operators, namely rotate left and rotate right. So here are two quick and dirty template functions which do a little bit twiddling to get it done for anyone who is interested. It is template based, so it should work for all integer types, signed or unsigned.
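The page links to the implementation rather than inlining it, so here is a hedged sketch of what such rotate helpers typically look like. The names `rol`/`ror` and the restriction to unsigned types are assumptions; the real bit_ops.h may differ (the page says it also handles signed types).

```cpp
#include <cstdint>
#include <limits>

// Sketch of a rotate-left helper for unsigned integer types.
// Reducing count modulo the bit width avoids undefined behavior
// from shifting by the full width or more.
template <class T>
T rol(T value, unsigned count) {
    const unsigned bits = std::numeric_limits<T>::digits;
    count %= bits;
    if (count == 0) return value;
    return static_cast<T>((value << count) | (value >> (bits - count)));
}

// Rotate right: the mirror image of rol.
template <class T>
T ror(T value, unsigned count) {
    const unsigned bits = std::numeric_limits<T>::digits;
    count %= bits;
    if (count == 0) return value;
    return static_cast<T>((value >> count) | (value << (bits - count)));
}
```

For example, `rol<uint8_t>(0x81, 1)` moves the high bit around to the bottom, giving 0x03. (C++20 later standardized this exact operation as `std::rotl`/`std::rotr` in `<bit>`.)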
bit_ops.h

Matrix Class
This Matrix class is the basis for the maps in my RPG engine. It is copyable, resizable, and uses the C-like [] operators, which are properly bounds-checked and may throw std::out_of_range. For example:

Matrix<int> m(2, 3);
m[1][2] = 10;

will create a matrix object with a width of 2 and a height of 3 and assign 10 to the last valid element. The current version is row major only, but a future version will support column major as well.

Binary Grep
I found myself wanting to be able to quickly search a large binary file for a certain byte pattern the other day, only to be disappointed and find nothing. So bgrep was born. Its usage is simple: you specify one or more bytes on the command line and it will search stdin for that byte pattern, reporting the results as it goes. For example:

./bgrep 01 02 03 04 < my_file

This will search my_file for the byte pattern 01 02 03 04 (which of course is 04030201 when viewed as a little-endian 32-bit value). You may search for patterns of any length of 1 byte or more.

Properties
The concept of properties is something that C++ lacks, yet other languages have. In general, they are not needed, but sometimes they can be useful in making code more clear. So I developed a template based solution to implementing properties with the least amount of intrusive code in the class it's being applied to. Check it out here. I've also provided an example of its usage.

uint128 Class
Because I wanted to support larger fixed point values, I decided to make a uint128 class which acts like a normal unsigned integer type in pretty much every way. All the basic mathematical operations are there, and it can be output to a std::ostream as well. Check it out here.
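For illustration, here is one hypothetical way a bounds-checked `m[x][y]` can be implemented: the outer `operator[]` returns a row proxy so both indices get range-checked. The member names are invented, and the real Matrix class linked above may be structured differently.

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Sketch of a bounds-checked, copyable matrix with C-like [] access.
// m[x][y] checks x against the width and y against the height, and
// throws std::out_of_range on a bad index (as the page describes).
template <class T>
class Matrix {
public:
    Matrix(std::size_t width, std::size_t height)
        : width_(width), height_(height), data_(width * height) {}

    // Proxy for one column index; its operator[] checks the second index.
    class Row {
    public:
        Row(Matrix& m, std::size_t x) : m_(m), x_(x) {}
        T& operator[](std::size_t y) {
            if (y >= m_.height_) throw std::out_of_range("Matrix: y index");
            return m_.data_[y * m_.width_ + x_];  // row-major storage
        }
    private:
        Matrix& m_;
        std::size_t x_;
    };

    Row operator[](std::size_t x) {
        if (x >= width_) throw std::out_of_range("Matrix: x index");
        return Row(*this, x);
    }

private:
    std::size_t width_, height_;
    std::vector<T> data_;
};
```

With `Matrix<int> m(2, 3)`, the element `m[1][2]` maps to the last slot of the 6-element backing vector, matching the "last valid element" behavior described above, while `m[2][0]` throws.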
User gowers
website: gowers.wordpress.com
member for 4 years, 6 months; seen Mar 16 at 23:40; profile views 29,457
Mathematics professor at Cambridge

Jun 19, comment on "Are there very strongly pseudorandom permutations?": I now think it may be possible to do something by composing polynomially many Feistel permutations.
Jun 19, answered "Are there any good websites for hosting discussions of mathematical papers?"
Jun 19, comment on "Are there very strongly pseudorandom permutations?": Yes. I was vague about it, but the precise requirement I would like is that $k$ should be at most a polynomial function of $n$ (or perhaps a very slightly superpolynomial function).
Jun 18, comment: Good point -- thanks for the tip.
Jun 18, comment: I have now found a source that seems to suggest that the Luby-Rackoff construction won't give hardness greater than $2^n$. So it looks as though a different idea would be needed. But maybe there are some different ideas out there.
Jun 18, asked "Are there very strongly pseudorandom permutations?"
Jun 13, comment on "Are there any very hard unknots?": I drew the "quotient" knot and the picture has been sitting on my desk for about a month. At first it looked hard to simplify, but then I saw that one could make a "hole" in the middle and take a chunk of knot and pass it up through the hole and back down again. This kind of global untwisting would, I think, have to be part of any unknotting procedure of the kind I fantasize about. At some point I might make the knot out of string and see whether I can indeed untie it fairly straightforwardly starting with that move.
12, awarded Nice Answer
6, awarded Favorite Question
24, awarded Popular Question
May 9, comment on "Are there any very hard unknots?": Thank you for this example. It's quite interesting as it is in some sense a "product" of smaller knots. I tried replacing the bundles of strands (most of the time four strands) by a single strand and obtained a picture of a knot that I can't instantly see to be the unknot, though I did find a local way of reducing the number of crossings. If this "quotient" knot is not the unknot, then it's a very interesting example.
30, awarded Good Answer
Mar 29, comment on "What can be proved about the Ramanujan conjecture using elementary means?": I don't mind complex analysis, but I'm wondering whether a "non-structural" proof is possible. Without saying precisely what I mean by that, I would say that modular forms are on the wrong side of the boundary.
Mar 29, revised: added 164 characters in body
Mar 29, revised: added 619 characters in body
Mar 29, comment: Ah, I see the point now. OK, I'll go back and add a condition.
Mar 29, comment: I'm taking $1-q^{a_r}$, and not $1+(-q)^{a_r}$.
Mar 29, awarded Nice Question
Mar 29, asked "What can be proved about the Ramanujan conjecture using elementary means?"
Mar, awarded Popular Question
, 2002
"... We develop and analyze methods for computing provably optimal maximum a posteriori (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of tree-structured distributions, we obtain an upper bound ..."
Cited by 132 (8 self)
We develop and analyze methods for computing provably optimal maximum a posteriori (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of tree-structured distributions, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is tight if and only if all the tree distributions share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original distribution. Next we develop two approaches to attempting to obtain tight upper bounds: (a) a tree-relaxed linear program (LP), which is derived from the Lagrangian dual of the upper bounds; and (b) a tree-reweighted max-product message-passing algorithm that is related to but distinct from the max-product algorithm. In this way, we establish a connection between a certain LP relaxation of the mode-finding problem, and a reweighted form of the max-product (min-sum) message-passing algorithm.

- IEEE Transactions on Information Theory, 2002
"... We develop an approach for computing provably exact maximum a posteriori (MAP) configurations for a subclass of problems on graphs with cycles. By decomposing the original problem into a convex combination of tree-structured problems, we obtain an upper bound on the optimal value of the original ..."
Cited by 107 (11 self)
We develop an approach for computing provably exact maximum a posteriori (MAP) configurations for a subclass of problems on graphs with cycles. By decomposing the original problem into a convex combination of tree-structured problems, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is met with equality if and only if the tree problems share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original problem. Next we present and analyze two methods for attempting to obtain tight upper bounds: (a) a tree-reweighted message-passing algorithm that is related to but distinct from the max-product (min-sum) algorithm; and (b) a tree-relaxed linear program (LP), which is derived from the Lagrangian dual of the upper bounds. Finally, we discuss the conditions that govern when the relaxation is tight, in which case the MAP configuration can be obtained. The analysis described here generalizes naturally to convex combinations of hypertree-structured distributions.

, 2001
"... We present a tree-based reparameterization framework that provides a new conceptual view of a large class of algorithms for computing approximate marginals in graphs with cycles. This class includes the belief propagation or sum-product algorithm [39, 36], as well as a rich set of variations and ext ..."
Cited by 102 (22 self)
We present a tree-based reparameterization framework that provides a new conceptual view of a large class of algorithms for computing approximate marginals in graphs with cycles. This class includes the belief propagation or sum-product algorithm [39, 36], as well as a rich set of variations and extensions of belief propagation.
Algorithms in this class can be formulated as a sequence of reparameterization updates, each of which entails re-factorizing a portion of the distribution corresponding to an acyclic subgraph (i.e., a tree). The ultimate goal is to obtain an alternative but equivalent factorization using functions that represent (exact or approximate) marginal distributions on cliques of the graph. Our framework highlights an important property of BP and the entire class of reparameterization algorithms: the distribution on the full graph is not changed. The perspective of tree-based updates gives rise to a simple and intuitive characterization of the fixed points in terms of tree consistency. We develop interpretations of these results in terms of information geometry. The invariance of the distribution, in conjunction with the fixed point characterization, enables us to derive an exact relation between the exact marginals on an arbitrary graph with cycles, and the approximations provided by belief propagation, and more broadly, any algorithm that minimizes the Bethe free energy. We also develop bounds on this approximation error, which illuminate the conditions that govern their accuracy. Finally, we show how the reparameterization perspective extends naturally to more structured approximations (e.g., Kikuchi and variants [52, 37]) that operate over higher order cliques.

- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005
"... Computer vision is currently one of the most exciting areas of artificial intelligence research, largely because it has recently become possible to record, store and process large amounts of visual data. While impressive achievements have been made in pattern classification problems such as handwr ..."
Cited by 49 (4 self)
Computer vision is currently one of the most exciting areas of artificial intelligence research, largely because it has recently become possible to record, store and process large amounts of visual data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition and face detection, it is even more exciting that researchers may be on the verge of introducing computer vision systems that perform scene analysis, decomposing image input into its constituent objects, lighting conditions, motion patterns, and so on. Two of the main challenges in computer vision are finding efficient models of the physics of visual scenes and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms for computer vision and scene analysis. We review exact techniques and various approximate, computationally efficient techniques, including iterative conditional modes, the expectation maximization (EM) algorithm, the mean field method, variational techniques, structured variational techniques, Gibbs sampling, the sum-product algorithm and "loopy" belief propagation. We describe how each technique can be applied in a model of multiple, occluding objects, and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.

"... Abstract—We consider families of Markov random fields (MRFs) on an undirected graph using the exponential family representation. In earlier work [13] we proved that if the statistic that defines a family of MRFs is positively correlated, then the entropy is monotone decreasing in the exponential par ..."
Abstract—We consider families of Markov random fields (MRFs) on an undirected graph using the exponential family representation.
In earlier work [13] we proved that if the statistic that defines a family of MRFs is positively correlated, then the entropy is monotone decreasing in the exponential parameters. In this paper we address the converse, specifically within the context of the Ising model. The statistic for an edge is viewed as positive or negative as it favors similar or dissimilar values at the endpoints of the edge. We show that for an acyclic Ising model with no self statistics, the statistic is positively correlated regardless of the polarity of the edges. We further show that for a cyclic Ising model, the statistic is positively correlated if and only if the statistic is not frustrated; and that the entropy is monotone decreasing in the exponential parameters if and only if the statistic is not frustrated.

I. PREAMBLE
In this paper we pick up the discussion started in [13] and continued in [14]; namely, examining the relationship between the statistic defining a family of Markov random fields (MRFs) and the behavior of information-theoretic quantities within that family of MRFs. In particular, we address the question of whether monotonicity of entropy over the family of MRFs implies that the statistic is positively correlated. The converse was shown in [13]. The interest in such questions arises from both engineering [15] and social science [16], [7] concerns. Let G = (V, E) be a graph. A family of exponential distributions is specified by a vector statistic t = (tij) defined on the endpoints of the edges E of the graph. That is, for a given image x = {xi : i ∈ V} and each edge {i, j} ∈ E, the function tij : Xi × Xj → R determines the contribution of the pair (xi, xj) to the probability of x. We say that X is Markov with respect to G, in that conditioning on a cutset renders separated subsets independent of one another [17].
The entire family of MRFs based on t is generated by introducing an exponential parameter θ = (θ_ij) where for each edge {i, j}, θ_ij scales the sensitivity of the distribution p(x) = p(x; θ) to the function t_ij. Specifically, if X is an MRF based on t with exponential parameter θ, the probability of an image x is p(x; θ) = exp { ∑ θ_ij t_ij(x_i, x_j) − Φ(θ) }.

, 2005 "... A new class of upper bounds on the log partition function ..."

, 2003 We develop and analyze methods for computing provably optimal maximum a posteriori (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of tree-structured distributions, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is met with equality if and only if the tree distributions share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original distribution. Next we develop two approaches to attempting to obtain tight upper bounds: (a) a tree-relaxed linear program (LP), which is derived from the Lagrangian dual of the upper bounds; and (b) a tree-reweighted message-passing algorithm that is related to but distinct from the max-product (min-sum) algorithm. Finally, we discuss the conditions that govern when the relaxation is tight, in which case the MAP configuration can be obtained.
The analysis described here generalizes naturally to convex combinations of hypertree-structured distributions.

, 2012 "... estimation for discrete graphical models ..."
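For concreteness, here is a toy numeric illustration of the exponential family defined in the abstract above. The edge set, edge polarities, and θ values are made up for the demo; the 3-cycle with one negative edge is frustrated in the sense the authors use.

```python
# Toy illustration of p(x; theta) = exp(sum_ij theta_ij * t_ij(x_i, x_j) - Phi(theta)),
# here for a 3-cycle Ising model with t_ij(x_i, x_j) = x_i * x_j and x_i in {-1, +1}.
# Edge polarities and theta values are invented for this sketch.
import math
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]
theta = {(0, 1): 0.5, (1, 2): 0.5, (0, 2): -0.5}   # one negative edge -> frustrated cycle

def statistic_sum(x):
    """sum over edges of theta_ij * t_ij(x_i, x_j)."""
    return sum(theta[(i, j)] * x[i] * x[j] for (i, j) in edges)

# Log partition function Phi(theta), by exhaustive enumeration of all 2^3 images.
Phi = math.log(sum(math.exp(statistic_sum(x)) for x in product((-1, 1), repeat=3)))

def p(x):
    return math.exp(statistic_sum(x) - Phi)

# Sanity check: the probabilities of all images sum to one.
assert abs(sum(p(x) for x in product((-1, 1), repeat=3)) - 1) < 1e-12
```

For larger graphs this brute-force Φ(θ) is intractable, which is exactly why the papers listed here study bounds and message-passing approximations.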
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1481716","timestamp":"2014-04-23T08:58:49Z","content_type":null,"content_length":"33538","record_id":"<urn:uuid:80c0c74e-b224-464b-a999-c13b7e4ec200>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Little Neck Algebra 1 Tutor Find a Little Neck Algebra 1 Tutor ...However, the manner in which the information is transmitted and use of various motivational techniques can bring about dramatic improvement and make learning and understanding much more interesting and fun. For example, the use of technology, images, reading and writing strategies, stories, one-... 18 Subjects: including algebra 1, reading, biology, algebra 2 ...I am proficient and certified in MS Access, as well as Visual Basic, SQL Server, MySQL, Crystal Reports and Web Development Technologies.I am an Information Technology (IT) Professional with technical, hands-on expertise in Full Project Lifecycle Applications Development, Business Process Re-engi... 12 Subjects: including algebra 1, GED, algebra 2, elementary math ...The position required the ability to explain a variety of math subjects to high school students, which ranged from basic math to advance topics like Pre-Calculus and AP Calculus. During my undergraduate career at UIC, I taught general chemistry and classical physics as a supplemental instructor ... 13 Subjects: including algebra 1, chemistry, calculus, physics ...A recent student's SAT Math score improved by 100 points! I have been working as a freelance proofreader for 15 years. I am very familiar with MLA rules and can help fix grammar and citations in various topics and at various levels of English proficiency. 35 Subjects: including algebra 1, English, reading, writing ...I consider myself incredibly proficient in Access and can teach others to be so as well. I have been using VBA for 4 years. I have taken 1.5 years of classwork in VB in highschool. 15 Subjects: including algebra 1, calculus, algebra 2, finance
{"url":"http://www.purplemath.com/little_neck_ny_algebra_1_tutors.php","timestamp":"2014-04-19T23:27:24Z","content_type":null,"content_length":"24063","record_id":"<urn:uuid:8465f55f-e349-4d53-b326-26189955e835>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Sharkovskii's theorem
Every natural number can be written as $2^{r}p$, where $p$ is odd, and $r$ is the maximum exponent such that $2^{r}$ divides the given number. We define the Sharkovskii ordering of the natural numbers in this way: given two odd numbers $p$ and $q$, and two nonnegative integers $r$ and $s$, then $2^{r}p\succ 2^{s}q$ if
1. $p>1$, $q>1$, and either $r<s$, or $r=s$ and $p<q$; or
2. $p>1$ and $q=1$; or
3. $p=q=1$ and $r>s$.
This defines a linear ordering of $\mathbb{N}$, in which we first have $3,5,7,\dots$, followed by $2\cdot 3$, $2\cdot 5,\dots$, followed by $2^{2}\cdot 3$, $2^{2}\cdot 5,\dots$, and so on, and finally $2^{n+1},2^{n},\dots,2,1$. So it looks like this: $3\succ 5\succ\cdots\succ 3\cdot 2\succ 5\cdot 2\succ\cdots\succ 3\cdot 2^{n}\succ 5\cdot 2^{n}\succ\cdots\succ 2^{2}\succ 2\succ 1.$
Sharkovskii’s theorem. Let $I\subset\mathbb{R}$ be an interval, and let $f:I\rightarrow\mathbb{R}$ be a continuous function. If $f$ has a periodic point of least period $n$, then $f$ has a periodic point of least period $k$, for each $k$ such that $n\succ k$.
Sharkovskii's ordering, Sharkovsky's theorem
Mathematics Subject Classification
Added: 2002-12-14 - 08:56
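The ordering is easy to implement mechanically. The following Python sketch is ours, not part of the PlanetMath entry: it decomposes each number as $2^{r}p$ with $p$ odd and applies the three cases of the definition.

```python
# Sketch of the Sharkovskii ordering: n = 2^r * p with p odd.

def decompose(n):
    """Return (r, p) with n = 2**r * p and p odd."""
    r = 0
    while n % 2 == 0:
        n //= 2
        r += 1
    return r, n

def precedes(m, n):
    """True if m comes before n in the Sharkovskii ordering (m ≻ n)."""
    if m == n:
        return False
    r, p = decompose(m)
    s, q = decompose(n)
    if p > 1 and q > 1:          # neither is a power of two: order by r, then odd part
        return r < s or (r == s and p < q)
    if p > 1 and q == 1:         # a non-power of two precedes every power of two
        return True
    if p == 1 and q == 1:        # powers of two come last, in decreasing order
        return r > s
    return False                 # a power of two never precedes a non-power of two

# The chain 3 ≻ 5 ≻ 7 ≻ 6 ≻ 10 ≻ 12 ≻ 8 ≻ 4 ≻ 2 ≻ 1 from the entry:
chain = [3, 5, 7, 6, 10, 12, 8, 4, 2, 1]
assert all(precedes(a, b) for a, b in zip(chain, chain[1:]))
```

So, for example, a continuous map with a point of least period 3 has points of every least period, since 3 precedes everything else.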
{"url":"http://planetmath.org/sharkovskiistheorem","timestamp":"2014-04-20T05:46:05Z","content_type":null,"content_length":"63697","record_id":"<urn:uuid:fd27c6a9-7035-461c-ad20-e7cdd16d511d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Definitions for Differentialˌdɪf əˈrɛn ʃəl This page provides all possible meanings and translations of the word Differential Random House Webster's College Dictionary dif•fer•en•tialˌdɪf əˈrɛn ʃəl(adj.) 1. of or pertaining to difference or diversity. 2. constituting a difference; distinguishing; distinctive. 3. exhibiting or depending upon a difference or distinction. 4. pertaining to or involving the difference of two or more motions, forces, etc. Category: Physics 5. pertaining to or involving a mathematical derivative or derivatives. Category: Math 6. (n.) a difference or the amount of difference, as in rate, cost, degree, or quality, between things that are comparable. 7. Category: Machinery Ref: differential gear. 8. Math. a function of two variables that is obtained from a given function, y = f(x), and that expresses the approximate increment in the given function as the derivative of the function times the increment in the independent variable, written as dy = f′(x)dx; any generalization of this function to higher dimensions. Category: Math 9. Physics. the quantitative difference between two or more forces, motions, etc.: a pressure differential. Category: Physics Origin of differential: 1640–50; < ML Princeton's WordNet 1. derived function, derivative, differential coefficient, differential, first derivative(noun) the result of mathematical differentiation; the instantaneous change of one quantity relative to another; df(x)/dx 2. differential(noun) a quality that differentiates between similar things 3. differential gear, differential(adj) a bevel gear that permits rotation of two shafts at different speeds; used on the rear axle of automobiles to allow wheels to rotate at different speeds on curves 4. differential(adj) relating to or showing a difference "differential treatment" 5. differential(adj) involving or containing one or more derivatives "differential equation" 1. differential(Noun) the differential gear in an automobile etc 2.
differential(Noun) a qualitative or quantitative difference between similar or comparable things 3. differential(Noun) an infinitesimal change in a variable, or the result of differentiation 4. differential(Adjective) of, or relating to a difference 5. differential(Adjective) dependent on, or making a difference; distinctive 6. differential(Adjective) having differences in speed or direction of motion 7. differential(Adjective) of, or relating to differentiation, or the differential calculus Webster Dictionary 1. Differential(adj) relating to or indicating a difference; creating a difference; discriminating; special; as, differential characteristics; differential duties; a differential rate 2. Differential(adj) of or pertaining to a differential, or to differentials 3. Differential(adj) relating to differences of motion or leverage; producing effects by such differences; said of mechanism 4. Differential(noun) an increment, usually an indefinitely small one, which is given to a variable quantity 5. Differential(noun) a small difference in rates which competing railroad lines, in establishing a common tariff, allow one of their number to make, in order to get a fair share of the business. The lower rate is called a differential rate. Differentials are also sometimes granted to cities 6. Differential(noun) one of two coils of conducting wire so related to one another or to a magnet or armature common to both, that one coil produces polar action contrary to that of the other 7. Differential(noun) a form of conductor used for dividing and distributing the current to a series of electric lamps so as to maintain equal action in all 1. Differential A differential is a device, usually, but not necessarily, employing gears, which is connected to the outside world by three shafts, chains, or similar, through which it transmits torque and rotation. 
The gears or other components make the three shafts rotate in such a way that aω1 + bω2 = ω3, where ω1, ω2 and ω3 are the angular velocities of the three shafts, and a and b are constants. Often, but not always, a and b are equal, so ω3 is proportional to the sum of ω1 and ω2. Except in some special-purpose differentials, there are no other limitations on the rotational speeds of the shafts, apart from the usual mechanical/engineering limits. Any of the shafts can be used to input rotation, and the others to output it. See animation of a simple differential in which a and b are equal. The shaft rotating at speed ω3 is at the bottom-right of the image. In automobiles and other wheeled vehicles, a differential is the usual way to allow the driving roadwheels to rotate at different speeds. This is necessary when the vehicle turns, making the wheel that is travelling around the outside of the turning curve roll farther and faster than the other. The engine is connected to the shaft rotating at angular velocity ω3. The driving wheels are connected to the other two shafts, and a and b are equal. If the engine is running at a constant speed, the rotational speed of each driving wheel can vary, but the sum of the two wheels' speeds can not change. An increase in the speed of one wheel must be balanced by an equal decrease in the speed of the other.
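The constraint described above can be checked with a toy calculation. This sketch assumes the common open-differential case a = b = 1/2, so the carrier (engine-side) speed is the average of the two wheel speeds; the numbers are invented for illustration.

```python
# Toy check of the differential constraint: with a = b = 1/2,
# carrier speed w3 = (w1 + w2) / 2, so the wheel-speed sum is fixed
# whenever the engine speed is fixed.

def carrier_speed(w1, w2, a=0.5, b=0.5):
    """Angular velocity of the third (engine-side) shaft."""
    return a * w1 + b * w2

# Straight-line driving: both wheels at 100 rpm -> carrier at 100 rpm.
assert carrier_speed(100, 100) == 100
# In a turn the outer wheel speeds up and the inner wheel slows by the
# same amount; the carrier speed is unchanged.
assert carrier_speed(120, 80) == carrier_speed(100, 100)
```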
{"url":"http://www.definitions.net/definition/Differential","timestamp":"2014-04-21T10:03:22Z","content_type":null,"content_length":"31463","record_id":"<urn:uuid:d8275cdc-3885-41e5-bbae-d253faf7f105>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Find The Point Of Diminishing Returns (x,y) For ... | Chegg.com
Find the point of diminishing returns (x, y) for the function R(x) = -0.6x^3 + 4.7x^2 + 2x, where R(x) represents revenue (in thousands of dollars) and x represents the amount spent on advertising (in thousands of dollars). The point of diminishing returns occurs at approximately .
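The usual reading of "point of diminishing returns" is the inflection point of R, where R''(x) changes sign from positive to negative. The following is our sketch of that computation, not Chegg's posted solution.

```python
# For R(x) = -0.6x^3 + 4.7x^2 + 2x:
#   R'(x)  = -1.8x^2 + 9.4x + 2
#   R''(x) = -3.6x + 9.4
# so R''(x) = 0 at x = 9.4/3.6, where concavity switches from up to down.

def R(x):
    return -0.6 * x**3 + 4.7 * x**2 + 2 * x

x = 9.4 / 3.6          # root of R''(x)
y = R(x)
print(round(x, 2), round(y, 2))   # approximately (2.61, 26.58)
```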
{"url":"http://www.chegg.com/homework-help/questions-and-answers/find-point-diminishing-returns-x-y-function-r-x-06x-3-47x2-2x-r-x-represents-revenue-thous-q3876107","timestamp":"2014-04-24T06:41:09Z","content_type":null,"content_length":"20673","record_id":"<urn:uuid:e04a4a80-d383-4f45-9c1b-fc63d4cb8b5d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
Absolute value inequality
September 18th 2009, 07:57 PM #1
Hey there, my first post here. Anyway, I need to solve this inequality by giving the answer in interval notation. Here is the question: |2/x - 3| < 5. The denominator x is just throwing me off =S Any hints or insights would be greatly appreciated!
September 18th 2009, 07:59 PM #2
When you take reciprocals in an inequality, the signs reverse...
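Following that hint — split into the cases x > 0 and x < 0 before taking reciprocals — gives the candidate answer (-∞, -1) ∪ (1/4, ∞). This is our own quick numerical check of that interval, not an answer posted in the thread.

```python
# Brute-force check that |2/x - 3| < 5 holds exactly on (-inf, -1) U (1/4, inf):
# from -5 < 2/x - 3 < 5 we get -2 < 2/x < 8; for x > 0 this forces x > 1/4,
# and for x < 0 it forces x < -1 (the inequality flips when multiplying by x).

def satisfies(x):
    return x != 0 and abs(2 / x - 3) < 5

def in_candidate(x):
    return x < -1 or x > 0.25

# Sample many points on both sides of both boundaries.
xs = [k / 100 for k in range(-500, 501) if k != 0]
assert all(satisfies(x) == in_candidate(x) for x in xs)
# The boundary points themselves fail, as expected for a strict inequality.
assert not satisfies(-1) and not satisfies(0.25)
```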
{"url":"http://mathhelpforum.com/pre-calculus/103047-absolute-value-inequality.html","timestamp":"2014-04-18T14:00:03Z","content_type":null,"content_length":"35052","record_id":"<urn:uuid:9ec0562f-6768-4687-a335-62f7d36ca960>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
Ecological Archives E093-093-A2
Mark R. Lesser and Stephen T. Jackson. 2012. Making a stand: five centuries of population growth in colonizing populations of Pinus ponderosa. Ecology 93:1071–1081.
Appendix B. Age-height offset analysis.
To correct tree ages to a germination age we used a subset of 80 trees, selected across all four of the study populations, that were cored twice on the same side but at different heights. We developed a regression equation to explain the difference in height based on age. To account for heteroscedasticity the data were transformed and entered into a regression equation. Based on this model we used the fitted equation to solve for age. For every 10 cm of height difference between cores the model predicted that 5 years be added to the age of a tree.
FIG. B1. Height regressed against age for 80 ponderosa pine. Fitted regression line is shown along with 50 and 95% confidence intervals. Transformed, observed data are shown with open circles.
[Back to E093-093]
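The reported rule of thumb (5 years per 10 cm of coring height, i.e., 0.5 years per cm) can be applied directly. This sketch uses only that rule; the function and variable names are ours, not the appendix's.

```python
# Applying the appendix's correction: add 5 years per 10 cm of coring height
# to convert a ring count at core height into an estimated germination age.

YEARS_PER_CM = 5 / 10   # from "for every 10 cm ... add 5 years"

def germination_age(ring_count, core_height_cm):
    """Estimated age at ground level for a core taken at core_height_cm."""
    return ring_count + YEARS_PER_CM * core_height_cm

assert germination_age(100, 30) == 115.0   # core 30 cm up -> add 15 years
```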
{"url":"http://esapubs.org/archive/ecol/E093/093/appendix-B.htm","timestamp":"2014-04-20T03:11:17Z","content_type":null,"content_length":"2917","record_id":"<urn:uuid:ab18447d-3318-42ca-993a-616f9b9aa530>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Logic problems 10-25-2007 #1 Registered User Join Date Sep 2007 Logic problems I am trying to design a recursive method to figure out a certain problem. I know what I have to do but I am not really sure how to go around implementing it. Here it is: The user inputs n locations, say 4, and the distance between each location. For example The first line is the number of locations there are, in this case 4. The next three lines are the distance between locations. Line 2 corresponds to the distance of location 1 to loc. 2, 3, and 4 - so between 1 and 2 there is a distance of 9, between 1 and 3 there is a distance of 6, and between 1 and 4 there is a distance of 10. Line 3 corresponds to the distance between 2 - 3 and 2 - 4, and line 4 is the distance between 3 - 4. Now my goal is to find the shortest possible route between all four locations, starting with location 1 and ending back at location 1, hitting all of the locations inbetween. For example, it may look like "1 3 2 4 1" which would mean start at loc 1, travel to loc 3, from 3 travel to 2, from 2 travel to 4, and from 4 back to one. Here is what I know I must do: find the sum of every combination and then compare the sums to see which one is the smallest. My problem is that I am not sure how to code this - how do I put together every possible combination? 1 -> 2 -> 3 -> 4 -> 1 1 -> 3 -> 4 -> 2 -> 1 1 -> 4 -> 2 -> 3 -> 1 1 -> 2 -> 4 -> 3 ->1 I am using lists and sets if that is of any help. I think I know how to do this if I was able to use two dimensional arrays and n was a constant, but I can't quite figure it out recursively using sets and lists. This is done by Dijkstra's algorithm IIRC. It uses a stack to find the shortest route. Look it up. The actual implementation of the graph can be done in a number of ways. The simplest is to use a multimap, but there are other alternatives. It is too clear and so it is hard to see. A dunce once searched for fire with a lighted lantern. 
Had he known what fire was, He could have cooked his rice much sooner. I've been reading through articles I have found on Google, but the code is not making much sense to me. Does anyone know of any good tutorials that talk about Dijkstra's algorithm in terms of C++ lists and sets, not graphs? You need to figure out how to implement a graph using lists and sets. It's very easy with a multimap, since a graph is just a mapping of vertexes to edges, and edges are just a value and destination pair. If you can't use a multimap, you can implement one yourself. I asked about this also in another forum and here is what one person had to say: Actually, Dijkstra's algorithm gives you the shortest paths between a given node and all the other nodes. It doesn't give you any path that visits all the nodes. What you are asking for is a solution of the traveling salesman problem, which is an infamous NP-complete problem. Nobody knows whether an efficient algorithm exists. If there are only 4 locations, you can just check all 6 orderings of locations 2, 3, and 4, but that's not going to work if you have many locations. (With 10 locations, there are already more than 360,000 orderings, and after that, each new location increases the number of orderings by at least a factor of 10.) You should ask yourself why you need the absolute shortest path. If you can settle for one that's not quite optimal, there may be algorithms that find short paths without guaranteeing that they find the shortest one. If I remember correctly, this is true at least when the locations are points on a plane and the distance is just normal distance, but beyond that, I don't know. In any case, this is a graph problem, so all relevant examples and code you're likely to encounter will use the language of graphs.
What about something along these lines?

// int n = number of places the user has entered
possibleCombinations = factorial(n);
for(int i = 0; i < possibleCombinations; i++)
    for(int x = 0; x < n; x++)
        // some code here

So if there were 5 places, 5! is 120, so the first loop would execute 120 times for 120 total possible combinations, and the inside loop would execute 5 times for each time the outer loop executed, since each combination has 5 places. Now my problem still is how do I put together all of these combinations? I found an article through Google talking about the traveling salesman problem, and it mentioned using std::next_permutation(). This seems to be exactly what I need? Sure. For each permutation, you have to check if it is a valid path. std::next_permutation() will keep permuting until it eventually comes back to the order you started with; it will actually return false when it realizes it has come back to the order you started with, so you just have to keep checking the return value. Yeah, the other forum poster was right, I did not read your post carefully enough. next_permutation will work if you disallow revisiting each destination. Sometimes though, the least path may involve doubling back. For my assignment I have to calculate the best route using a recursive method. I've been playing around with next_permutation() and do not see how I could implement it into a recursive method - is it even possible? Is there anywhere I can see the actual code that next_permutation calls? Update: I found this recursive method for permutations, which seems to do what I need, but I don't understand the code: // recvperm.cpp : Defines the entry point for the console application.
#include "stdafx.h"
#include <string>
#include <iostream>
using namespace std;

void string_permutation( std::string& orig, std::string& perm )
{
    if( orig.empty() )
    {
        cout << perm << endl;
        return;
    }
    for(int i=0;i<orig.size();++i)
    {
        std::string orig2 = orig;
        orig2.erase(i,1);
        std::string perm2 = perm;
        perm2 += orig.at(i);
        string_permutation(orig2,perm2);
    }
}

int main()
{
    std::string orig="ABCDE";
    std::string perm;
    string_permutation(orig,perm);
    return 0;
}

How does this look to you? And what is "stdafx.h"?
As far as I know, "stdafx.h" is a header file automatically generated when you use Visual C++. And as far as I can see from the code, you can just get rid of that line if you want to.
Thanks, that fixed it. I've been working to convert that method from using strings to lists and sets. Originally I had a ton of errors, but now I've managed to fix a few of them. However, I seem to be stuck now. Here is my code:

#include <string>
#include <iostream>
#include <set>
#include <list>
using namespace std;

void string_permutation(set<int>& orig, list<int>& perm ) { // line 10
    if( orig.empty() )
    {
        list<int>::iterator theIterator;
        for(theIterator = perm.begin(); theIterator != perm.end(); theIterator++)
            cout << *theIterator << "\n";
    }
    for(int i = 0; i < orig.size(); i++) // line 21
    {
        list<int> orig2 = orig;
        orig2.erase(i,1); // line 25
        list<int> perm2 = perm;
        perm2.push_front(orig.find(i)); // line 29
        string_permutation(orig2,perm2); // line 31
    }
}

int main()
{
    int myInts[] = {1, 3, 2, 4};
    set<int> orig(myInts, myInts+4);
    list<int> perm;
    string_permutation(orig, perm);
    return 0;
}

And the compiler errors:

21 permutation.cpp [Warning] comparison between signed and unsigned integer expressions
23 permutation.cpp conversion from `std::set<int, std::less<int>, std::allocator<int> >' to non-scalar type `std::list<int, std::allocator<int> >' requested
25 permutation.cpp invalid conversion from `int' to `std::_List_node_base*'
25 permutation.cpp initializing argument 1 of `std::_List_iterator<_Tp>::_List_iterator(std::_List_node_base*) [with _Tp = int]'
25 permutation.cpp invalid conversion from `int' to `std::_List_node_base*'
25 permutation.cpp initializing argument 1 of `std::_List_iterator<_Tp>::_List_iterator(std::_List_node_base*) [with _Tp = int]'
29 permutation.cpp no matching function for call to `std::list<int, std::allocator<int> >::push_front(std::_Rb_tree_const_iterator<int>)'
note C:\Applications\Dev-Cpp\include\c++\3.4.2\bits\stl_list.h:753 candidates are: void std::list<_Tp, _Alloc>::push_front(const _Tp&) [with _Tp = int, _Alloc = std::allocator<int>]
31 permutation.cpp invalid initialization of reference of type 'std::set<int, std::less<int>, std::allocator<int> >&' from expression of type 'std::list<int, std::allocator<int> >'
10 permutation.cpp in passing argument 1 of `void string_permutation(std::set<int, std::less<int>, std::allocator<int> >&, std::list<int, std::allocator<int> >&)'

First, fix line 23. You're trying to dump a set into a list. Furthermore, you're trying to pass that on as an argument to a function which isn't defined for that argument. Last, your lists aren't defined to have pointers/iterators as elements.
I fixed line 23.
Last, your lists aren't defined to have pointers/iterators as elements.
Sorry, what does this mean? The only place I use pointers/iterators is at line 13 and 14 (the for loop inside of if orig.empty()) and that compiles fine.
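This is not code from the thread, but a Python sketch of the brute-force approach being discussed, with itertools.permutations playing the role of std::next_permutation. Only the distances from location 1 (9, 6, 10) appear in the original post; the values for the remaining pairs are made up here.

```python
# Brute-force traveling-salesman search over all orderings of locations
# 2..n, starting and ending at location 1.
from itertools import permutations

dist = {
    (1, 2): 9, (1, 3): 6, (1, 4): 10,   # from the original post
    (2, 3): 7, (2, 4): 5, (3, 4): 8,    # assumed values for illustration
}

def d(a, b):
    """Symmetric lookup of the distance between two locations."""
    return dist[(a, b)] if (a, b) in dist else dist[(b, a)]

def shortest_tour(n):
    """Shortest route 1 -> ... -> 1 visiting every location once."""
    best = None
    for perm in permutations(range(2, n + 1)):
        route = (1,) + perm + (1,)
        length = sum(d(route[i], route[i + 1]) for i in range(len(route) - 1))
        if best is None or length < best[0]:
            best = (length, route)
    return best

print(shortest_tour(4))   # -> (28, (1, 2, 4, 3, 1)) with the assumed distances
```

As the quoted reply warns, this checks (n-1)! orderings, so it is only practical for small n.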
{"url":"http://cboard.cprogramming.com/cplusplus-programming/95031-logic-problems.html","timestamp":"2014-04-17T08:51:11Z","content_type":null,"content_length":"98194","record_id":"<urn:uuid:352160c3-b610-4315-97eb-cd4613074096>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
Total cross sections for the ionization and excitation of atoms and molecules by electron impact are among the essential sets of data needed in a wide range of applications, such as modeling plasmas for plasma processing of semiconductors, designing mercury-free fluorescent lamps, assessing the efficiency of ion gauges, normalizing mass spectrometer output, diagnosing plasmas in magnetic fusion devices, and modeling radiation effects on materials. In addition to total ionization cross sections, sometimes differential ionization cross sections are needed. We present below total ionization cross sections for a large number of atoms/molecules, and singly differential ionization cross sections (energy distribution of ejected electrons) for H, He, and H[2]. Excitation cross sections for H, He, and Li are also presented. A. Total Ionization Cross Sections The accuracy of standard theoretical methods for ionization cross sections depends on both the quality of the wave functions and the collision theory used. Many theories work well at high incident energies T but few can be trusted at low incident energies, particularly near the ionization threshold. Also, theories that require continuum wave functions are difficult to use on molecules because calculating continuum wave functions for molecules suitable for ionization cross sections is in general a very difficult task, particularly for polyatomic molecules. To date, major sources of ionization cross sections for molecules were experiments and theories--often semiempirical--that worked well only on limited types of targets and/or limited ranges of T. The theory used in this ionization cross section database is specifically designed for electron-impact ionization. It is versatile and can provide cross sections for atoms as well as molecules. The theory, referred to as the Binary-Encounter-Bethe (BEB) model [1], combines the Mott cross section with the high-T behavior of the Bethe cross section.
The theory does not use any fitting parameters, and provides a simple analytic formula for the ionization cross section per atomic/molecular orbital. The total ionization cross section for a target is obtained by summing these orbital cross sections. Four orbital constants--the binding energy B, the orbital kinetic energy U, the electron occupation number N, and a dipole constant Q--are needed for each orbital, and the first three constants are readily available from the ground-state wave function of the target atom or molecule. The basic formula for the ionization cross section per orbital is [1]:

σ[BEB] = [S / (t + (u + 1)/n)] { (Q ln t / 2)(1 − 1/t^2) + (2 − Q)[1 − 1/t − ln t/(t + 1)] },    (1)

where t = T/B, u = U/B, S = 4πa[0]^2 N (R/B)^2, a[0] = 0.529 18 Å, and R = 13.6057 eV. The dipole constant Q is defined in terms of the continuum dipole oscillator strength df/dW, where W is the kinetic energy of the ionized electron. When df/dW is unknown, one can put Q = 1 as a further approximation. The constant n on the right-hand side (RHS) of Eq. (1) is used for ion targets and valence orbitals of large atoms as indicated at the end of this section. Unless noted otherwise, use n = 1. The BEB cross section is not very sensitive to the accuracy of the orbital constants used except for the value of the lowest B. A vertical ionization energy is recommended for the lowest B. We used an experimental value if it is known for a target. Otherwise theoretical values were used. The B values marked "OVGF" were calculated using the outer-valence Green's function method, those marked "CCSD" using the coupled cluster single double excitations, and those marked "CCSD(T)" using the CCSD plus triple excitations included perturbatively. Orbital constants obtained from the Hartree-Fock or similar wave functions are adequate. The resulting cross sections are accurate to 5 % to 20 % from threshold to T ~ 1 keV in most targets presented in this database.
As was demonstrated in a series of publications [1-11] the BEB model (with Q = 1 in most cases) was found to reproduce known ionization cross sections accurately for small atoms and a variety of large and small molecules from H[2] to SF[6]. Also, the BEB model works well for radicals as well as stable molecules. In many cases, the theory agrees with experiments in peak values within 10 %. Unlike most theories, the BEB model is reliable near the threshold as well. Production of a doubly charged ion or two singly-charged molecular fragments resulting from inner-shell ionization can be estimated by doubling the ionization cross sections of individual atomic/ molecular orbitals whose binding energies B exceed the double ionization energy (approximately B > 40 eV) [6]. Inclusion of such contributions in the BEB theory is indicated by "Yes" in the atomic/ molecular orbital constant column marked "DblIon," e.g., for C[2]F[6] and C[3]F[8] [7]. Through this WWW presentation, a user can (a) download orbital constants and ionization cross sections for an atom or molecule of interest as a text (ASCII file), (b) calculate the total ionization cross section at a given incident energy T online, (c) look at the graph for an atom or molecule with comparisons between the BEB model and experiments (data points for experiments, curves for theory unless otherwise specified), (d) zoom into a segment of the BEB cross section between specific incident energy limits, e.g., 5 eV to 100 eV, or (e) calculate the energy distribution of ejected electrons for H, He, and H[2] online. The molecules table and atoms table provide access to the data for those molecules and atoms to which the BEB model has been applied. This database is updated frequently by adding new atoms and molecules as their cross sections become available. The BEB model is a simplified version of the BED model described in Section B below. For singly-charged molecular ions, n = 2 is used on the RHS of Eq. (1) [Example: H[2]^+]. 
Also, if the Mulliken population analysis of a valence molecular orbital indicates the leading component (> 50 %) consists of a specific atomic orbital with the principal quantum number n > 2, then this value of n is used in Eq. (1) [Example: CS[2]]. For atoms, if the principal quantum number (pqn) of an orbital is 3 or greater, then use n = pqn in Eq. (1). If pqn is 1 or 2, then use n = 1. B. Singly Differential Ionization Cross Section (Ejected Electron Energy Distribution) The singly differential cross section (SDCS) for ejecting an electron with kinetic energy W from an atomic/molecular orbital is given in the BED model by an expression in terms of w = W/B, df(w)/dw = the continuum dipole oscillator strength for ejecting an electron of kinetic energy W by photoionization, and N[i] = ∫ (df/dw) dw. For H, He, and H[2], known df/dw values have been fitted to a simple power series of y = B/(W + B), and the values of the coefficients a, b, ... and the corresponding SDCS are presented in the web pages of individual targets for T and W values requested by the user, or a table of SDCS for a T value and preselected values of W. The resulting dσ/dW is normalized to match the total ionization cross section. Sample comparisons between experimental data and theory for SDCS [9] are available online. As is true for any theory, the BEB model for both total and differential cross sections has its limits. For instance, the BEB model presented here is a nonrelativistic theory, and therefore should not be used for T > 10 keV. For a relativistic extension of the BED/BEB model see [12]. C. Electron-Impact Excitation of Atoms A scaling formula, called BE scaling, converts plane-wave Born cross sections (σ[PWB]) for electron-impact excitation of neutral atoms to highly accurate cross sections at all incident energies T, although the original Born cross sections are reliable only at high T [13]. The BE scaled cross section σ[BE] is defined as:

σ[BE] = σ[PWB] × T / (T + B + E),
Scaling with just the excitation energy E, called E scaling, converts Coulomb Born cross sections (σ[CB]) for electron-impact excitation of singly charged atomic ions to highly accurate cross sections at all incident energies T, even though the original Coulomb Born cross sections are reliable only at high T [14]. The E scaled cross section σ[E] is defined as: D. Excitation-Autoionization Total ionization cross sections of atoms and molecules consist of two components, direct and indirect ionization. Direct ionization accounts for the ejection of a bound electron directly into the continuum. The BEB/BED model is used to calculate direct ionization cross sections. The most significant source of indirect ionization is the excitation of an inner-shell electron to an unstable excited valence state (e.g., 3s → 3p excitation in aluminum). This excited state can decay either by emitting a photon or by ejecting an electron. The latter is called excitation-autoionization (EA). When the energy level of the excited state is not very high compared to the lowest ionization energy, EA dominates. For the ionization cross sections of atoms in this database, cross sections for significant EA (usually electric-dipole and spin-allowed excitations) have been calculated using the BE and E scalings discussed in Section C above, and the EA cross sections are included in the total ionization cross sections. For further information, please contact: Dr. Karl Irikura, Chemical and Biochemical Reference Data Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8320; phone 301-975-2510; fax 301-869-4020; email Feedback.
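The scaled-cross-section definitions above are likewise shown as images on the original page. The published expressions, σ[BE] = σ[PWB]·T/(T + B + E) and σ[E] = σ[CB]·T/(T + E), are assumed in this sketch; B is the binding energy of the excited electron and E the excitation energy.

```python
def be_scaled(sigma_pwb, T, B, E):
    """BE scaling for neutral atoms: damp the plane-wave Born cross
    section at low incident energy by the factor T / (T + B + E)."""
    return sigma_pwb * T / (T + B + E)

def e_scaled(sigma_cb, T, E):
    """E scaling for singly charged ions: factor T / (T + E)."""
    return sigma_cb * T / (T + E)

# At high T both scalings leave the Born value essentially unchanged;
# near threshold they reduce it substantially (illustrative numbers).
low  = be_scaled(1.0, T=15.0, B=13.6, E=10.2)
high = be_scaled(1.0, T=1e5,  B=13.6, E=10.2)
```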
Posts from September 2011 on Gowers's Weblog When I started writing about basic logic, I thought I was going to do the whole lot in one post. I’m quite taken aback by how long it has taken me just to deal with AND, OR, NOT and IMPLIES, because I thought that connectives were the easy part. Anyway, I’ve finally got on to quantifiers, which are ubiquitous in advanced mathematics and which often cause difficulty to those beginning a university course. A linguist would say that there are many quantifiers, but in mathematics we normally make do with just two, namely “for all” and “there exists”, which are often written using the symbols $\forall$ and $\exists.$ (If it offends you that the A of “all” is reflected in a horizontal axis and the E of “exists” is reflected in a vertical axis, then help is at hand: they are both obtained by means of a half turn.) Let me begin this discussion with a list of mathematical definitions that involve quantifiers. Some will be familiar to you, and others less so.
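For readers who program, finite analogues of the two quantifiers behave like Python's all() and any(); a small illustration (finite domains only, so merely suggestive of the infinite case):

```python
# "for all" and "there exists" over a finite domain map onto Python's
# all() and any(); negating one quantifier yields the other.
domain = range(1, 20)
P = lambda n: n % 2 == 0   # the predicate "n is even"

forall_P = all(P(n) for n in domain)   # ∀n P(n) -- False: odd numbers exist
exists_P = any(P(n) for n in domain)   # ∃n P(n) -- True: 2 is even

# De Morgan duality: ¬∀n P(n)  ⇔  ∃n ¬P(n)
assert (not forall_P) == any(not P(n) for n in domain)
```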
partially ordered set
partially ordered set (plural partially ordered sets)
1. (set theory) A set having a specified partial order.
2. (set theory) Said set together with said partial order; the ordered pair of said set and said partial order.
Usage notes
• The two senses are commonly used interchangeably, there rarely being a need to distinguish between the two entities.
Translations
set having a specified partial order
• Japanese: 半順序集合 (はんじゅんじょしゅうごう, hanjunjo-shūgō)
Last modified on 8 October 2013, at 20:39
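The first sense of the definition can be checked mechanically on a finite set; a sketch (divisibility on a set of integers is the classic example of a partial order that is not total):

```python
from itertools import product

def is_partial_order(S, leq):
    """Check that leq is reflexive, antisymmetric, and transitive on
    the finite set S -- the defining axioms of a partial order."""
    return (all(leq(a, a) for a in S)
        and all(not (leq(a, b) and leq(b, a)) or a == b
                for a, b in product(S, S))
        and all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                for a, b, c in product(S, S, S)))

divides = lambda a, b: b % a == 0
S = {1, 2, 3, 4, 6, 12}
poset_ok = is_partial_order(S, divides)               # a partial order
not_poset = is_partial_order(S, lambda a, b: a != b)  # fails reflexivity
```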
The Alexander-Conway Polynomial Notice. The Knot Atlas is now recovering from a major crash. Hopefully all functionality will return slowly over the next few days. --Drorbn (talk) 21:23, 4 July 2013 (EDT) The Alexander-Conway Polynomial From Knot Atlas (For In[1] see Setup) In[2]:= ?Alexander Alexander[K][t] computes the Alexander polynomial of a knot K as a function of the variable t. Alexander[K, r][t] computes a basis of the r'th Alexander ideal of K in Z[t]. In[3]:= Alexander::about The program Alexander[K, r] to compute Alexander ideals was written by Jana Archibald at the University of Toronto in the summer of 2005. In[4]:= ?Conway Conway[K][z] computes the Conway polynomial of a knot K as a function of the variable z. The Alexander polynomial $A(K)$ and the Conway polynomial $C(K)$ of a knot $K$ always satisfy $A(K)(t)=C(K)(\sqrt{t}-1/\sqrt{t})$. Let us verify this relation for the knot 8_18: In[5]:= alex = Alexander[Knot[8, 18]][t] Out[5]= 13 - t^(-3) + 5/t^2 - 10/t - 10 t + 5 t^2 - t^3 In[6]:= Expand[Conway[Knot[8, 18]][Sqrt[t] - 1/Sqrt[t]]] Out[6]= 13 - t^(-3) + 5/t^2 - 10/t - 10 t + 5 t^2 - t^3 The determinant of a knot $K$ is $|A(K)(-1)|$. Hence for 8_18 it is In[7]:= Abs[alex /. t -> -1] Out[7]= 45 Alternatively (see The Determinant and the Signature): In[8]:= KnotDet[Knot[8, 18]] Out[8]= 45 $V_2(K)$, the (standardly normalized) type 2 Vassiliev invariant of a knot $K$ is the coefficient of $z^2$ in its Conway polynomial: In[9]:= Coefficient[Conway[Knot[8, 18]][z], z^2] Out[9]= 1 Alternatively (see Finite Type (Vassiliev) Invariants), In[10]:= Vassiliev[2][Knot[8, 18]] Out[10]= 1 Sometimes two knots have the same Alexander polynomial but different Alexander ideals. An example is the pair K11a99 and K11a277.
They have the same Alexander polynomial, but the second Alexander ideal of the first knot is the whole ring ${\mathbb Z}[t]$ while the second Alexander ideal of the second knot is the smaller ideal generated by $3$ and by $1+t$: In[11]:= {K1, K2} = {Knot[11, Alternating, 99], Knot[11, Alternating, 277]}; In[12]:= Alexander[K1] == Alexander[K2] Out[12]= True In[13]:= Alexander[K1, 2][t] Out[13]= {1} In[14]:= Alexander[K2, 2][t] Out[14]= {3, 1 + t} Finally, the Alexander polynomial attains 551 values on the 802 knots known to KnotTheory`: In[15]:= Length /@ {Union[Alexander[#]& /@ AllKnots[]], AllKnots[]} Out[15]= {551, 802}
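The relation $A(K)(t)=C(K)(\sqrt{t}-1/\sqrt{t})$ verified above for 8_18 can also be cross-checked numerically in plain Python. The Conway coefficients used below (1 + z² − z⁴ − z⁶) are inferred from the Alexander polynomial printed in Out[5] via the substitution, not quoted from the Atlas, so treat them as a derived assumption:

```python
from math import sqrt, isclose

# Alexander polynomial of 8_18 as printed in Out[5]:
alexander = lambda t: 13 - t**-3 + 5/t**2 - 10/t - 10*t + 5*t**2 - t**3
# Conway polynomial inferred from it (z = sqrt(t) - 1/sqrt(t)):
conway = lambda z: 1 + z**2 - z**4 - z**6

# Numeric spot checks of A(t) = C(sqrt(t) - 1/sqrt(t)):
for t in (0.5, 2.0, 3.7):
    assert isclose(alexander(t), conway(sqrt(t) - 1/sqrt(t)))

determinant = abs(alexander(-1))   # |A(-1)| = 45, matching Out[7]
v2 = 1                             # coefficient of z^2 above, matching Out[9]
```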
West Mclean Geometry Tutor Find a West Mclean Geometry Tutor I have a masters in economics and a strong math background. I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through problems with students since that is the best way ... 14 Subjects: including geometry, calculus, probability, STATA ...I can help you study for the math, verbal, and writing sections to maximize your score. You tell me what area or areas you want to focus on, and I will plan our sessions accordingly to build your skills and confidence. I can give you advice on how to approach math and reading comprehension problems, how to learn vocabulary efficiently, and how to write effectively. 46 Subjects: including geometry, English, Spanish, algebra 1 ...I don’t particularly like memorization, so I prefer to teach in a logical way so that the material makes sense to the student instead of having them memorize tons of information that will be forgotten shortly after the test. I believe a solid understanding of some of the more basic subjects in s... 23 Subjects: including geometry, Spanish, calculus, statistics ...I'm a retired Professional Engineer with 30 years of experience applying math and science on the job. I helped my customers understand the technical parts of their projects, so I’ve been teaching math and science my whole career. These days, I do one-on-one and group tutoring in the subjects I know best. 31 Subjects: including geometry, chemistry, calculus, physics ...I taught Math on the high school level but I have also tutored students in elementary school and college. I have taught and tutored for ten years. I have experience in Algebra I & II, trigonometry, Pre-Calculus, Geometry, Calculus, and Pre-Algebra. 19 Subjects: including geometry, calculus, ASVAB, GRE
Finding the N(F) and V(f) the kernel and a values of a matrix, Questions January 3rd 2012, 12:28 AM Finding the N(F) and V(f) the kernel and a values of a matrix, Questions Hello, I would like you to read the attached image where I have presented the entire problem and my two questions. Attachment 23172 Please see the attached file. To clarify question 2 a bit more: How can I judge from the single parameter t which dim to take? How do I know whether it's a line, vectors parallel to a plane, or vectors in (three-dimensional) space? Is there any analytical way to check this? January 3rd 2012, 01:07 AM Re: Finding the N(F) and V(f) the kernel and a values of a matrix, Questions Have you tried to solve the system? If you do, you'll obtain one zero row, which means you have to choose one parameter. Solving the system: $\left \{ \begin{array}{ll} x_1+2x_3=0 \\ x_2-3x_3=0 \end{array} \right.$ Choose $x_3=t$ so we obtain: $\left \{ \begin{array}{ll} x_1=-2t \\ x_2=3t \\x_3=t \end{array} \right.$ The solution set is: $V=\left \{ \left ( \begin{array}{ccc} -2t \\ 3t \\ t \end{array} \right) | t \in \mathbb{R} \right \}$ January 3rd 2012, 01:23 AM Re: Finding the N(F) and V(f) the kernel and a values of a matrix, Questions Actually I am studying linear algebra on my own; I have never been to a lecture on linear algebra, which is why I have limited knowledge, and I have an exam that I am trying to pass very soon. I don't have a particular book either; I'm trying to read through some free books online, and I borrowed one book from the library where I saw the dim values, however they did not explain how to set the dim values. * Thanks for your answer Siron, but could you specify what a one zero row looks like? * I don't understand why you pick x3 = t; could you pick any one to be t? Could someone answer that second question? How do I know whether something is dim vl = 1, dim vpi = 2, or dim V = 3? In the exam problem that I presented to you above, they got dim v(F) = 2.
I still don't get it, though. January 3rd 2012, 03:03 AM Re: Finding the N(F) and V(f) the kernel and a values of a matrix, Questions We are given the system: $\left \{\begin{array}{lll} 2x_1+4x_3=0 \\ x_1+x_2-x_3=0 \\ -x_1+3x_2-11x_3=0\end{array} \right.$ We can solve the system if we apply row operations to the following matrix: $\left ( \begin{array}{ccc} 2&0&4 \\ 1&1&-1 \\ -1&3&-11 \end{array} \right)$ $\sim\left ( \begin{array}{ccc} 1&0&2 \\ 1&1&-1 \\ -1&3&-11 \end{array} \right)\sim\left ( \begin{array}{ccc} 1&0&2 \\ 0&1&-3 \\ 0&3&-9 \end{array} \right)\sim\left ( \begin{array}{ccc} 1&0&2 \\ 0&1&-3 \\ 0&1&-3 \end{array} \right)\sim\left ( \begin{array}{ccc} 1&0&2 \\ 0&1&-3 \\ 0&0&0 \end{array} \right)$ Therefore we have reduced the given system to: $\left \{ \begin{array}{ll} x_1+2x_3=0 \\ x_2-3x_3=0 \end{array} \right.$ We can express $x_1$ and $x_2$ in terms of $x_3$, so choose $x_3=t$. January 3rd 2012, 03:05 AM Re: Finding the N(F) and V(f) the kernel and a values of a matrix, Questions I get what you mean by a one zero row now, thanks. I must ask, though, if you know the reason for choosing dim (Vf) = 2. Are you guys familiar with this terminology? January 3rd 2012, 06:06 AM Re: Finding the N(F) and V(f) the kernel and a values of a matrix, Questions Terminology? Even if you are studying Linear Algebra on your own, you should have seen the definition of "dimension of a vector space". Here, as has been pointed out, row reduction shows that the original system of equations, giving the kernel of F, is equivalent to $x_1+ 2x_3= 0$ and $x_2- 3x_3= 0$ which are, of course, equivalent to $x_1= -2x_3$ and $x_2= 3x_3$. That means that any vector in the solution space (the kernel of the linear transformation) is of the form $<x_1, x_2, x_3> = <-2x_3, 3x_3, x_3> = x_3<-2, 3, 1>$.
<-2, 3, 1> spans the entire kernel and, of course, a set containing a single vector is "independent", so it is a basis for the kernel - the kernel has dimension 1. There is a theorem, that perhaps you have not met yet, that says that if F is a linear transformation from vector space U to vector space V, the sum of the dimensions of the kernel and image is equal to the dimension of U. Here U has dimension 3 and the kernel has dimension 1, so the image has dimension 2. That is, F maps all of $R^3$ into a 2-dimensional subspace of $R^3$. If you have not had that theorem, you can prove that the image has dimension 2 directly: let <a, b, c> be some vector in the image. Then $2x_1+ 4x_3= a$, $x_1+ x_2- x_3= b$, and $-x_1+ 3x_2- 11x_3= c$ for some $x_1, x_2, x_3$. We can put that into an augmented matrix and row reduce: $\begin{bmatrix}2 & 0 & 4 & a \\ 1 & 1 & -1 & b \\ -1 & 3 & -11 & c\end{bmatrix}$ $\begin{bmatrix}1 & 1 & -1 & b \\ 0 & 1 & -3 & b- \frac{a}{2}\\ 0 & 0 & 0 & c- 3b+ 2a\end{bmatrix}$ which is solvable if and only if $c- 3b+ 2a= 0$, or $c= 3b- 2a$. That is, any such vector can be written $<a, b, c>= <a, b, 3b- 2a>= <a, 0, -2a>+ <0, b, 3b>= a<1, 0, -2>+ b<0, 1, 3>$. Those two vectors span the image of F and are independent, so they are a basis - the image of F has dimension 2. January 3rd 2012, 11:05 PM Re: Finding the N(F) and V(f) the kernel and a values of a matrix, Questions Thank you. I am studying linear algebra in a language other than English; that's why I am confused about the terminology.
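The elimination carried out in this thread can be reproduced with exact rational arithmetic; a sketch (not the forum's own code) that confirms rank 2, nullity 1, and the kernel vector (-2, 3, 1):

```python
from fractions import Fraction

def rank_of(M):
    """Gaussian elimination over the rationals; returns the rank.
    Works on a copy, so M itself is left untouched."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c] != 0), None)
        if piv is None:
            continue                       # no pivot in this column
        M[rank], M[piv] = M[piv], M[rank]
        M[rank] = [x / M[rank][c] for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c] != 0:
                f = M[r][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

F = Fraction
A = [[F(2), F(0), F(4)], [F(1), F(1), F(-1)], [F(-1), F(3), F(-11)]]

rank = rank_of(A)        # dimension of the image
nullity = 3 - rank       # dimension of the kernel, by rank-nullity
# The kernel vector (-2, 3, 1) is annihilated by every row of A:
kernel_ok = all(sum(row[i] * v for i, v in enumerate((-2, 3, 1))) == 0
                for row in A)
```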
I need to set up a proportion for this question: A recipe for chocolate chip cookies requires a half cup of sugar for a serving of 8 cookies. You need to make 30 cookies for your Math class and decide to use proportions to calculate how much sugar you need. Tell me if I am doing this right please :)
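For what it's worth, the proportion sets up as (1/2 cup)/8 = x/30; a quick check with exact fractions:

```python
from fractions import Fraction

sugar_per_batch = Fraction(1, 2)   # cups of sugar
cookies_per_batch = 8
cookies_needed = 30

# Cross-multiplying the proportion (1/2)/8 = x/30:
x = sugar_per_batch * cookies_needed / cookies_per_batch
# x = 15/8 cups, i.e. 1.875 cups of sugar
```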
Calculate An Average Daily Expense Cell I've set up a simple spreadsheet to keep track of my food expenses every month. The first column is for the date, the second is for the daily total expenses and then the next three columns are where I add the data which is then calculated into the daily total column. I also have a total at the bottom for the entire month. Now what I want to do is I want to also have underneath the grand total, a cell which keeps track of my average daily expenses. Basically I want to divide the total expenses by the number of days for which I've entered data. Now normally this would be fine but because I've applied the formula to all the cells in the expense column, it automatically lists every day as "0" rather than leaving it blank. So when it does the average calculation it's dividing my total by 30 days rather than by only the 7 days I have data for. Related Forum Messages: Calculate Daily Average Of The Several Variables I'm working on a time series dataset with a time step of 15 minutes. I need to calculate daily averages of several variables. So let column "A" be the "date-time" column, let column "B" be the "variable column" and column "C" be the "average column". I need a function that calculates in cell C1 the average of cells B1 to B95, in cell C2 the average of cells B96 to B190, in cell C3 the average of cells B191 to B285, etc. View Replies! View Related Categorized Daily Expense To Monthly Basis While I was working on my daily expenses I came up with this issue. I eat out while I am at work. Sometimes I go for pizza, sometimes Mexican, etc. What they have in common is the word FOOD. I would like to modify the formula suggested by Ron Coderre. See the attached file to get a better idea of my question. Then I would like to highlight entire rows which contain a specific text. View Replies!
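A common spreadsheet fix for the first question above is to average only the nonzero cells, e.g. =AVERAGEIF(B2:B31,">0") or =SUM(range)/COUNTIF(range,">0") (the range is an assumption about the layout). The same logic in Python, with made-up daily totals:

```python
# Daily totals for the month; zeros are days with no data entered yet.
daily_totals = [12.50, 0, 8.20, 0, 15.00, 9.30, 0, 11.00, 6.75, 14.20, 7.85]

entered = [v for v in daily_totals if v > 0]    # ignore the empty (zero) days
average_daily = sum(entered) / len(entered)     # divide by days with data only
```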
View Related Daily Average Formula I need to count the daily average of a task to a week ending number. I need to see the current average after each day during the week. Example – Mon = 2, Tues = 4 AVERAGE is 3 – Wed = 2 AVERAGE IS NOW 2.6…and so on averaging out after each day is added. View Replies! View Related Average Daily Values I have a table of data covering the last 9 months based on values automatically collated from 15 minute intevals. The date/time is in column A (01/01/2009 00:00) with the data collected in column D. My wish is to get the average daily data from column D and I am slowly losing my head!!! Is there anyway of getting a formula to auto-average the daily values bearing in mind there are currently 96 daily entries. I have tried converting the first 5 digits of column A to numeric (i.e. 31894 for 01/01) then trying to write a formula saying =average(D1:D24577,if(range="31894",1)). I can now see a simpler way but am so confused after an hour or so of trying. Each day has 96 readings so I need an auto adding formula. average column cell A would say =average(D1:D96). Is there are way to have the cell below auto-update itself to look at the next 96 values and so on and so forth? View Replies! View Related Average Formula On Daily Basis How to go about fixing my spreadsheet so I am not having to manually update it each day..here is my forumla I am currently using...=(AVERAGE($D$2:$AH$2)-C5)*AI5...basically i need the cell below in D2 to change as every day a new day rolls off..for example the following day I need this formula to be =(AVERAGE($E$2:$AH$2)-C5)*AI5 ....so just that day changes.....do I need to use an If/then statement? if so how? View Replies! View Related Average Daily Data By Month I have a workbook with two sheets - DATA and SUMMARY. DATA has two columns - date and data_value. 
Data will be added to this sheet on a regular basis SUMMARY has two columns - month and average In the column for average I would like a formula to calculate the average of data_value for each month without having to manually determine the range for the particular month. View Replies! View Related Formula For Average Daily Balance Am trying without success to create a formula to calculate average daily balance from a ledger that has a variable amount of entries per month. The variability of # of entries has me stumped. For date, amount 1/1/2005, 10 1/5/2005, 1 1/10/2005, 4.65 1/18/2005, 7 1/22/2005, 20 Aver Daily Bal = 23.78. and I can get this easily manually, but I'd like a more automated solution. I'm trying a sumproduct angle, to no avail. View Replies! View Related Summarize Daily Data Into Weekly Average I have two time series which span several years. The first series measures stock levels on every Friday (52 values a year). The second series measures the price level every weekday (260 values a I'd like to condense the daily data in to a weekly average, can I do this easily? For example, I could manually use the Weeknum function to calculate the week number of each daily price data, then find the average daily price for each week, thus giving me 52 values which I can compare to the weekly stock series. Is there an automatic, fast way of doing this? Alternatively, I'd be happy to settle with a monthly average. Is this possible via macro's or does VBA need to be used? View Replies! View Related Average Based On Criteria :: Count Daily Usage I need to monitor the average daily usage of a liquid tank for a customer. We fill this tank every few weeks. The formula I am looking for would ignore the fills and just count the daily usage. View Replies! View Related How To Calculate Daily Compound Interest I am trying to set up my excel to calculate daily compound interest. The amount is 10,000 at 0.75% per day for 6 months. 
I have tried several different things with no success - View Replies! View Related Calculate Daily Interest Rate Some years ago I came across a formulae to calculate Daily Interest on a Building Society Savings account in the UK. I have used this since but find my calculations never work out the same as my BS, although to my advantage! It is =B3*B4/360*DAYS360(B5,B6,TRUE) Where: B4=Interest Rate B5=Starting Date B6=Finishing Date For some reason the formulae uses 360/year and not 365/year. Using both still gives wrong answer. View Replies! View Related Formula To Calculate Average Of Every Other Cell I have a row of data starting in cell E4 that could, theoretically, go to the far right end of the spreadsheet. I need to enter a formula in cell D4 that calculates the average of every other cell in this row, starting with E4, that is E4,G4,I4,K4... View Replies! View Related SUMIF To Calculate Daily Gains/losses I currently have a profit/loss spreadsheet where I use a SUMIF to calculate daily gains/losses. =SUMIF('Daily Sheet'!B:B,B377,'Daily Sheet'!H:H) Basically, on the 'Daily Sheet' page when a sale is made I write it in a row. Each sale gets its own row. One of the column's has the date. (Column B) Column H has the respective gain or loss for that sale/purchase. B377 in this case contains the date 18-Nov. (B360 to B389 has Nov 1 through Nov 30) So this cell will calculate all the gains and losses in column H for each sale/purchase entry I have for that day. Some days I have 30 rows, others only 2 or 3, so this works great. My problem now is that my rows have gotten much more complicated and detailed, and now I'd like to be able to only sum column H:H if it meets TWO criteria. (I'd like to add in Column C that specifies the type of sale or purchase being made, which is R, T or S for my purposes.) So I'd like to somehow write a sumif that only adds up the sales for 18-Nov AND sale type R. View Replies! 
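For the two-criteria sum in the last question above, Excel 2007 and later provide SUMIFS, e.g. =SUMIFS('Daily Sheet'!H:H,'Daily Sheet'!B:B,B377,'Daily Sheet'!C:C,"R") (the cell references are taken from the question; the criteria layout is an assumption). The equivalent logic in Python, with illustrative rows:

```python
# (date, sale_type, gain_or_loss) rows -- illustrative data only
rows = [
    ("18-Nov", "R",  120.0),
    ("18-Nov", "T",  -35.0),
    ("18-Nov", "R",   40.0),
    ("19-Nov", "R",  300.0),
    ("18-Nov", "S",   10.0),
]

def sum_ifs(rows, date, sale_type):
    """Sum the gain/loss column only where BOTH criteria match."""
    return sum(g for d, t, g in rows if d == date and t == sale_type)

nov18_r = sum_ifs(rows, "18-Nov", "R")   # 120 + 40
```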
View Related Calculate Monthly Standard Deviation From Daily Data I have daily data of a stock index's returns and I would like to calculate the monthly standard deviation. Currently, I'm using the following worksheet functions: =STDEVP(C2:C20)*SQRT(COUNT(C2:C20)) However, the range changes from month to month, which makes the process of calculating the monthly standard deviation quite tedious when I have about 10 years' worth of data. I assume I could somehow substitute the range with a dynamic range, but I'm struggling to come up with the correct formulation that would do that. View Replies! View Related How To Calculate Camp Meals Statement Based On Daily Sheet I have around 250 employees' camp meals statements. Each day we prepare an Excel sheet and enter the details (file attached for easy reference). I'm manually calculating the totals in each sheet: if an employee takes a meal we mark Y, otherwise N, and based on that I want the total meals daily. One more thing: based on the employee code I want the monthly statement in another sheet (same file attached). View Replies! View Related Calculate The Average Time I need to find the average time it takes students to take exams. I use the following formula =text(end time - start time, "h:mm"). I am able to calculate the amount of time it takes a student to take the exam. Now I need a formula to calculate the average time students take to complete a test. I have over 80 times I need to average. Whenever I try a formula I keep getting 0. View Replies!
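The monthly standard deviation question above can avoid hand-adjusted ranges by grouping the dates by month first; a sketch with made-up returns that reproduces =STDEVP(range)*SQRT(COUNT(range)) month by month:

```python
from math import sqrt

def pstdev(xs):
    """Population standard deviation, as Excel's STDEVP computes it."""
    m = sum(xs) / len(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# (date, daily return) pairs -- illustrative data only
data = [("2009-01-05", 0.010), ("2009-01-06", -0.004), ("2009-01-07", 0.002),
        ("2009-02-02", 0.007), ("2009-02-03", -0.012)]

by_month = {}
for date, r in data:
    by_month.setdefault(date[:7], []).append(r)   # key on "YYYY-MM"

# STDEVP(range) * SQRT(COUNT(range)), computed per month:
monthly_sd = {m: pstdev(v) * sqrt(len(v)) for m, v in by_month.items()}
```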
View Related Calculate The Average Difference Between Columns I am trying to determine the average reduction amount of appraised property values. I have two columns in my spreadsheet. Column H has appraised value of property. Column I has the accepted value of the property. Sometimes the accepted value is the same as the appraised value, sometimes it is reduced, and sometimes it is rejected. I want to be able to find the average reduction amount when the accepted value is less than the appraised value and is not rejected. Column H always shows a numerical value (i.e. $250,000), but Column I may have a numerical value or show "rejected". View Replies! View Related Calculate Average If Condition Met I have two columns of data. Column B is age column C is gender. I want to calculate average male age and average female age. Suggestions? B C 57 f 53 f 47 m 40 f 42 m Average female age is ___ Average male age is ___ View Replies! View Related Calculate The Average Of The Previous 12 Months I'm trying to make a formula that will calculate the average of the previous 12months. The goal is to tie the formula to a reference cell that contains a date. Each time the date is changed by a user the calculation will be updated accordingly. Here is the CSE formula that I thought would work: I've also attached a sample file to illustrate the problem. The 'range' portion ($C$5,(COUNT(C5:$C$53)-1)[/b]of the Offset function was setup simply to get the 12 months which preceeded the reference View Replies! View Related Calculate Average Over Variable Range I'd like to calculate an average over a variable range. In col.A there are grades from A4:A21. In col.C there are the values for the start row of the range and in col.D the values for the end row of the range. For instance the value in C4=4 and D4=9. In cell F4 I want the average calculated over A4:A9. Value in C5=10 and D5=15. In cell F5 I want the average calculated over A10:A15. View Replies! 
View Related Calculate An Average As Data Is Entered? I'm using Excel 2003. I have about 190 rows that I use on any given day to enter start times & end times. I calculate the difference in Column E. Is there a formula that will calculate the average time as I enter them in the rows? Some days may have only 100 entries, other days may have as many as 190. I don't want to keep adjusting the average formula for column E. View Replies! View Related Calculate Average In Pivot Table With Pivot Tables, there is the ability to add Grand Totals to Rows or Columns, but I want to add Averages to the end of the row instead. Can this be done? I have tried Calculated Fields but can't get the right result. Auto Merged Post;Hi again, After I posted this I found another similar post, where the answer was that the "Average" calculation has to be done outside the pivot table, ie. there seems to be no way the pivot will give averages for rows, only grand totals. If this is the case then I will have to work around it.... I was hoping it could be done within the pivot because I have graphs linking to the pivot and they all go spak when I update the pivot with different data. The number of columns will change all the time, meaning the average will need to be reworked. Just trying to save time! View Replies! View Related Calculate Average Where All Numbers Equal X I have done is created an Officer Evaluation Form in Word for my Police Chief and the Scores for the different observations are: N/A, 1, 2, 3, 4 and 5. If for example there are 4 observations and one of the observations is "N/A" for not applicable or not observed and the rest are all 5's I want the formula to ignore the field(s) with the N/A and still come up with an average of 5. The way I have it set up now which is: =AVERAGE(KOW1,KOW2,KOW3,KOW4) it comes up with an average of 3 when I put a N/A in field KOW1 and all 5's in KOW2-KOW4. View Replies! 
View Related Calculate Average Based On Matching Values From Two Worksheets Worksheet 1 I am calculating group averages for the following performers - very good, good, average, low, very low - for a series of factors. Worksheet 2 Contains the same factors with the values for which Im trying to work out the average. Each factor has a performance rating above it, either very good, good, average, low, very low. I need a formula which will match the performance rating from worksheet 1 (I3, J3, K3, L3, M3) to worksheet 2 and then calculate the averages of each factors based on those matches. View Replies! View Related Formula To Calculate Average Number Of Rebills Per Client I need help with a formula to calculate average number of rebills per client. I don't know how to get excel to add the number of unique client in a given row. Example Column A Client 1 Client 1 Client 1 Client 2 Client 2 Client 2 Client 3 Client 4 Client 4 Client 4 Client 4 Client 5 Formula needs to calculate number of Unique clients. In this case, the answer is 5, but how can I get excel to calculate for me? View Replies! View Related How Do I Create A Running Average That Will Only Calculate The Averages In % Each Month I need to do the following and can't figure it out. How do I create a running average that will only calculate the averages in % each month. Example: Opt 1 for Jan, Feb, Mar =1 each= 3 total = 100%; OPt 2 for Jan, Feb, Mar =1,0,1= 2 = 66%; Opt 3 for Jan, Feb, Mar = 0, 0, 1 = 1 total = 33%. My problems is I want monthly running average that shows the yearly percentage up to date but only for the months there is a value 1 or 0. How can this be done because the way I have it now, those % are being divided by 12 and that isn't the correct % View Replies! View Related Calculate The Average Of A Range If It Meets A Certain Text Criteria I want to calculate the average of a range...if it meets a certain text criteria. 
For example, if the product is a "Course", then take the average of pages all those courses together. ProductNumber of PagesExam316Course46Exam232Course32Exam245Course53Exam155Course246Exam118Course154Exam82Course434Exam80Average # of Pages for Courses = Average # of Pages for Exams = View Replies! View Related Calculate Average Based On Item Chosen From List My attached files contains stock returns for companies. Each sheet contains the returns over a 5 year period for a certain stock, with the ticker symbol of the stock used as the sheet name. I want to write a sub that presents the user with a user form. This user form should have an OK and Cancel buttons, and it should have a list box with a list of all stocks. The user should be allowed to choose only one stock in the list. The sub should then display a message box that reports the average monthly return for the selected stock. View Replies! View Related UserForm For An Expense Report I have an expense report with one row labeled as "Auto", and 7 columns labeled with Sunday, Monday, Tuesday, etc. I'd like to double-click a cell within that row and have a userform (or something else) pop up with 2 spots for data entry: "Personal" and "Company". A user would enter dollar amounts in one or both fields. After they're finished, I would like the total of what they entered to populate the cell that was double-clicked, but still have that breakdown available or even be able to change it by re-double-clicking the cell. Is that even possible? To add complication, at the end of the row are two additional columns that total personal and company expenses. I'd have to have all personal expense amounts sum together in its column, as well as all company expenses sum together in the other column. View Replies! View Related Expense Account Lookup The expense account we work with involves several currencies due to the international nature of the business. 
With that, each row must show the currency involved, and the formula used from one line to the next makes it repeat the currency until changed; i.e. if cell B22 has a date entered then cell H22 will reflect the currency from cell H21: =IF(B22="","",H21). If cell H21 showed CDA for Canadian currency then H22 would also become CDA. Once changed manually, all cells below will then reflect the new currency until changed again. Using the lookup function we collect the individual amounts of each currency and run totals at the bottom. Therefore, if there were three entries in SGD (Singapore Dollars) and two entries in EUR (European Euros) etc., each row will do a lookup by the three letter currency code and collect the sum of each currency. SGD could repeat later again and, when changed manually, will be included in the lookup. At present we are entering the three digit codes manually, i.e. SGD. What we would like to do is automatically have the bottom pick up the currency change when a new currency is entered on the individual rows in the top part of the expense sheet. So if cells H21:H23 were SGD and cells H24:H25 were EUR, then cell A58 would show SGD and cell A59 would somehow be able to grab the EUR etc. One of the same currencies may repeat itself and the lookup will capture the additional totals, but SGD would only show once.

Formula To Calculate Average/wtd Avg On A Subset Of A List

I will do my best to explain, but just in case I have attached a worksheet to make things easier. I have a list of a few thousand products with data on each product. When I run through a series of cuts, I get a subset list and want to be able to calculate a weighted average by somehow saying to do a weighted average (and/or count, and/or average, etc.) on the characteristics of only the products in the subset.
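Outside Excel, the weighted-average-of-a-subset calculation in the thread above is just "filter first, then sum(value x weight) / sum(weight)". Here is a minimal Python sketch; the field names (`category`, `price`, `volume`) are made up for illustration:

```python
# Hypothetical product records: "price" is the characteristic to average,
# "volume" is the weight, and "category" drives the subset cut.
products = [
    {"category": "A", "price": 10.0, "volume": 100},
    {"category": "A", "price": 12.0, "volume": 300},
    {"category": "B", "price": 20.0, "volume": 50},
]

def weighted_average(rows, value_key, weight_key):
    """Weighted average of rows[value_key], weighted by rows[weight_key]."""
    total_weight = sum(r[weight_key] for r in rows)
    if total_weight == 0:
        return 0.0
    return sum(r[value_key] * r[weight_key] for r in rows) / total_weight

# The "cut": restrict to a subset first, then average only those rows.
subset = [r for r in products if r["category"] == "A"]
print(len(subset))                                  # 2
print(weighted_average(subset, "price", "volume"))  # 11.5
```

In Excel itself the same result typically comes from a SUMPRODUCT over the subset divided by the matching SUMIF (or SUM), but the arithmetic is identical.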
Formula In / With Dropdown: Calculate The Average %age In The Attached Spreadsheet

I need a formula to calculate the average %age in the attached spreadsheet. I would like to enter a score between 1 and 4, but with 1 = 10%, 2 = 25%, 3 = 80% and 4 = 100%. The score in the cell must still show as between 1 and 4, but the total must be an average of the relevant %ages; i.e. if scores are recorded as 1, 2, 3, 4, then the total average % will be (10%+25%+80%+100%)/4 = 53.75%. I'm not sure whether this should be in the Validation or in the Total cell.

Payment Expense Spreadsheet With Gst

I used erecord the other day to do my activity statement for the first time (I have just started a home based business which has not yet started trading, but I had to send the BAS for the purchases that I had made for the business) and it was quite easy to use, and you can send it electronically to the ATO, which saves a lot of hassle, particularly as I am not very accountancy literate. However, I am trying to develop an expenses/payment spreadsheet similar in function to erecord but that allows me to categorise the inputs. My headers are: Cbookref = (drop down validation box) similar to a chart of account #; Category = i.e. advertising, accounting fees, etc. - uses a lookup function with Cbookref to populate the field; Description .........................

Calculate The Weighted Average Of The Win Rate Based On Volume Of Calls

I have 3 sets of data for two different groups:

Group 1 - Inbound
- Total volume
- Gross adds
- Win rate (gross adds/total volume)

Group 2 - Outbound
- Total volume
- Gross adds
- Win rate (gross adds/total volume)

I need to calculate the weighted average of the win rate based on volume of calls. Is there any way to do that?

Calculate A 30-day Moving Average Based On The Last X Number Of Entries And Date

I have a worksheet that has all weekday dates in column 1 and values in column 2.
I want to create a 30-day moving average based on the last (non-zero) value in column 2. Since every month has a different number of days, I want it to search for the date that has the last value (since I don't get a chance to update it daily), go back thirty days from that date, and give an average of all the column 2 values, skipping any values that are null or zero.

Creating Personal Expense Tracking Form

I'm trying to make a spreadsheet that will track my expenses. What happens is I enter my daily expenses in a "Notes" worksheet. This includes the date, whether it's a debit/credit, and what category it is (rent, tuition, entertainment, work income etc). It looks like this:

Date         Debit    Credit    Category
4-May-06              $30       Dining out

Then I have another worksheet called "Expense outline" which pretty much sums all expenses in each category and displays a summary. So it would show how much I have spent in total on each category for each month. It looks something like this:

Dining out    $100

So what I did for the Entertainment summary for the month of May was, I used SUMIF(column of categories, "Entertainment", column of credits). This will look for the category name "Entertainment" in my "Notes" worksheet, and sum the corresponding amounts from the credit column. The problem is, I also want it to automatically differentiate between the different months. Right now, when I'm choosing the column of categories for May, I select only the cells in the month of May when I'm choosing my column of categories and credits. For example: ....................

Calculate The Average Of A Group Cells In One Column Based On The Condition Of Another Column

I'm trying to figure out if there is a formula I could use that will calculate the average of a group of cells in one column based on the condition of another column.
It's hard to explain, so I will show an example. All the data is on one worksheet and I'm trying to show totals and averages on another worksheet.

Location    Days
17          4
17          3
17          5
26          4
26          8
26          10
26          7

On a different worksheet I would want to know what the average days are for each location. So is there a formula that I could use that will look at column A for a specified location number and then average all the days in column B for that location? I'm using Excel 2003 and have tried using AVERAGE(IF) but with no success.

Using Offset From Latest Month To Calculate 3-month Average Within A Range

I have a spreadsheet that has columns of monthly values for three years of financial data, where the values for the latest month are added to the last column. Months that have not been completed will have a zero value (e.g. Jul-09).

Track A Number Of Expense Items Across 15 Worksheets With Up To 500 Rows Across 30+ Columns Per Worksheet

I'm looking to use Excel to track a number of expense items across 15 worksheets with up to 500 rows across 30+ columns per worksheet. Many of the learned people in this forum have helped me get this far; now I need some more assistance - please. In my spreadsheet I have a VLOOKUP formula that returns a value from another worksheet. Here's an example: =IF(ISERROR(VLOOKUP($D3,Room_Configs!$A$1:$BO$3006,MATCH(M$1,Room_Configs! ... This works brilliantly. Now here comes the tricky part. What I'd like to do is append that formula with another one to do a VLOOKUP on a second worksheet. If both lookups return a value then I'd like the value of the 1st VLOOKUP returned in the cell. If the value of the 1st VLOOKUP is "0", then I'd like the value of the 2nd VLOOKUP returned, and if the 1st and 2nd VLOOKUP values are blank then a "0" is returned.
The name of the 2nd worksheet is "Non_Network_Equip". Finally, it would be really great if the font colour for values returned from the 2nd VLOOKUP formula was blue.

Formula =AVERAGE(B16:L16) To Give The Average

I'm using the formula =AVERAGE(B16:L16) to give me the average. However I have a couple of problems with this. Firstly, I would like to exclude the value zero from the average. Secondly, I'd like to also ignore the lowest and highest values. Example: if the values in the cells are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 then the current result shows 5; by ignoring the 0 and the lowest value 1 and highest value 10, the average should be 4.5.

Updating Graph Daily

I'm handling a graph, line type, that needs to be updated daily, as daily another cell in the row will be filled. Can anyone tell me how I can make it update daily and still only show up to today's data? For instance: today is the 7th of May and I want only to show the evolution from the first of the month to the 7th, but tomorrow I want it to automatically show from the 1st to the 8th, and so on...

SUM Of Daily Inventory

I am trying to sum each daily inventory item. Previously I used the formula suggested by TEETHLESSMAMA: =SUMPRODUCT(--($A$5:$J$13=A19),$B$5:$K$13). But this formula does not work for the new format of inventory data. I tried to make some changes to it to get the result, but it is not working well.

Calculating Daily Quantities

I have 3 worksheets: Income; Expense; Consolidate. In the first two sheets I am entering, by dates, quantities that are getting in and out of the warehouse. My code copies that information into the consolidated sheet. What I need is to make a code that calculates the "Daily Quantities" and "Rent", based on the quantity in the warehouse, that I am paying each day.

Store Data Daily

I have two rows of data, one containing names and the other containing corresponding numbers.
The names are static and the numbers change on a daily basis. I want to be able to copy the numbers to a static table next to each name on a daily basis (so I can see what the value was a few weeks ago). Is there anything I can write to do this job? My thinking was to set a VLOOKUP to grab the data, but I'm not sure how this would work because the VLOOKUP would change daily when the numbers change.

Daily Spreadsheet References

I have a spreadsheet that needs to reference another spreadsheet to obtain a daily target figure. Unfortunately, the way the system is set up at work, each day of the year has its own spreadsheet in its own folder, and the figure I need has to be updated each day from the corresponding spreadsheet. At the moment I simply have 366 (a spare unused one for leap years) different formulas to compare dates and return the figure for today's date. The downside here is that it takes Excel 50 seconds to open the spreadsheet because of this, so I assume it's checking all of these figures in all those spreadsheets instead of just the one that's true. So I have =IF(E2=AF2,spreadsheet address,0) where E2 is today's date and AF2 is a date from a list. What I'd like is a method to do something similar but with one or two formulas that will simply update the address of the file I need the figure from based on what date it is, so that it will only look at one spreadsheet when it opens instead of all 365. I tried the following: IF(E$2=AF302,('C:spreadsheets('H:[Punch 2 Spreadsheet.xlsb]Punch 2 Metrics'!Z$28)[UVS Cell WEB.xlsx]Punch 2'!B$3)/1000,#N/A) where the section in bold replaces the part of the address with the date folders (20091091, for example), and instead has a cell reference which is formatted to replace this section and updates automatically each day. It does not work, obviously, and I wanted to know whether I'm just not formatting the formula correctly or if this idea is a dead end.
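The one-file-per-day setup above boils down to deriving a file path from a date. As far as I know, Excel's usual tool for this (INDIRECT over a date-built string) only works while the source workbook is open, which is worth checking before going further. For what it's worth, here is the path construction itself as a short Python sketch; the YYYYMMDD folder naming and the file name are assumptions based on the post:

```python
from datetime import date

def daily_target_path(d: date) -> str:
    """Build the path to the day's spreadsheet from the date.

    The folder layout (one folder per day, named YYYYMMDD) is an
    assumption for illustration -- substitute the real naming scheme.
    """
    folder = d.strftime("%Y%m%d")
    return rf"C:\spreadsheets\{folder}\Punch 2 Spreadsheet.xlsb"

print(daily_target_path(date(2009, 10, 9)))
# C:\spreadsheets\20091009\Punch 2 Spreadsheet.xlsb
```

The point is that the date-to-address mapping is one small function, evaluated once per day, rather than 366 precomputed formulas that all get checked on open.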
Amend To A Database Daily

I have a daily log for work that keeps track of purchases and returns, among other items, and I was wondering if there was a way I could have all this information put into a log that will amend everything for each week, month and year.

Sheet For Daily Sales

I have a query regarding making an Excel sheet for daily sales. Here I go: I want to make an Excel sheet where I just need to enter the Date, Invoice Number, Product, and Number of Products, and the rest it should calculate - the VAT (rounding off) amount and then the Grand Total. I'm giving you an example in the below sheet.
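The daily-sales sheet described above is mostly arithmetic: a subtotal, a rounded VAT amount, and a grand total. A small Python sketch of that calculation, with an assumed 5% VAT rate and the VAT rounded to the nearest whole unit (my reading of the post's "Rounding Off" - adjust both to taste):

```python
from decimal import Decimal, ROUND_HALF_UP

# VAT rate is an assumption for illustration; plug in the real rate.
VAT_RATE = Decimal("0.05")

def invoice_totals(qty: int, unit_price: Decimal):
    """Return (subtotal, rounded VAT, grand total) for one invoice line."""
    subtotal = qty * unit_price
    vat = (subtotal * VAT_RATE).quantize(Decimal("1"), rounding=ROUND_HALF_UP)
    return subtotal, vat, subtotal + vat

subtotal, vat, grand = invoice_totals(3, Decimal("199.50"))
print(subtotal, vat, grand)  # 598.50 30 628.50
```

Using Decimal rather than floats keeps the rounding exact, which matters as soon as the sheet totals many invoice lines.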
Energy methods and finite elements

Ferraro, M., Mace, B.R. and Ferguson, N.S. (2010) Energy methods and finite elements. In Brennan, M.J., Kovacic, Ivana, Lopes, V., Murphy, K., Petersson, B., Rizzi, S. and Yang, T. (eds.) Recent Advances in Structural Dynamics: Proceedings of the X International Conference. Southampton, GB, University of Southampton, 16pp.

Full text not available from this repository.

Energy methods represent the most widely used techniques in high frequency vibration analysis. At these higher frequencies, methods based on full finite element (FE) and modal analysis become very expensive computationally. However, FE methods and modal decomposition can be used in a variety of ways to develop energy models of structures. Some of these techniques are reviewed in this paper. First, results from FE analysis of the whole system can be post-processed to form an energy distribution model and, from that, an SEA-like model. This numerical implementation of the power injection method (or "virtual SEA") yields expressions for coupling loss factors etc., providing small models which can be used as the basis for engineering design modifications. It is still computationally costly, since it involves FE analysis of the complete structure. Model reduction techniques, such as Component Mode Synthesis (CMS), can be used to reduce the number of degrees of freedom (DOFs). The number of interface DOFs can also be reduced using, for example, characteristic constraint modes. Although the models are smaller, a full modal solution for the whole structure is still required and hence the cost might be excessive. Secondly, FE analysis can be applied to only part of the structure and corresponding SEA parameters estimated. The models are consequently smaller, but the results are an approximation. Finally, there are approximate approaches in which FE analysis of the individual substructures is performed.
The substructures are then regarded as sets of oscillators which are coupled together and between which energy flows. The most well known of these is Statistical modal Energy distribution Analysis (SmEdA), which employs a dual formulation to uncouple the substructures and estimate the mode-to-mode energy exchange. Another technique is based on some combination of free and fixed interface CMS for the analysis of the individual substructures, the models subsequently being coupled using a coupled oscillator theory. A critical overview of these techniques, focused on the aspects involved in modelling and substructuring, computational cost and accuracy, is presented.
MathGroup Archive: December 2010

Re: typesetting problems or bugs? need a professional stylesheet

• To: mathgroup at smc.vnet.net
• Subject: [mg115038] Re: typesetting problems or bugs? need a professional stylesheet
• From: Eric Brown <eric.c.brown at mac.com>
• Date: Wed, 29 Dec 2010 05:57:02 -0500 (EST)
• References: <ifcrkr$boq$1@smc.vnet.net>

sean k <seaninsocal at gmail.com> writes:

> Hello Group.
> I'm having multiple problems while trying to typeset a few notes and
> other documents. The main thing seems to be Mathematica's tendency
> to change italics to non-italics automatically under certain
> circumstances.
> I am using the "textbook stylesheet." If I use the default stylesheet,
> the line spacing is screwed up. So far it seems like the textbook
> stylesheet gives me what I need.
> I'm also using ctrl9 and ctrl0 to open and close any formula portion
> of the text. That seems to make the formulas look nice and
> mathematical, i.e. italicized.
> But I'm encountering a few problems that I just can't get around. So
> first use the textbook stylesheet under menu>format>stylesheet>book>
> textbook. Then these formulas are entered in a text cell by first
> typing ctrl9 to open the invisible formula box. Then when done
> inputting, ctrl0 will close the box.
> I'm running Windows Vista Home Edition 64 bit and Mathematica v8. I
> also have v7 installed. The problem seems to be the same in both
> versions.
> 1. Typing \[CapitalDelta]y \[TildeTilde] f'(x)\[CapitalDelta] x .
> If I change \[CapitalDelta] x (with the space) to \[CapitalDelta]x,
> it will change the italics to non-italics.
> 2. d/(d x) vs d/dx
> First ctrl9 then ctrl/ to make the fraction. And if I put d/d x
> into the fraction box, it will retain the italics, but if I put in
> d/dx, the italics go away.
> So the space makes the italics go away in both cases.

Hi Sean,

I don't have a solution to your problem, and I suffer from this problem as well.
In my case, it is with the \[Delta] like one would use in the calculus of variations, e.g. \[Delta]x . When there is a space, it seems that Mathematica believes that the characters are math symbols and then makes them italic. When there is not a space, it thinks that it is a word or perhaps a special function like cos and sin, etc., and makes the letters regular face. If I could offer a couple of suggestions: 1) Use \[DifferentialD], although I think it's kind of ugly. 2) Export to LaTeX, and all of these problems seem to go away. I don't think that there is an analogous \[DifferentialD] for the variational \[Delta], but if someone could offer a suggestion I would be

> 3. Lastly, I would really like to get my hands on a stylesheet that
> most mathematicians would use to, say, write a professional manuscript
> or a textbook or maybe even a thesis. Can anyone help me here?

Short Answer: In Mathematica version 8, there are some new style sheets under File... New... Styled Notebooks. They seem to look nicer than those in previous versions of Mathematica, which had gargantuan letters for Title/Subtitle, etc. and tiny little letters for standard text.

Long Answer (and more for therapy than providing this newsgroup with information): It seems that this has been a touchy subject with a number of folks. People are very sensitive to ugly math and really appreciate good math typesetting. At the same time, people want to "write once" but deploy everywhere, including:

* LaTeX
* HTML / Blogs
* Word Processing
* Cross-platform, minimal external dependency
* literate programming (code combined with documentation)
* Movies/Animations, Interactivity

I have settled on researching and writing in Mathematica, and then exporting to those other formats when I need to churn out a product for immediate consumption. Sometimes a PDF from Mathematica is perfect. Sometimes the HTML output is just fine.
However, I oftentimes want to replace one of the crappy GIFs with a high-res PNG, and so I do that by hand. So, finally, I load whatever export format into emacs and drive the rest of the way, doing it by hand. In my experience (two dissertations and a bunch of thirty-page articles), simply having enough research to be able to write them is 99.9% of the work. Once I have that work done, I would re-type the dissertation/paper a dozen times in any language/system whatsoever if I knew that it would lead to an attractive finished product.

With each version of Mathematica, I try to see if it is the panacea to all of my scientific research and publishing needs. Combined with emacs (or Textmate, etc.) to polish up the specialized outputs, I think it's the best thing money can buy, and it is a lot easier than custom-writing everything from scratch. That being said, stock Mathematica will not output a site that looks as attractive as the Wolfram sites. There is a lot of other technology there, just as there would be when writing a book/dissertation, which is the topic of your inquiry.

P.S. I apologize to the readers of the list who have much stronger and well-formed opinions than I do regarding Mathematica as a publishing tool. I believe that Wolfram is putting a ton of effort into this area. Also, I seem to recall some very thoughtful posts by David Park which seem to suggest that these problems would go away if the Mathematica Player were unencumbered.
Most common answers on the SAT

So I saw someone on Yahoo! Answers suggesting that C was the most common answer choice on the SAT. I hear people say this all the time and it drives me batty, mostly because I'm afraid students will actually consider such misinformation actionable and fill in a bunch of C's. You deserve what you get when you do that, I know, but still I worry. Anyway, I decided to spend some time this morning counting up all the choices in the Blue Book (leaving out Grid-Ins, obviously) to prove a point, and ended up proving the point much less firmly than I had hoped to. There's actually a fair amount of variability over what seems to me to be a big enough sample to mitigate most of the noise. Here's a link to the data. I don't really expect any of this to be useful information, but I did want to solicit your opinions on this, since this is a community of people who seem to think about the SAT as much as I do. Also, I know there are some people on this board who are at university and still poke around here. I'm interested in hearing from someone who's taken a stats class more recently than me about the statistical significance of, say, the infrequency of A. So, you know, if you wanna nerd it up with me, I'm up for that. PM me.

Replies to: Most common answers on the SAT

lol do you have any hobbies, sports, or friends to entertain you? sorry but this is kind of extreme hahahaha

A fair question, to be sure. I get paid pretty well to be very good at the SAT, though. This is work, but I also kinda enjoy it.

ya i was going to say this is kind of interesting though

A little while ago, I did a study of this question using 13 past (QAS) exams (math only). The results are consistent with a uniform random distribution of answers, i.e., p(A) = p(B) = p(C) = p(D) = p(E) = 0.2.
For a total of 572 multiple choice questions, here were the letter frequency numbers and corresponding z-scores:

A 108 -0.67
B 116 0.17
C 119 0.48
D 120 0.59
E 109 -0.56

(expected value for each: 114 to 115). You would need a z-score of more than 1.96 or less than -1.96 to say with a decent level of confidence that these letter frequencies are NOT uniformly random.

Actually, little do most people know, each SAT has a secret code built into the answers. If you get the first 5-6 right in the section, it should only take you 10-15 minutes to crack the rest of the code. In June 2007, they used the Fibonacci Sequence (alphabetical with the usual encoding exclusions, of course) for most of the Writing section, but it was too obvious; that's why so many people got 760-800 on the June 2007 Writing. But there is suspicion that people knew ahead of time because a North Korean stole the code. I was able to crack my last Reading one (ha, and who says Tuvaluan history never came in handy), but screwed up the code on one of the math sections. For the essay, I went with the ol' binary bypass, but the computer picked up the 874th 0 as an O, so I didn't get a 12.

Wait, I thought the code was the digits of pi (mod 5)!
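For anyone who wants to reproduce fignewton's z-scores above: under the null hypothesis, each letter's count is Binomial(n = 572, p = 0.2), so z = (count - np) / sqrt(np(1-p)). A quick Python check (the printed values match the table):

```python
from math import sqrt

counts = {"A": 108, "B": 116, "C": 119, "D": 120, "E": 109}
n = sum(counts.values())      # 572 questions in total
p = 0.2                       # null hypothesis: letters uniformly random

mean = n * p                  # 114.4 expected per letter
sd = sqrt(n * p * (1 - p))    # binomial standard deviation, about 9.57

for letter, c in counts.items():
    z = (c - mean) / sd
    print(f"{letter} {c} {z:+.2f}")
```

A chi-square goodness-of-fit test over all five counts at once would be the textbook way to test uniformity, but the per-letter z-scores tell the same story here: nothing comes close to the 1.96 cutoff.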
I guess I deserve the ribbing for going so aggressively public with my nerdery this morning. Thanks, though, fignewton, for admitting that you've also done something similar before. I wasn't expecting any surprising results, but the difference of 50 between A and D raised my eyebrow a tad. After having a look at buffalowizard's post...not as surprising as I thought. Thanks guys! Nothing too nerdy about this at all! Some related nerdery: pick up any section, look at the answers and you will find things like: 12 in a row with no 'd', only 2 e's in an entire section...other similar weird stretches -- and these are all completely normal. People just don't expect weird sequences as often as they come up. I forget where I read this, but it seems related: ask half the students in a class to flip a coin 100 times and record the results, while the other half just pretends to flip coins, but actually just makes up a random string of 100 h's or t's. Then examine the data. You can usually tell who did the experiment vs who faked it. The fake data doesn not have as many weird strings: runs of lots of h's in a row say. For example, 5 in a row seems unlikely, but in a random set of 100, it is actually more likely than not to have at least one such stretch. Of course, that leads to a probability question, too hard for the sat: you flip a coin 100 times. What is probability that you do not get at least one stretch of 6 h's or 6 t's in a row? Fibonacci numbers mod 5 would be interesting This reminds me of the 2012 AMC12A, where the answers to #22 through #25 were all C. PWNtheSAT: Great analysis--I think it's quite interesting to look at the data, and it's perfectly valid as a hobby. If I were an evil SAT test designer, I'd make the answers to the Level 5 questions more likely to be A or E than anything else--the theory being that those who were just guessing would go with B, C, or D. Bwa ha ha! 
(Oops, kind of a give-away) Hey, you're that guy who's the reason I did so well on all the sections! YOU ROCK!
T-Math’s enthusiasm for numbers and solutions to real-world problems makes this a title that math teachers can sink their teeth into. From the moment he bursts out of his shell, T-Math thinks mathematically, making number sentences to express how many digits he has and the number of kids in his family. He counts footprints by twos and uses fives and tens to group and count a herd of triceratops. He checks his subtraction with addition, draws pictures to solve word problems, creates pictographs and thinks in pie graphs. And it is his estimation skills that save his sister, who gets stranded on the wrong side of a canyon after an earthquake. From that day on the entire family appreciated his love for math and learned all they could from him. Backmatter provides an index of the different skills T-Math uses. Cushman’s brightly colored acrylic illustrations nicely show readers the math involved without diminishing in any way the personalities of the dinosaurs. The ultimate melding of a topic kids love with knowledge they need. (Picture book. 5-8)
Calculus Tutors - Carlsbad, CA 92009

Recent Cal Poly graduate who loves tutoring math!

...am a recent graduate from Cal Poly looking for students in need of a math tutor. I majored in Physics and minored in Mathematics with a 3.5 GPA. I tutor Pre-Algebra, Algebra, Geometry, , and Physics for all grade levels. I have a year of math...

Offering 8 subjects including calculus
Is there a way to convert MS 2010 Equation to Object in MS Equation 3.0?

I have a lot of equations (for faculty) written in MS Equation (button from the right side) and saved in .docx format. All was good until my professor told me that he has MS 2003, so I have to convert from docx to doc format and the equations must be editable. I don't have enough time to rewrite all the equations in MS Equation 3.0. Is there a way to convert from MS Equation to MS Equation 3.0 Object so that it is recognized and editable in Word 2003?

asked Dec 1 '11 at 10:39

Tags: microsoft-word, microsoft-word-2010, equations, docx, equation-editor
correspondence between invariant forms and Lie groups

In Lie theory, one often asks about alternating forms on $\mathbb{R}^n$ which are invariant under some particular subgroup $G\subseteq GL_n(\mathbb{R})$, and there is always some algebra of invariant forms associated to $G$. For example, $SO(n)$ preserves the algebra generated by $1\in \Lambda^0(\mathbb{R}^n)$ and $*1\in \Lambda^n(\mathbb{R}^n)$ (and nothing else). However, taking the algebra and calculating the group which leaves it invariant may yield a strictly larger group than the original one. Of course, another way of looking at it is that sometimes a subgroup of $G$ will still have the same algebra of invariant forms. I've been told, although I have no concrete examples, that these subgroups may not even be nested. So, I'm wondering if there are some nice algebraic conditions that govern this correspondence. It would be nice to have some characterization along the lines of the basic facts in algebraic geometry: that for any subset $T\subseteq A$, $Z(T)=Z((T))$; that for any ideal $a\subseteq A$, $I(Z(a))=\sqrt{a}$; etc.

Not an answer, but you should check out Cvitanovic's book, available online at nbi.dk/GroupTheory --- he classifies simple Lie algebras over C in terms of the forms they preserve. – Theo Johnson-Freyd Nov 18 '09 at 16:40

1 Answer

I don't have a full answer yet. Some notation: let's write $\mathrm{Inv}(G)$ for the collection (algebra) of all invariant tensors for $G \subseteq GL_n$, and $\mathrm{Grp}(I)$ for the group of matrices that leave invariant some collection $I$ of tensors. Then a necessary condition for $G = \mathrm{Grp}(\mathrm{Inv}(G))$ is for $G$ to be Zariski-closed in $GL_n$.
(Recall that $GL_n$ is a codimension-one Zariski-closed subset of affine $(n^2 +1)$-space, where the first $n^2$ coordinates are the matrix coefficients, and the last one is the inverse determinant; $GL$ is cut out by the condition that the actual determinant times the last coefficient is unity.) Indeed, $\mathrm{Grp}(I)$ is Zariski-closed for any $I$, because it is the intersection of $\mathrm{Grp}(i)$ over all $i\in I$, and fixing a tensor is a Zariski-closed condition, because $GL_n \to \mathrm{End}(V)$ is polynomial for $V$ any tensor product of the $n$-dimensional representation and its dual. So this rules out things like the irrational line in the torus (diagonal two-by-two matrices with eigenvalues $\exp(x)$ and $\exp(\pi x)$ as $x$ ranges over the field, and $\pi$ is your favorite irrational number). I think that a sufficient condition is for $G$ to be compact. This is because if you know all the tensor invariants, then you know the full subcategory of representations that are tensor-generated by the defining representations, and in fact all the subrepresentations of these, and if $G$ is compact then this category is equivalent to the full category of representations and knows the group by Tannakian arguments. But this is much too strong --- $SL_n(\mathbb{R})$ is not compact, for example.

Thanks! You lose me a little in the last paragraph though, is there an easy way of rephrasing what you're saying without (much) reference to category theory? – Aaron Mazel-Gee Nov 24 '09 at 7:06

All I'm saying in the last paragraph is that for compact (real) groups, if you know all their finite-dimensional representations then you know the group, and that the invariant tensors give enough data. To make this precise requires categories and functors. – Theo Johnson-Freyd Nov 24 '09 at 17:28
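The asymmetry raised in the question already shows up in the smallest case, and can be checked mechanically (a toy sketch added here, not from the thread): on $\mathbb{R}^2$ the top form is $SO(2)$-invariant, but the group preserving it is all of $SL_2$, since a matrix rescales the volume form by its determinant.

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def preserves_area_form(m):
    """g fixes the top form on R^2 iff det(g) == 1, since g.omega = det(g) * omega."""
    return math.isclose(det2(m), 1.0)

theta = 0.7
rot = [[math.cos(theta), -math.sin(theta)],
       [math.sin(theta),  math.cos(theta)]]     # in SO(2)
shear = [[1.0, 2.5],
         [0.0, 1.0]]                            # in SL_2 but not in SO(2)

assert preserves_area_form(rot)
assert preserves_area_form(shear)   # so Grp(Inv(SO(2))) is strictly larger than SO(2)
```

Since $SO(2)$ fixes no nonzero 1-form, its invariant algebra is generated by $1$ and the area form, and the shear shows $\mathrm{Grp}(\mathrm{Inv}(SO(2))) = SL_2 \supsetneq SO(2)$.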
{"url":"http://mathoverflow.net/questions/5934/correspondence-between-invariant-forms-and-lie-groups","timestamp":"2014-04-17T01:16:15Z","content_type":null,"content_length":"54951","record_id":"<urn:uuid:f86ac93f-a3c4-4669-a1c5-35481787e4df>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Differentiation problem (shouldn't be too hard)

April 19th 2007, 12:33 AM #1
Hi, I have two equations that I need to find the max of... apparently this means finding the derivative of the function... but I'm struggling.

Find the max of: a(1-X)^1/2 + (X)^1/2 where a is a positive constant. So I need to find out what X equals.

Find the max of: 4log(100-X) + 2log(X + Y) So I need to find X.

Any help would be much appreciated!

April 19th 2007, 03:38 AM #2
At a local maximum of a differentiable function f we have df/dx = 0.

In this case:

f(x) = a(1-x)^1/2 + (x)^1/2

df/dx = a(1/2)(1-x)^{-1/2}(-1) + (1/2) x^{-1/2}

so if df/dx = 0 we have:

x^{-1/2} = a (1-x)^{-1/2}

1/x = a^2/(1-x)

and if x != 0 and x != 1 (you can check that neither of these gives df/dx = 0, so this is OK), we have:

1 - x = a^2 x

so x = 1/(1+a^2).

Now this can be a maximum, a minimum or a point of inflection, so we need to check that this is a maximum using the second derivative test:

d^2f/dx^2 = -a/[4(1-x)^(3/2)] - 1/[4x^{3/2}]

which is negative at x = 1/(1+a^2) (as a > 0), hence this is a maximum.
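A quick numeric sanity check (added here as an illustration, not part of the original thread): from 1/x = a^2/(1-x) one gets 1 - x = a^2 x, so the critical point is x = 1/(1+a^2), and substituting back, the maximum value simplifies to sqrt(1+a^2).

```python
import math

def f(x, a):
    """f(x) = a*sqrt(1-x) + sqrt(x), for 0 < x < 1 and a > 0."""
    return a * math.sqrt(1 - x) + math.sqrt(x)

for a in (0.5, 1.0, 3.0):
    x_star = 1 / (1 + a * a)
    h = 1e-4
    # the critical point beats nearby points on both sides
    assert f(x_star, a) > f(x_star - h, a)
    assert f(x_star, a) > f(x_star + h, a)
    # and the maximum value is sqrt(1 + a^2)
    assert math.isclose(f(x_star, a), math.sqrt(1 + a * a))
```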
{"url":"http://mathhelpforum.com/calculus/13911-differentiation-problem-shouldn-t-too-hard.html","timestamp":"2014-04-20T11:12:20Z","content_type":null,"content_length":"32543","record_id":"<urn:uuid:cf4e6a5e-75f4-471c-87fd-627c4d550c58>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Modelling two-time velocity correlations for prediction of both Lagrangian and Eulerian statistics

Seminar Room 1, Newton Institute

More information on two-point two-time velocity correlations is needed for a better prediction of turbulent dispersion, as well as of radiated noise using an acoustic analogy. Conceptual aspects will be emphasized rather than applications. Only isotropic turbulence will be considered, although many applications are developed in our team towards strongly anisotropic turbulence, mainly in rotating, stably stratified and/or MHD flows. A simple synthetic model of isotropic turbulence is firstly considered, using a random superposition of Fourier modes: this is the KS (Kinematic Simulation) model following Kraichnan and Fung et al. Unsteadiness of velocity field realizations is mimicked using temporal frequencies, which are expressed in terms of a prescribed energy spectrum and the wavenumber. Even if the orientation of the wavevector is randomly chosen, the link of the temporal frequency to the wavenumber is deterministic in the simpler version of the KS model. Although such a model was relevant for several applications, it is dramatically questioned for the evaluation of two-time velocity correlations. It is shown that spurious oscillations are generated, and that it is necessary to model the temporal frequencies as random Gaussian variables with a standard deviation of the same order of magnitude as their mean value. Further applications to noise radiation are touched upon, in order to illustrate dominant (Lagrangian) `straining' or dominant (Eulerian) `sweeping' effects, according to the scale under consideration. The role of a typical time-scale for the decorrelation of triple velocity correlations is then recalled and discussed in the classical `triadic closures' from the Orszag and Kraichnan legacy, such as EDQNM, DIA and more sophisticated semi-Lagrangian variants.
Finally, these different concepts (diffusive and/or dispersive eddy dampings, straining or sweeping processes) are applied to a recent closure theory of weakly compressible isotropic turbulence. A Gaussian kernel for the decorrelation of triple velocity correlations was shown to give much better results than the classical exponential kernel inherited from EDQNM in the incompressible case. A new explanation is given in accordance with the renormalization of the acoustic wave frequency by a pure random term with zero mean but with a standard deviation of the same order as the eddy damping term formerly used in EDQNM. This analysis can be related to the concept of Kraichnan's random oscillator, recently revisited by Kaneda (2007), with a connection to the much simpler KS problem presented first (see also the monograph `Homogeneous Turbulence Dynamics' by Pierre Sagaut and Claude Cambon, just published by Cambridge University Press).
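A toy one-dimensional sketch of the kinematic-simulation idea described above (illustrative only; the talk's model is three-dimensional and isotropic, and every parameter choice here — the spectrum, the frequency scale, the number of modes — is an assumption of this sketch): velocity is a random superposition of Fourier modes with amplitudes set by a prescribed spectrum, and the temporal frequencies are drawn as Gaussian random variables whose standard deviation is of the same order as their mean, as the talk advocates.

```python
import math
import random

random.seed(0)
K = [2 ** j for j in range(6)]                    # integer wavenumbers 1..32
E = [k ** (-5.0 / 3.0) for k in K]                # prescribed energy spectrum E(k)
amp = [math.sqrt(e) for e in E]                   # mode amplitudes
phase = [random.uniform(0.0, 2 * math.pi) for _ in K]
omega_mean = [k * math.sqrt(k * e) for k, e in zip(K, E)]   # a sweeping-type frequency scale
# random frequencies: Gaussian, standard deviation of the same order as the mean
omega = [random.gauss(m, m) for m in omega_mean]

def u(x, t):
    """One realization of the synthetic (KS-like) velocity field."""
    return sum(a * math.cos(k * x - w * t + p)
               for a, k, w, p in zip(amp, K, omega, phase))

def two_time_corr(tau, n=256):
    """Eulerian two-time correlation R(tau), averaged over one spatial period."""
    xs = [2 * math.pi * i / n for i in range(n)]
    return sum(u(x, 0.0) * u(x, tau) for x in xs) / n

print(two_time_corr(0.0), two_time_corr(5.0))   # R decays away from tau = 0
```

Because the wavenumbers are integers and the spatial average runs over a full period, the cross terms cancel exactly and R(0) equals half the total modal energy.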
{"url":"http://www.newton.ac.uk/programmes/HRT/seminars/2008100117101.html","timestamp":"2014-04-21T07:09:57Z","content_type":null,"content_length":"8705","record_id":"<urn:uuid:cd48e950-efea-4c11-9e64-2c6ac1b22e95>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Problems with Prime Numbers [Archive] - Dynamic Drive Forums

01-13-2008, 01:00 AM
I am a beginning computer science major and I am completely lost with this. Can anyone help me figure out these directions? They are very confusing in some parts. Any suggestions?

Write a program that calculates all prime numbers less than 10,000. The program works as follows:
1. Create a main method, and a method called ArrayList< Integer > sieve( int n ), which returns a list with all the prime numbers less than n. The main method calls sieve( 10,000 ), and prints out the results.
2. Inside of sieve(), do the following:
a. Call method createTrueArray( n ) to create a boolean array of size n.
b. Set cells 0 and 1 of the array to false. Each cell in the array represents a number, and the boolean value represents whether it is prime or not. We start by assuming that all numbers greater than or equal to 2 are prime, and then we remove the ones that are not.
c. Loop over the array, starting at index 2.
d. First, remove all multiples of 2, by setting the values for 4, 6, 8, ... to false.
e. Then, look for the next number still marked true (in this case 3), and set all its multiples to false.
f. Repeat with all numbers that are marked true. When you are done, only prime numbers will be marked true.
g. Call method booleanArrayToIntList() to return a list of integers that contains the primes.
3. Method boolean[] createTrueArray( int n ) creates a boolean array of size n and initializes all its values to true.
4. Method ArrayList< Integer > booleanArrayToIntList( boolean[] booleans ) loops through booleans, and for each true value, it adds the corresponding number to a list. Finally, it returns the list.
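The directions describe the sieve of Eratosthenes. The assignment asks for Java, but the same steps can be sketched compactly in Python with the spec's method names (a sketch of the algorithm, not a solution to hand in):

```python
def create_true_array(n):
    """Step 3: a boolean array of size n, all values initialized to True."""
    return [True] * n

def boolean_array_to_int_list(booleans):
    """Step 4: for each True value, add the corresponding number to a list."""
    return [i for i, is_prime in enumerate(booleans) if is_prime]

def sieve(n):
    """Steps 2a-2g: all primes less than n."""
    flags = create_true_array(n)
    flags[0] = flags[1] = False          # step 2b: 0 and 1 are not prime
    for i in range(2, n):                # step 2c: loop from index 2
        if flags[i]:                     # steps 2e-2f: next number still marked true
            for multiple in range(2 * i, n, i):
                flags[multiple] = False  # cross out its multiples
    return boolean_array_to_int_list(flags)

print(len(sieve(10000)))   # 1229 primes below 10,000
```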
{"url":"http://www.dynamicdrive.com/forums/archive/index.php/t-28388.html","timestamp":"2014-04-20T05:51:31Z","content_type":null,"content_length":"9045","record_id":"<urn:uuid:c662b72d-7573-4bf0-b193-bc4233c9326a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
Epistemic logics for time and space bounded reasoning Seminar Room 1, Newton Institute In standard epistemic logics, an agent's beliefs are modelled as closed under logical consequence. This becomes a problem if we are interested in the evolution of beliefs of a rational agent which requires time and memory to derive consequences of its beliefs. For example, we may be interested in expressing and verifying properties like `given its current beliefs and its memory size, the agent will/will not believe A in the next n time steps', where A may be a consequence of the agent's beliefs. The basic idea of our approach is as follows. The reasoning agent is modelled as a state transition system; each state is a finite set of formulas (agent's beliefs); the maximal size of this set is determined by the agent's memory size. Transitions between states correspond to applications of the agent's inference rules. The language of temporal epistemic logic can be interpreted in such structures in a straightforward way. Several interesting directions for research arise next, for example: - completeness and decidability of epistemic logics for particular kinds of reasoners (determined by their inference rules); - model-checking: using MBP (model-based planner, developed in Trento) to verify memory requirements for a classical reasoner and a rule-based reasoner; - expressive power: adding epistemic operators which allow us to express properties like `the agent only knows m formulas'.
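The state-transition picture described above can be made executable in miniature (my own toy construction, not the authors' system: a single inference rule, modus ponens, and formulas encoded as tuples, with ("p",) an atom and ("->", a, b) an implication): states are finite sets of believed formulas bounded by a memory size, and each transition applies one rule, possibly forgetting a belief to stay within memory.

```python
from itertools import product

def step(state, max_size):
    """All states reachable in one modus ponens application, forgetting if needed."""
    successors = set()
    for a, b in product(state, repeat=2):
        if b[0] == "->" and b[1] == a:          # both a and (a -> c) are believed
            new = b[2]
            if new in state:
                continue
            if len(state) < max_size:
                successors.add(frozenset(state | {new}))
            else:                                # memory full: must forget something
                for drop in state:
                    successors.add(frozenset((state - {drop}) | {new}))
    return successors

def reachable_believing(initial, goal, max_size, steps):
    """Will/can the agent believe `goal` within `steps` transitions?"""
    frontier = {frozenset(initial)}
    if any(goal in s for s in frontier):
        return True
    for _ in range(steps):
        nxt = set().union(*(step(s, max_size) for s in frontier)) if frontier else set()
        if not nxt:
            return False
        frontier = nxt
        if any(goal in s for s in frontier):
            return True
    return False

beliefs = {("p",), ("->", ("p",), ("q",)), ("->", ("q",), ("r",))}
print(reachable_believing(beliefs, ("r",), 4, 2))   # True: derive q, then r
print(reachable_believing(beliefs, ("r",), 4, 1))   # False: r needs two steps
```

Queries of this shape mirror the properties mentioned in the abstract: given current beliefs and a memory size, whether the agent will or will not believe A within n time steps.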
{"url":"http://www.newton.ac.uk/programmes/LAA/seminars/2006012411001.html","timestamp":"2014-04-19T06:59:30Z","content_type":null,"content_length":"5299","record_id":"<urn:uuid:06b540ff-7bbe-4e8a-a191-4ba448aaa3fb>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
Fountain Hills Math Tutors ...I have a passion for teaching, and experience in tutoring math, physics, and Japanese for K-12, college students, and adults. In addition to having good communication skills, excellent organizational skills, and patience, my ability to clearly communicate complex topics and help others overcome ... 12 Subjects: including calculus, trigonometry, statistics, SAT math ...Just a little about my work and research. While my PhD is in Mathematics, my research area has been Mathematics Education. I specifically study the way students learn mathematics and analyze how best to develop instruction. 9 Subjects: including algebra 1, algebra 2, calculus, geometry ...Much of my tutoring has been for ESL students. I have had many years preparing students for the SAT, GRE, GED, ACT and various military service tests. While the ultimate results of tutoring lie with the student, I have been successful in helping to raise test scores significantly. 34 Subjects: including algebra 1, English, SAT math, prealgebra ...I've completed numerous homework assignments, research papers, and scientific/environmental reports via MS Word. I've had specific training on some of the earlier versions, but most of my learning has come from experience. Although my degrees are in geology, I've done quite a bit of scientific writing associated with those studies. 7 Subjects: including algebra 1, English, writing, elementary math ...I customize study skills to each individual student with whom I work. For example, for a younger student who may be more of a visual learner, I would have him/her draw images pertaining to the subject at hand and organize them in a way that facilitates their study. Colors and shapes are importa... 27 Subjects: including algebra 1, prealgebra, geometry, English
{"url":"http://www.algebrahelp.com/Fountain_Hills_math_tutors.jsp","timestamp":"2014-04-17T09:53:29Z","content_type":null,"content_length":"24961","record_id":"<urn:uuid:19ffab50-15c1-4075-887e-e97e4a2c2a23>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to stable distributions for finance

June 10, 2013 By Pat

A few basics about the stable distribution. “The distribution of financial returns made simple” satirized ideas about the statistical distribution of returns, including the stable distribution. As “A tale of two returns” points out, the log return of a long period of time is the sum of the log returns of the shorter periods within the long period. If:
• non-overlapping time periods produced statistically independent returns
• returns over the same time span had the same distribution (IID in textbook lingo),
then returns would have a stable distribution. The phenomenon of volatility clustering means that the distribution changes from time to time. There is close to zero correlation between returns of non-overlapping periods, but it is almost certainly not really zero — just too small and complicated for us to know it. (And zero correlation is not the same as independent.) The markets don’t satisfy the conditions to make returns follow a stable distribution. But we are modeling. Modeling is not about truth, it is about being informative. The stable distribution is not the true distribution, but it might be useful.

Strong points

It is handy that the stable distribution allows us to (assume to) know the distribution over time periods of varying length.

Weak points

The variance of stable distributions (except for the normal) is infinite. This is a problem for options pricing — it says that the value of an option is infinite. This is also a problem for garch, which models the conditional variance at particular time points. The stable distribution model says that volatility is infinite at all times. That’s simple. Simple is good. But too simple is not useful.

The key parameter of the stable distribution is called alpha. The values of alpha go from 0 (not included) to 2 (included). When alpha is 2, it is the normal distribution. The normal is the only stable distribution with a finite variance.
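The additivity fact quoted from “A tale of two returns” is easy to check numerically (a throwaway illustration with made-up prices):

```python
import math

prices = [100.0, 102.0, 99.0, 105.0]     # hypothetical daily closing prices
daily_log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
long_period_return = math.log(prices[-1] / prices[0])

# the log return of the long period is the sum of the shorter-period log returns
assert math.isclose(sum(daily_log_returns), long_period_return)
```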
The Cauchy distribution (Student’s t with 1 degree of freedom) is the stable distribution with alpha equal to 1 (and zero skewness). The smaller alpha, the longer the tail. One dataset of daily exchange rates had an estimate of 1.5 for alpha. To get a sense of what alpha means, Figures 1 through 4 compare the (symmetric) stable distribution with various values of alpha to the normal. The quantiles are shown for 0.001 through 0.999 with increments of 0.001. The line in the plots goes through the quartiles.

Figure 1: Quantile comparison of the normal and the stable distribution with alpha=1.5.
Figure 2: Quantile comparison of the normal and the stable distribution with alpha=1.8.
Figure 3: Quantile comparison of the normal and the stable distribution with alpha=1.9.
Figure 4: Quantile comparison of the normal and the stable distribution with alpha=1.95.

What benefits and demerits have I missed? The stable distribution has a place in finance. That place should probably be smaller than the one envisioned by its keenest adherents.

were we the belly of the beast or the sword that fell…we'll never tell.
from “The Stable Song” by Gregory Alan Isakov

Appendix R

The R language is one place you can find functionality for the stable distribution. There are probably a few packages that include the stable distribution. A package that is dedicated to it is the stabledist package on CRAN. That is what is used for the plots. The function that created Figure 1 is:

P.stab15qqnorm <- function (filename = "stab15qqnorm.png")
{
    if (length(filename)) {
        png(file = filename, width = 512)
        par(mar = c(5, 4, 0, 2) + 0.1)
    }
    s <- (1:999)/1000
    qs <- qstable(s, alpha=1.5, beta=0)
    plot(qnorm(s), qs, col="steelblue",
        xlab="Normal quantiles",
        ylab="Stable quantiles alpha=1.5")
    qqline(qs, datax=FALSE, lwd=2, col="gold")
    if (length(filename)) {
        dev.off()
    }
}

This function can be called with NULL as its argument to see the plot on the screen and correct what is wrong with it.
Then calling it with no arguments creates the file that is actually used in the post.
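A rough Python analogue of the quantile comparison in the figures (an added illustration, not the blog's code; the plots above use R's stabledist): for alpha = 1 — the Cauchy case — the stable quantile function has the closed form tan(pi*(p - 1/2)), and the normal quantiles come from the standard library, so no special package is needed.

```python
import math
from statistics import NormalDist

ps = [i / 1000 for i in range(1, 1000)]                 # 0.001 .. 0.999
cauchy_q = [math.tan(math.pi * (p - 0.5)) for p in ps]  # stable, alpha = 1
normal_q = [NormalDist().inv_cdf(p) for p in ps]

# The long tail is immediate: at p = 0.999 the Cauchy quantile dwarfs the normal one.
print(cauchy_q[-1], normal_q[-1])   # roughly 318 versus roughly 3.1
```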
{"url":"http://www.r-bloggers.com/introduction-to-stable-distributions-for-finance/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+RBloggers+%28R+bloggers%29","timestamp":"2014-04-17T19:11:13Z","content_type":null,"content_length":"41293","record_id":"<urn:uuid:deff5bfa-f9b5-4974-9af2-bc84f00299e3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Accuracy and Precision

Accuracy refers to the closeness of a measured value to a standard or known value. For example, if in lab you obtain a weight measurement of 3.2 kg for a given substance, but the actual or known weight is 10 kg, then your measurement is not accurate. In this case, your measurement is not close to the known value. Precision refers to the closeness of two or more measurements to each other. Using the example above, if you weigh a given substance five times, and get 3.2 kg each time, then your measurement is very precise. Precision is independent of accuracy. You can be very precise but inaccurate, as described above. You can also be accurate but imprecise. For example, if on average, your measurements for a given substance are close to the known value, but the measurements are far from each other, then you have accuracy without precision. A good analogy for understanding accuracy and precision is to imagine a basketball player shooting baskets. If the player shoots with accuracy, his aim will always take the ball close to or into the basket. If the player shoots with precision, his aim will always take the ball to the same location which may or may not be close to the basket. A good player will be both accurate and precise by shooting the ball the same way each time and each time making it in the basket.
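The lab example can be made concrete in a few lines of code (an added illustration; the numbers are the page's hypothetical ones): accuracy shows up as the bias from the known value, precision as the spread of the readings.

```python
from statistics import mean, stdev

known = 10.0                                   # the known weight, in kg
measurements = [3.2, 3.2, 3.2, 3.2, 3.2]       # five identical weighings

bias = abs(mean(measurements) - known)   # accuracy: closeness to the known value
spread = stdev(measurements)             # precision: closeness of readings to each other

print(bias, spread)   # large bias, zero spread: precise but not accurate
```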
{"url":"http://www.ncsu.edu/labwrite/Experimental%20Design/accuracyprecision.htm","timestamp":"2014-04-19T14:43:47Z","content_type":null,"content_length":"1829","record_id":"<urn:uuid:4fa637d0-3e2b-47d7-9afc-2adb813409f6>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
Lafayette, CA Math Tutor Find a Lafayette, CA Math Tutor ...I am an accomplished singer and vocal coach. I teach in the Bel Canto style, passed down to me by my teachers, Neva Rego and Betty Grierson (Honolulu, Hawaii), and Shigeimi Matsumoto (University of Southern California). I have been a professional musician since 1993, and I am trained in music t... 57 Subjects: including algebra 2, SAT math, GRE, trigonometry ...I believe that anybody who puts in the effort can succeed in math. So a large part of my task is to help instill an "I can do it' attitude so that the student will put in the needed effort. I have a B.S. 5 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I am a 25-year veteran of Silicon Valley, where all my experience has been in Marketing. I am proficient in Product Marketing, Channel Marketing, Field Marketing, Marketing Communications, and Solutions Marketing. I teach Business Marketing at the Graduate level to students from all over the world. 39 Subjects: including calculus, logic, discrete math, differential equations ...After achieving at 3.86 cumulative GPA, working as a departmental tutor for Italian, and being admitted into both Phi Beta Kappa and Gamma Kappa Alpha (the national Italian Honor Society), I enrolled in Middlebury College's immersion-based MA program in Italian Literature. My scholastic journey ... 16 Subjects: including algebra 1, reading, Italian, grammar ...Quick math is fun and natural for me - and I love introducing that to others. When I think of Algebra 1, what comes to mind is a hybrid of all maths (obviously, between Pre-Algebra and Algebra 2) - a place where the foundations of math are laid. I think it is integral for this particular segment of math to be carefully experienced and very hands-on, and that is how I would tutor Algebra 1. 
12 Subjects: including algebra 1, algebra 2, calculus, chemistry
{"url":"http://www.purplemath.com/lafayette_ca_math_tutors.php","timestamp":"2014-04-20T21:06:39Z","content_type":null,"content_length":"23905","record_id":"<urn:uuid:2855e4be-765a-4438-ae3e-28fc14165144>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
[Work Included] In ABC, centroid D is on median AM. AD = x + 5 and DM = 2x – 1. Find AM.
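One standard route to the answer (added here as a sketch; it was not part of the extracted thread): the centroid divides each median in a 2:1 ratio, with the longer piece toward the vertex, so AD = 2·DM.

```latex
AD = 2\,DM \;\implies\; x + 5 = 2(2x - 1) \;\implies\; x + 5 = 4x - 2 \;\implies\; x = \tfrac{7}{3}.
```

Then $AD = \tfrac{7}{3} + 5 = \tfrac{22}{3}$, $DM = 2\cdot\tfrac{7}{3} - 1 = \tfrac{11}{3}$, and $AM = AD + DM = \tfrac{33}{3} = 11$.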
{"url":"http://openstudy.com/updates/50e5c8e8e4b058681f3f0c9a","timestamp":"2014-04-20T10:52:25Z","content_type":null,"content_length":"66439","record_id":"<urn:uuid:b3aae9e4-4fb9-417f-88e9-462af0f9f823>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Reply to comment We've heard lots about MathsJams recently, but what exactly are they? Alison Kiddle explains... As someone who has always enjoyed recreational mathematics, I was very excited when MathsJam first appeared on my radar back in 2010. A few people I know on Twitter started talking about a gathering being organised for people who wanted to do maths together. Before I knew it, I found myself joining getting on for 100 other keen maths folk for a hectic weekend of hands-on maths. Since then I have attended two more MathsJam conferences and started the Cambridge MathsJam pub meets. It's easy to spot the MathsJam table... So what is a MathsJam? Let me start by describing the pub MathsJams. These began in London and were the brainchild of Matt Parker, also known as Stand Up Maths. Over time, more maths fans wanted to attend MathsJams near them, and at the time of writing there are more than 30 MathsJams in towns and cities in the UK and further afield. After the 2011 MathsJam conference, I was talked into setting up the Cambridge MathsJam, and we met for the first time in January 2012. You can spot the MathsJam table in the pub because of all the maths paraphernalia lying around. I have a big bag filled with puzzles, Rubik's cubes, playing cards, dominoes, a couple of Martin Gardner books, paper, pens, glue, scissors, origami squares, post-it notes, and anything else that could come in useful. Each month, MathsJammers turn up with ideas for problems to discuss, and if we run out of puzzles we can always turn to Twitter where the #mathsjam hashtag and @mathsjam account are always full of news about what the other MathsJams around the country are discussing. Of course there's also the obligatory games of SET – a wonderful card game where you have to spot sets of three cards before your opponents. And I make a point of teaching every new attendee of the Cambridge MathsJam how to make a dodecahedron out of post-it notes, a useful skill for any mathematician. 
(See James Grime’s excellent tutorial.) The MathsJam conference has now become a highlight of my mathematical year. Held in November, it is a weekend of intense mathematical fun. This year's conference was just held in Crewe and over 100 people, including mathematicians, physicists, engineers, teachers, software developers, maths communicators and many other maths enthusiasts, came along from all over the UK. The format for the conference is very clever: participants offer 5-minute talks, long enough to introduce an idea but not to teach anything new. Each hour-long session of around 10 talks is then followed by a half-hour coffee break where people can choose to explore the maths that interested them, or talk in more detail to the presenters. There is always plenty to look at in the breaks too, from mathematical knitting, to jugglers, to mathematical magic. My talk at this year’s MathsJam was "n things you can do with squared paper", where I shared ideas including tiling with L triominoes, a puzzle where you need to make 100, and a way of shading squares to reveal the Sierpinski Triangle that can also be realised in crochet. Mine was not the only talk to refer to mathematical crafting – the team from Woolly Thoughts showed off some of their beautiful mathematical textile projects. Other talks looked at graphs that swear at you, the radius of a cube, technical problems arising from IP addresses using different bases, how the Egyptians built their pyramids, and many more fascinating topics. The conference dinner was also great fun. I ended up on a table with a few people I knew and a few people I didn't, and before the soup had arrived we were already chatting like old friends sharing terrible mathematical jokes with one another, and talking about the benefits of owning a Raspberry Pi. The tweets on the #mathsjam hashtag all appeared on the big screen at the front of the room, and as more wine was sampled, the bad jokes and mathematical puns got worse. 
In the bar afterwards, there was a good mix of people playing SET, sharing puzzles and performing tricks. I had a good natter with some friends from Twitter while getting on with some mathematical crochet. I was buzzing so much from all the maths I found it very difficult to get to sleep, and ended up tweeting at 2am to say I’d finally found a proof for one of the problems set for me the previous day! I really can't do justice to the pub meets or the conference in just a few words. If this has whetted your appetite and you want to know more, I urge you to visit the MathsJam website and find a pub meet near you, or if there isn't one, to start your own! And if you are on twitter, follow @mathsjam and look out for the #mathsjam hashtag on the penultimate Tuesday of the month for all the mathematical puzzles and discussions your heart could desire! Alison Kiddle is a former mathematics teacher and currently works as Key Stage 4 Coordinator for the NRICH mathematics project.
{"url":"http://plus.maths.org/content/comment/reply/5831","timestamp":"2014-04-17T00:56:22Z","content_type":null,"content_length":"26329","record_id":"<urn:uuid:e24a8701-1325-44af-8e01-852913aae409>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Lectures on Functional Formulation in S-Matrix theory [MatSciRep:47]
dc.contributor.author Rzewuski, J.
dc.date.accessioned 2010-08-09T05:37:42Z
dc.date.available 2010-08-09T05:37:42Z
dc.date.issued 2010-08-09T05:37:42Z
dc.date.submitted 1966
dc.identifier.uri http://hdl.handle.net/123456789/214
dc.description.abstract Collision processes between elementary particles are mathematically described by functions depending on several points of space-time, or of the momentum space. These so-called n-point functions, or rather the generating functionals for these functions, will be the subject of this volume. These lectures are based on consideration of two principles, namely Relativistic Invariance and Microscopic Causality, which form the basis for the theory of collision processes. Since this volume is devoted mainly to the description of functional methods, principles of invariance other than Relativistic Invariance are not considered here. Mathematical preliminaries connected with functionals are contained in Chapter I. Chapter II introduces the notion of functional matrices and the corresponding calculus, in particular with respect to the functional matrices encountered in S-Matrix theory. The notion of the Scattering Matrix (S-Matrix) is introduced in Chapter III. Derivation of various forms of the causality condition and the approximative treatment of the corresponding equations are given in this chapter. Chapter IV introduces the notion of the interaction functional in terms of the S-Matrix. It is shown formally that the conditions of reality and locality for the interaction functional are equivalent to the conditions of generalized unitarity and causality for the S-Matrix. It also derives the canonical formalism from the functional formalism. Generalizations of the functional formalism to charged spin 0 and spin 1/2 particles and to photons are the subject of Chapter V.
The theory developed in the first five chapters is based entirely on the notion of functional derivatives and functional differential operations, although possibly of infinite order (Volterra expansion, definition of the interaction functional). Functional integration is introduced in Chapter VI, along with several related problems such as transformation of variables in functional integrals, functional Fourier transformations, and orthonormal expansions in terms of Hermite functionals. Chapter VII deals with the theory in terms of functional integrals, and some explicit calculations are carried out. Most of the considerations based on the causality principle lead to divergence difficulties. Therefore, Chapter VIII considers a mathematical device by means of which the divergent quantities may be represented as the limits of well-defined quantities. The procedure is based on the theory of higher order differential equations. Several important subjects concerning n-point functions are not treated in this report (e.g., the renormalization theory and dispersion relations), since, as already emphasized, this volume is devoted mainly to the presentation of functional methods. Modern monographs on Quantum Field Theory in the conventional presentation may deal with these topics, which are left out here.
dc.subject Functional Formulation en_US
dc.subject Matscience Report 47 en_US
dc.title Lectures on Functional Formulation in S-Matrix theory[MatSciRep:47] en_US
dc.type.institution Institute of Mathematical Sciences en_US
dc.description.pages 286p. en_US
dc.type.mainsub Mathematics en_US
Files in this item: MR47.pdf 138.8Mb PDF
Math Forum Discussions
Topic: Compiling numerical iterations
Replies: 3   Last Post: Feb 27, 2013 3:04 AM

Re: Compiling numerical iterations
Posted: Feb 27, 2013 3:04 AM

On 2/26/13 at 1:11 AM, cornelius.franz@gmx.net (firlefranz) wrote:

>Thanks a lot! To be honest, some of the commands Ray is using I've
>never seen before. I stopped using Mathematica before version 5 came

The functions Ray used were available in version 5 and even earlier.

>Coming back to Peter's statement of exporting the code from
>Mathematica to C. How can this be done starting from my or Ray's
>code? There is an automated C code generator implemented in
>Mathematica 9, am I right?

Yes, version 9 has a C code generator. But it may or may not do what you want. If your Mathematica code doesn't use any specialized functions, the C code generator will likely be fine for you. But if you are using specialized Mathematica functions, I suspect the generator won't output C code for them.

>Here is what I come up with. It's running in a reasonable time for
>one particle, but for a real statistic ensemble, I have to do it
>over 1,000,000 particles for a long time. Optimizing this or
>(probably better) exporting it to C would hopefully help a lot.

Your code example makes considerable use of For. Here is something to consider:

In[1]:= n = 100000; sum = 0;
        Timing[For[k = 0, k <= n, k++, sum += k]; sum]
Out[2]= {0.156014, 5000050000}

In[3]:= Timing[Plus @@ Range[n]]
Out[3]= {0.019608, 5000050000}

In[4]:= Timing[Total@Range@n]
Out[4]= {0.000449, 5000050000}

All of these get the same result for the sum of the first n integers. The first method is easily exported to C and would run much faster after being compiled.
The last code sample uses a specific built-in Mathematica function that may not export nicely to C. But notice it is ~2.5 orders of magnitude faster than the first example using For. It is very possible to write Mathematica code without using specialized built-in functions that is very portable to C. But that code generally runs much slower than code making use of Mathematica's functional programming paradigm. If you restrict yourself to things that are easily exported to C code, you are really missing out on the true power of Mathematica.

Thread:
2/23/13   Re: Compiling numerical iterations   Dr. Peter Klamser
2/25/13   Re: Compiling numerical iterations   Dana DeLouis
2/27/13   Re: Compiling numerical iterations   Bill Rowe
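The same loop-versus-built-in trade-off can be sketched in Python; this is an analogy, not part of the original thread, and the absolute timings are machine-dependent. The point is that an explicit loop, the built-in `sum`, and the closed form `n*(n+1)//2` all agree, while the higher-level forms are much faster, just as `Total@Range@n` beats `For` in Mathematica.

```python
# Sum of the first n integers three ways: an explicit loop (the
# form that ports most directly to C), the built-in sum(), and the
# closed form n*(n+1)//2. All three agree; the loop is slowest.
import timeit

n = 100_000

def loop_sum(n):
    total = 0
    for k in range(n + 1):
        total += k
    return total

assert loop_sum(n) == sum(range(n + 1)) == n * (n + 1) // 2

# Rough relative timings (values vary by machine):
for fn in (lambda: loop_sum(n),
           lambda: sum(range(n + 1)),
           lambda: n * (n + 1) // 2):
    print(timeit.timeit(fn, number=10))
```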
Summary: On \((\le k)\)-pseudoedges in generalized configurations and the pseudolinear crossing number of \(K_n\). B. Ábrego, J. Balogh, S. Fernández-Merchant, J. Leaños, G. Salazar. July 18, 2006. It is known that every generalized configuration with n points has at least \(3\binom{k+2}{2}\) \((\le k)\)-pseudoedges, and that this bound is tight for \(k \le n/3 - 1\). Here we show that this bound is no longer tight for (any) \(k > n/3 - 1\). As a corollary, we prove that the usual and the pseudolinear (and hence the rectilinear) crossing numbers of the complete graph \(K_n\) are different for every \(n \ge 10\). It has been noted that all known optimal rectilinear drawings of \(K_n\) share a triangular-like property, which we abstract into the concept of 3-decomposability. We give a lower bound for the crossing numbers of all pseudolinear drawings of \(K_n\) that satisfy this property. This bound coincides with the best general lower bound known
MathFiction: The Humans: A Novel (Matt Haig) After Cambridge mathematician Andrew Martin proves the Riemann Hypothesis, he is replaced by an alien whose job it is to prevent news of the discovery from spreading, as it is their belief that humans are not yet ready for the power it would afford them. The alien doppelganger surprises Martin's family and colleagues by being seemingly more human than the emotionless mathematician ever was, and they surprise him by being more worthy than the primitive creatures he had been led to expect. Of course, there is no reason to think that a proof of the Riemann Hypothesis would actually have any dramatic impact on the human race. Haig's claim that it would imply the existence of a "pattern" for the "first hundred thousand or so primes" is inaccurate, probably a misunderstanding based on the fact that the conjecture has been checked for the first hundred thousand or so primes. In fact, as you can read about in greater detail here, the Riemann Hypothesis is the conjecture that the zeroes of a certain function that can be written in terms of the prime numbers are all either negative even integers or complex numbers with real part equal to 1/2. Of course, it is possible that a proof that this is true would provide knowledge that could have some important consequences that we cannot yet imagine, but I would not want anyone to misunderstand and suppose that this question, intriguing as it is to mathematicians, is necessarily of great importance to anyone else. The only immediate consequence I know of for the Riemann Hypothesis itself, which would not excite anyone but an expert number theorist, is that a certain known approximation to the function that counts the number of primes less than a given number would be known to be a slightly more accurate approximation than it might otherwise be. Hardly an application of Earth-shattering consequences. But, then, the mathematics is not really the main focus of the book.
Told from the point of view of the alien who has taken over Andrew Martin's life, it is his growing appreciation of humanity and the things we have made (from peanut butter to pop music) that seems to be the key point.
Chapter 10. Fields “Okay. Your duties are as follows: Get Breen. I don't care how you get him, but get him soon. That faker! He posed for twenty years as a scientist without ever being apprehended. Well, I'm going to do some apprehending that'll make all previous apprehending look like no apprehension at all. You with me?” “Yes,” said Battle, very much confused. “What's that thing you have?” “Piggy-back heat-ray. You transpose the air in its path into an unstable isotope which tends to carry all energy as heat. Then you shoot your juice light, or whatever along the isotopic path and you burn whatever's on the receiving end. You want a few?” “No,” said Battle. “I have my gats. What else have you got for offense and defense?” Underbottam opened a cabinet and proudly waved an arm. “Everything,” he said. “Disintegraters, heat-rays, bombs of every type. And impenetrable shields of energy, massive and portable. What more do I need?” From THE REVERSIBLE REVOLUTIONS by Cecil Corwin, Cosmic Stories, March 1941. Art by Morey, Bok, Kyle, Hunt, Forte. Copyright expired. 10.1 Fields of Force Cutting-edge science readily infiltrates popular culture, though sometimes in garbled form. The Newtonian imagination populated the universe mostly with that nice solid stuff called matter, which was made of little hard balls called atoms. In the early twentieth century, consumers of pulp fiction and popularized science began to hear of a new image of the universe, full of x-rays, N-rays, and Hertzian waves. What they were beginning to soak up through their skins was a drastic revision of Newton's concept of a universe made of chunks of matter which happened to interact via forces. In the newly emerging picture, the universe was made of force, or, to be more technically accurate, of ripples in universal fields of force. Unlike the average reader of Cosmic Stories in 1941, you now possess enough technical background to understand what a “force field” really is. 10.1.1 Why fields?
Time delays in forces exerted at a distance What convinced physicists that they needed this new concept of a field of force? Although we have been dealing mostly with electrical forces, let's start with a magnetic example. (In fact the main reason I've delayed a detailed discussion of magnetism for so long is that mathematical calculations of magnetic effects are handled much more easily with the concept of a field of force.) First a little background leading up to our example. A bar magnet, a, has an axis about which many of the electrons' orbits are oriented. The earth itself is also a magnet, although not a bar-shaped one. The interaction between the earth-magnet and the bar magnet, b, makes them want to line up their axes in opposing directions (in other words such that their electrons rotate in parallel planes, but with one set rotating clockwise and the other counterclockwise as seen looking along the axes). On a smaller scale, any two bar magnets placed near each other will try to align themselves head-to-tail, c. Now we get to the relevant example. It is clear that two people separated by a paper-thin wall could use a pair of bar magnets to signal to each other. Each person would feel her own magnet trying to twist around in response to any rotation performed by the other person's magnet. The practical range of communication would be very short for this setup, but a sensitive electrical apparatus could pick up magnetic signals from much farther away. In fact, this is not so different from what a radio does: the electrons racing up and down the transmitting antenna create forces on the electrons in the distant receiving antenna. (Both magnetic and electric forces are involved in real radio signals, but we don't need to worry about that yet.) A question now naturally arises as to whether there is any time delay in this kind of communication via magnetic (and electric) forces. 
Newton would have thought not, since he conceived of physics in terms of instantaneous action at a distance. We now know, however, that there is such a time delay. If you make a long-distance phone call that is routed through a communications satellite, you should easily be able to detect a delay of about half a second over the signal's round trip of 50,000 miles. Modern measurements have shown that electric, magnetic, and gravitational forces all travel at the speed of light, \(3\times10^8\) m/s. (In fact, we will soon discuss how light itself is made of electricity and magnetism.) If it takes some time for forces to be transmitted through space, then apparently there is some thing that travels through space. The fact that the phenomenon travels outward at the same speed in all directions strongly evokes wave metaphors such as ripples on a pond. More evidence that fields of force are real: they carry energy. The smoking-gun argument for this strange notion of traveling force ripples comes from the fact that they carry energy. First suppose that the person holding the bar magnet on the right decides to reverse hers, resulting in configuration d. She had to do mechanical work to twist it, and if she releases the magnet, energy will be released as it flips back to c. She has apparently stored energy by going from c to d. So far everything is easily explained without the concept of a field of force. But now imagine that the two people start in position c and then simultaneously flip their magnets extremely quickly to position e, keeping them lined up with each other the whole time. Imagine, for the sake of argument, that they can do this so quickly that each magnet is reversed while the force signal from the other is still in transit. (For a more realistic example, we'd have to have two radio antennas, not two magnets, but the magnets are easier to visualize.) 
During the flipping, each magnet is still feeling the forces arising from the way the other magnet used to be oriented. Even though the two magnets stay aligned during the flip, the time delay causes each person to feel resistance as she twists her magnet around. How can this be? Both of them are apparently doing mechanical work, so they must be storing magnetic energy somehow. But in the traditional Newtonian conception of matter interacting via instantaneous forces at a distance, interaction energy arises from the relative positions of objects that are interacting via forces. If the magnets never changed their orientations relative to each other, how can any magnetic energy have been stored? The only possible answer is that the energy must have gone into the magnetic force ripples crisscrossing the space between the magnets. Fields of force apparently carry energy across space, which is strong evidence that they are real things. This is perhaps not as radical an idea to us as it was to our ancestors. We are used to the idea that a radio transmitting antenna consumes a great deal of power, and somehow spews it out into the universe. A person working around such an antenna needs to be careful not to get too close to it, since all that energy can easily cook flesh (a painful phenomenon known as an “RF burn”). 10.1.2 The gravitational field Given that fields of force are real, how do we define, measure, and calculate them? A fruitful metaphor will be the wind patterns experienced by a sailing ship. Wherever the ship goes, it will feel a certain amount of force from the wind, and that force will be in a certain direction. The weather is ever-changing, of course, but for now let's just imagine steady wind patterns. Definitions in physics are operational, i.e., they describe how to measure the thing being defined. 
The ship's captain can measure the wind's “field of force” by going to the location of interest and determining both the direction of the wind and the strength with which it is blowing. Charting all these measurements on a map leads to a depiction of the field of wind force like the one shown in the figure. This is known as the “sea of arrows” method of visualizing a field. Now let's see how these concepts are applied to the fundamental force fields of the universe. We'll start with the gravitational field, which is the easiest to understand. As with the wind patterns, we'll start by imagining gravity as a static field, even though the existence of the tides proves that there are continual changes in the gravity field in our region of space. When the gravitational field was introduced in chapter 2, I avoided discussing its direction explicitly, but defining it is easy enough: we simply go to the location of interest and measure the direction of the gravitational force on an object, such as a weight tied to the end of a string. In chapter 2, I defined the gravitational field in terms of the energy required to raise a unit mass through a unit distance. However, I'm going to give a different definition now, using an approach that will be more easily adapted to electric and magnetic fields. This approach is based on force rather than energy. We couldn't carry out the energy-based definition without dividing by the mass of the object involved, and the same is true for the force-based definition. For example, gravitational forces are weaker on the moon than on the earth, but we cannot specify the strength of gravity simply by giving a certain number of newtons. The number of newtons of gravitational force depends not just on the strength of the local gravitational field but also on the mass of the object on which we're testing gravity, our “test mass.” A boulder on the moon feels a stronger gravitational force than a pebble on the earth. 
We can get around this problem by defining the strength of the gravitational field as the force acting on an object, divided by the object's mass: The gravitational field vector, \(\mathbf{g}\), at any location in space is found by placing a test mass \(m_t\) at that point. The field vector is then given by \(\mathbf{g}=\mathbf{F}/m_t\), where \(\mathbf{F}\) is the gravitational force on the test mass. We now have three ways of representing a gravitational field. The magnitude of the gravitational field near the surface of the earth, for instance, could be written as 9.8 N/kg, 9.8 \(\text{J}/\text {kg}\cdot\text{m}\), or 9.8 \(\text{m}/\text{s}^2\). If we already had two names for it, why invent a third? The main reason is that it prepares us with the right approach for defining other fields. The most subtle point about all this is that the gravitational field tells us about what forces would be exerted on a test mass by the earth, sun, moon, and the rest of the universe, if we inserted a test mass at the point in question. The field still exists at all the places where we didn't measure it. Example 1: Gravitational field of the earth \(\triangleright\) What is the magnitude of the earth's gravitational field, in terms of its mass, \(M\), and the distance \(r\) from its center? \(\triangleright\) Substituting \(|\mathbf{F}|= GMm_{t}/ r^2\) into the definition of the gravitational field, we find \(|\mathbf{g}|= GM/ r^2\). This expression could be used for the field of any spherically symmetric mass distribution, since the equation we assumed for the gravitational force would apply in any such case. Sources and sinks If we make a sea-of-arrows picture of the gravitational fields surrounding the earth, g, the result is evocative of water going down a drain. For this reason, anything that creates an inward-pointing field around itself is called a sink. The earth is a gravitational sink. 
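The formula \(|\mathbf{g}|= GM/ r^2\) from example 1 can be checked numerically. A minimal sketch; the values of \(G\), the earth's mass, and the earth's radius are standard reference values assumed here, not taken from the text:

```python
# Numerical check of |g| = G*M/r^2 from example 1, using standard
# reference values for G and for the earth's mass and mean radius
# (these constants are assumptions, not given in the text).
G = 6.674e-11        # gravitational constant, N*m^2/kg^2
M_earth = 5.972e24   # kg
r_earth = 6.371e6    # m, mean radius

def g_field(M, r):
    """Magnitude of the gravitational field of a spherically
    symmetric mass M at distance r from its center, in N/kg."""
    return G * M / r**2

print(round(g_field(M_earth, r_earth), 1))  # ≈ 9.8 N/kg
```

The same function, evaluated with Jupiter's mass at Io's orbital radius, reproduces the \(0.71\) N/kg contribution used in example 2.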
The term “source” can refer specifically to things that make outward fields, or it can be used as a more general term for both “outies” and “innies.” However confusing the terminology, we know that gravitational fields are only attractive, so we will never find a region of space with an outward-pointing field pattern. Knowledge of the field is interchangeable with knowledge of its sources (at least in the case of a static, unchanging field). If aliens saw the earth's gravitational field pattern they could immediately infer the existence of the planet, and conversely if they knew the mass of the earth they could predict its influence on the surrounding gravitational field. Superposition of fields A very important fact about all fields of force is that when there is more than one source (or sink), the fields add according to the rules of vector addition. The gravitational field certainly will have this property, since it is defined in terms of the force on a test mass, and forces add like vectors. Superposition is an important characteristic of waves, so the superposition property of fields is consistent with the idea that disturbances can propagate outward as waves in a field. Example 2: Reduction in gravity on Io due to Jupiter's gravity \(\triangleright\) The average gravitational field on Jupiter's moon Io is 1.81 N/kg. By how much is this reduced when Jupiter is directly overhead? Io's orbit has a radius of \( 4.22\times10^8\) m, and Jupiter's mass is \( 1.899\times10^{27}\) kg. \(\triangleright\) By the shell theorem, we can treat Jupiter as if its mass was all concentrated at its center, and likewise for Io. If we visit Io and land at the point where Jupiter is overhead, we are on the same line as these two centers, so the whole problem can be treated one-dimensionally, and vector addition is just like scalar addition.
Let's use positive numbers for downward fields (toward the center of Io) and negative for upward ones. Plugging the appropriate data into the expression derived in example 1, we find that Jupiter's contribution to the field is \(- 0.71\) N/kg. Superposition says that we can find the actual gravitational field by adding up the fields created by Io and Jupiter: \(1.81-0.71\) N/kg = 1.1 N/kg. You might think that this reduction would create some spectacular effects, and make Io an exciting tourist destination. Actually you would not detect any difference if you flew from one side of Io to the other. This is because your body and Io both experience Jupiter's gravity, so you follow the same orbital curve through the space around Jupiter. Gravitational waves A source that sits still will create a static field pattern, like a steel ball sitting peacefully on a sheet of rubber. A moving source will create a spreading wave pattern in the field, like a bug thrashing on the surface of a pond. Although we have started with the gravitational field as the simplest example of a static field, stars and planets do more stately gliding than thrashing, so gravitational waves are not easy to detect. Newton's theory of gravity does not describe gravitational waves, but they are predicted by Einstein's general theory of relativity. J.H. Taylor and R.A. Hulse were awarded the Nobel Prize in 1993 for giving indirect evidence that Einstein's waves actually exist. They discovered a pair of exotic, ultra-dense stars called neutron stars orbiting one another very closely, and showed that they were losing orbital energy at the rate predicted by Einstein's theory. A Caltech-MIT collaboration has built a pair of gravitational wave detectors called LIGO to search for more direct evidence of gravitational waves.
Since they are essentially the most sensitive vibration detectors ever made, they are located in quiet rural areas, and signals will be compared between them to make sure that they were not due to passing trucks. The project began operating at full sensitivity in 2005, and is now able to detect a vibration that causes a change of \(10^{-18}\) m in the distance between the mirrors at the ends of the 4-km vacuum tunnels. This is a thousand times less than the size of an atomic nucleus! There is only enough funding to keep the detectors operating for a few more years, so the physicists can only hope that during that time, somewhere in the universe, a sufficiently violent cataclysm will occur to make a detectable gravitational wave. (More accurately, they want the wave to arrive in our solar system during that time, although it will have been produced millions of years before.) 10.1.3 The electric field The definition of the electric field is directly analogous to, and has the same motivation as, the definition of the gravitational field: The electric field vector, \(\mathbf{E}\), at any location in space is found by placing a test charge \(q_t\) at that point. The electric field vector is then given by \(\mathbf{E}=\mathbf{F}/q_t\), where \(\mathbf{F}\) is the electric force on the test charge. Charges are what create electric fields. Unlike gravity, which is always attractive, electricity displays both attraction and repulsion. A positive charge is a source of electric fields, and a negative one is a sink. The most difficult point about the definition of the electric field is that the force on a negative charge is in the opposite direction compared to the field. This follows from the definition, since dividing a vector by a negative number reverses its direction. It's as though we had some objects that fell upward instead of down. Find an equation for the magnitude of the field of a single point charge \(Q\). 
(answer in the back of the PDF version of the book) Example 3: Superposition of electric fields \(\triangleright\) Charges \(q\) and \(- q\) are at a distance \(b\) from each other, as shown in the figure. What is the electric field at the point P, which lies at a third corner of the square? \(\triangleright\) The field at P is the vector sum of the fields that would have been created by the two charges independently. Let positive \(x\) be to the right and let positive \(y\) be up. Negative charges have fields that point at them, so the charge \(-q\) makes a field that points to the right, i.e., has a positive \(x\) component. Using the answer to the self-check, we have \[\begin{align*} E_{-q,x} &= \frac{ kq}{ b^2} \\ E_{-q,y} &= 0 . \end{align*}\] Note that if we had blindly ignored the absolute value signs and plugged in \(- q\) to the equation, we would have incorrectly concluded that the field went to the left. By the Pythagorean theorem, the positive charge is at a distance \(\sqrt{2} b\) from P, so the magnitude of its contribution to the field is \(E= kq/2 b^2\). Positive charges have fields that point away from them, so the field vector is at an angle of 135° counterclockwise from the \(x\) axis. \[\begin{align*} E_{q,x} &= \frac{ kq}{2 b^2} \text{cos}\ 135° \\ &= -\frac{ kq}{2^\text{3/2} b^2} \\ E_{q,y} &= \frac{ kq}{2 b^2} \text{sin}\ 135° \\ &= \frac{ kq}{2^\text{3/2} b^2} \end{align*}\] The total field is \[\begin{align*} E_\text{x} &= \left(1-2^{-\text{3/2}}\right)\frac{ kq}{ b^2} \\ E_{y} &= \frac{ kq}{2^\text{3/2} b^2} \end{align*}\] The simplest set of sources that can occur with electricity but not with gravity is the dipole, consisting of a positive charge and a negative charge with equal magnitudes. More generally, an electric dipole can be any object with an imbalance of positive charge on one side and negative on the other. 
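The vector addition in example 3 can be verified numerically. A minimal sketch, with \(k\), \(q\), and \(b\) all set to 1 in arbitrary units (an assumption made purely for illustration):

```python
# Numerical check of example 3: the field at P is the vector sum of
# the field of -q (distance b, pointing toward the charge, i.e. +x)
# and the field of +q (distance sqrt(2)*b, pointing away, at 135
# degrees). k = q = b = 1 in arbitrary units, purely for illustration.
import math

k = q = b = 1.0

# Contribution of the negative charge: magnitude kq/b^2, along +x.
Ex = k * q / b**2
Ey = 0.0

# Contribution of the positive charge: magnitude kq/(2b^2) at 135 deg.
mag = k * q / (2 * b**2)
Ex += mag * math.cos(math.radians(135))
Ey += mag * math.sin(math.radians(135))

# Compare with the closed-form components derived in the text.
assert math.isclose(Ex, (1 - 2**-1.5) * k * q / b**2)
assert math.isclose(Ey, k * q / (2**1.5 * b**2))
```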
A water molecule, l, is a dipole because the electrons tend to shift away from the hydrogen atoms and onto the oxygen atom. Your microwave oven acts on water molecules with electric fields. Let us imagine what happens if we start with a uniform electric field, m/1, made by some external charges, and then insert a dipole, m/2, consisting of two charges connected by a rigid rod. The dipole disturbs the field pattern, but more important for our present purposes is that it experiences a torque. In this example, the positive charge feels an upward force, but the negative charge is pulled down. The result is that the dipole wants to align itself with the field, m/3. The microwave oven heats food with electrical (and magnetic) waves. The alternation of the torque causes the molecules to wiggle and increase the amount of random motion. The slightly vague definition of a dipole given above can be improved by saying that a dipole is any object that experiences a torque in an electric field. What determines the torque on a dipole placed in an externally created field? Torque depends on the force, the distance from the axis at which the force is applied, and the angle between the force and the line from the axis to the point of application. Let a dipole consisting of charges \(+q\) and \(-q\) separated by a distance \(\ell\) be placed in an external field of magnitude \(|\mathbf{E} |\), at an angle \(\theta\) with respect to the field. The total torque on the dipole is \[\begin{align*} \tau &= \frac{\ell}{2}q|\mathbf{E}|\sin \theta+\frac{\ell}{2}q|\mathbf{E}|\sin \theta \\ &= \ell q|\mathbf{E}|\sin \theta . \end{align*}\] (Note that even though the two forces are in opposite directions, the torques do not cancel, because they are both trying to twist the dipole in the same direction.) The quantity \(\ell q\) is called the dipole moment, notated \(D\).
(More complex dipoles can also be assigned a dipole moment --- they are defined as having the same dipole moment as the two-charge dipole that would experience the same torque.) Employing a little more mathematical elegance, we can define a dipole moment vector, \[\begin{equation*} \mathbf{D} = \sum q_i \mathbf{r}_i , \end{equation*}\] where \(\mathbf{r}_i\) is the position vector of the charge labeled by the index \(i\). We can then write the torque in terms of a vector cross product (page 281), \[\begin{equation*} \boldsymbol{\tau} = \mathbf{D}\times\mathbf{E} . \end{equation*}\] No matter how we notate it, the definition of the dipole moment requires that we choose a point from which we measure all the position vectors of the charges. However, in the commonly encountered special case where the total charge of the object is zero, the dipole moment is the same regardless of this choice. Example 4: Dipole moment of a molecule of NaCl gas \(\triangleright\) In a molecule of NaCl gas, the center-to-center distance between the two atoms is about 0.6 nm. Assuming that the chlorine completely steals one of the sodium's electrons, compute the magnitude of this molecule's dipole moment. \(\triangleright\) The total charge is zero, so it doesn't matter where we choose the origin of our coordinate system. For convenience, let's choose it to be at one of the atoms, so that the charge on that atom doesn't contribute to the dipole moment. The magnitude of the dipole moment is then \[\begin{align*} D &= (6\times10^{-10}\ \text{m})( e) \\ &= (6\times10^{-10}\ \text{m})( 1.6\times10^{-19}\ \text{C}) \\ &= 1\times10^{-28}\ \text{C}\cdot\text{m} \end{align*}\] Example 5: Dipole moments as vectors \(\triangleright\) The horizontal and vertical spacing between the charges in the figure is \(b\). Find the dipole moment. \(\triangleright\) Let the origin of the coordinate system be at the leftmost charge.
\[\begin{align*} \mathbf{D} &= \sum q_i \mathbf{r}_i \\ &= (q)(\mathbf{0})+(-q)(b\hat{\mathbf{x}})+(q)(b\hat{\mathbf{x}}+b\hat{\mathbf{y}})+(-q)(2b\hat{\mathbf{x}}) \\ &= -2bq\hat{\mathbf{x}}+bq\hat{\mathbf{y}} \end{align*}\] Alternative definition of the electric field The behavior of a dipole in an externally created field leads us to an alternative definition of the electric field: The electric field vector, \(\mathbf{E}\), at any location in space is defined by observing the torque exerted on a test dipole \(\mathbf{D}_t\) placed there. The direction of the field is the direction in which the field tends to align a dipole (from \(-\) to +), and the field's magnitude is \(|\mathbf{E}|=\tau/D_t\sin\theta\). In other words, the field vector is the vector that satisfies the equation \(\boldsymbol{\tau} = \mathbf{D}_t\times\mathbf{E}\) for any test dipole \(\mathbf{D}_t\) placed at that point in space. The main reason for introducing a second definition for the same concept is that the magnetic field is most easily defined using a similar approach. Discussion Questions In the definition of the electric field, does the test charge need to be 1 coulomb? Does it need to be positive? Does a charged particle such as an electron or proton feel a force from its own electric field? Is there an electric field surrounding a wall socket that has nothing plugged into it, or a battery that is just sitting on a table? In a flashlight powered by a battery, which way do the electric fields point? What would the fields be like inside the wires? Inside the filament of the bulb? Criticize the following statement: “An electric field can be represented by a sea of arrows showing how current is flowing.” The field of a point charge, \(|\mathbf{E}|=kQ/r^2\), was derived in a self-check. How would the field pattern of a uniformly charged sphere compare with the field of a point charge?
The interior of a perfect electrical conductor in equilibrium must have zero electric field, since otherwise the free charges within it would be drifting in response to the field, and it would not be in equilibrium. What about the field right at the surface of a perfect conductor? Consider the possibility of a field perpendicular to the surface or parallel to it. Small pieces of paper that have not been electrically prepared in any way can be picked up with a charged object such as a charged piece of tape. In our new terminology, we could describe the tape's charge as inducing a dipole moment in the paper. Can a similar technique be used to induce not just a dipole moment but a charge? 10.2 Voltage Related To Field 10.2.1 One dimension Voltage is electrical energy per unit charge, and electric field is force per unit charge. For a particle moving in one dimension, along the \(x\) axis, we can therefore relate voltage and field if we start from the relationship between interaction energy and force, \[\begin{equation*} dU = -F_x dx , \end{equation*}\] and divide by charge, \[\begin{equation*} \frac{dU}{q} = -\frac{F_x}{q}dx , \end{equation*}\] \[\begin{equation*} dV = -E_x dx , \end{equation*}\] \[\begin{equation*} \frac{dV}{dx} = -E_x . \end{equation*}\] The interpretation is that a strong electric field occurs in a region of space where the voltage is rapidly changing. By analogy, a steep hillside is a place on the map where the altitude is rapidly changing. Example 6: Field generated by an electric eel \(\triangleright\) Suppose an electric eel is 1 m long, and generates a voltage difference of 1000 volts between its head and tail. What is the electric field in the water around it? \(\triangleright\) We are only calculating the amount of field, not its direction, so we ignore positive and negative signs.
Subject to the possibly inaccurate assumption of a constant field parallel to the eel's body, we have \[\begin{align*} |\mathbf{E}| &= \frac{dV}{dx} \\ &\approx \frac{\Delta V}{\Delta x} \quad \text{[assumption of constant field]} \\ &= 1000\ \text{V/m} . \end{align*}\]

Example 7: Relating the units of electric field and voltage

From our original definition of the electric field, we expect it to have units of newtons per coulomb, N/C. The example above, however, came out in volts per meter, V/m. Are these inconsistent? Let's reassure ourselves that this all works. In this kind of situation, the best strategy is usually to simplify the more complex units so that they involve only mks units and coulombs. Since voltage is defined as electrical energy per unit charge, it has units of J/C: \[\begin{align*} \frac{\text{V}}{\text{m}} &= \frac{\text{J/C}}{\text{m}} \\ &= \frac{\text{J}}{\text{C}\cdot\text{m}} . \end{align*}\] To connect joules to newtons, we recall that work equals force times distance, so \(\text{J}=\text{N}\cdot\text{m}\), giving \[\begin{align*} \frac{\text{V}}{\text{m}} &= \frac{\text{N}\cdot\text{m}}{\text{C}\cdot\text{m}} \\ &= \frac{\text{N}}{\text{C}} \end{align*}\] As with other such difficulties with electrical units, one quickly begins to recognize frequently occurring combinations.

Example 8: Voltage associated with a point charge

\(\triangleright\) What is the voltage associated with a point charge?

\(\triangleright\) As derived previously in self-check A on page 563, the field is \[\begin{equation*} |\mathbf{E}| = \frac{kQ}{r^2} \end{equation*}\] The difference in voltage between two points on the same radius line is \[\begin{align*} \Delta V &= -\int dV \\ &= -\int E_{x}\, dx \end{align*}\] In the general discussion above, \(x\) was just a generic name for distance traveled along the line from one point to the other, so in this case \(x\) really means \(r\).
\[\begin{align*} \Delta V &= -\int_{r_1}^{r_2} E_{r}\, dr \\ &= -\int_{r_1}^{r_2} \frac{kQ}{r^2}\, dr \\ &= \left.\frac{kQ}{r}\right|_{r_1}^{r_2} \\ &= \frac{kQ}{r_2}-\frac{kQ}{r_1} . \end{align*}\]

The standard convention is to use \(r_1=\infty\) as a reference point, so that the voltage at any distance \(r\) from the charge is \[\begin{equation*} V = \frac{kQ}{r} . \end{equation*}\] The interpretation is that if you bring a positive test charge closer to a positive charge, its electrical energy is increased; if it was released, it would spring away, releasing this as kinetic energy.

Show that you can recover the expression for the field of a point charge by evaluating the derivative \(E_{x}=-dV/dx\). (answer in the back of the PDF version of the book)

10.2.2 Two or three dimensions

The topographical map in figure a suggests a good way to visualize the relationship between field and voltage in two dimensions. Each contour on the map is a line of constant height; some of these are labeled with their elevations in units of feet. Height is related to gravitational energy, so in a gravitational analogy, we can think of height as representing voltage. Where the contour lines are far apart, as in the town, the slope is gentle. Lines close together indicate a steep slope.

If we walk along a straight line, say straight east from the town, then height (voltage) is a function of the east-west coordinate \(x\). Using the usual mathematical definition of the slope, and writing \(V\) for the height in order to remind us of the electrical analogy, the slope along such a line is \(dV/dx\) (the rise over the run). What if everything isn't confined to a straight line? Water flows downhill. Notice how the streams on the map cut perpendicularly through the lines of constant height. It is possible to map voltages in the same way, as shown in figure b.
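The self-check above asks you to recover the field by differentiating \(V=kQ/r\). As a quick numerical cross-check (a sketch, not part of the text; the charge and distance values are arbitrary), a central-difference derivative of \(V\) should reproduce \(kQ/r^2\):

```python
# Sketch: verify numerically that E_r = -dV/dr recovers kQ/r^2
# from the point-charge voltage V = kQ/r. Values of Q and r are arbitrary.
k = 8.99e9   # Coulomb constant, N*m^2/C^2
Q = 1e-9     # hypothetical charge, 1 nC
r = 0.5      # hypothetical distance, m
h = 1e-6     # step size for the central-difference derivative

def V(r):
    return k * Q / r

E_numeric = -(V(r + h) - V(r - h)) / (2 * h)  # -dV/dr, numerically
E_exact = k * Q / r**2

assert abs(E_numeric - E_exact) / E_exact < 1e-6
```

The agreement holds for any \(r\), reflecting that \(E_r=-dV/dr\) is an identity, not a special property of these numbers.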
The electric field is strongest where the constant-voltage curves are closest together, and the electric field vectors always point perpendicular to the constant-voltage curves. The one-dimensional relationship \(E=-dV/dx\) generalizes to three dimensions as follows: \[\begin{align*} E_x &= -\frac{dV}{dx} \\ E_y &= -\frac{dV}{dy} \\ E_z &= -\frac{dV}{dz} \end{align*}\] This can be notated as a gradient (page 215), \[\begin{equation*} \mathbf{E} = -\nabla V , \end{equation*}\] and if we know the field and want to find the voltage, we can use a line integral, \[\begin{equation*} \Delta V = -\int_C \mathbf{E}\cdot d\mathbf{r} , \end{equation*}\] where the quantity inside the integral is a vector dot product.

Imagine that figure a represents voltage rather than height. (a) Consider the stream that starts near the center of the map. Determine the positive and negative signs of \(dV/dx\) and \(dV/dy\), and relate these to the direction of the force that is pushing the current forward against the resistance of friction. (b) If you wanted to find a lot of electric charge on this map, where would you look? (answer in the back of the PDF version of the book)

Figure c shows some examples of ways to visualize field and voltage patterns.

10.3 Fields by Superposition

10.3.1 Electric field of a continuous charge distribution

Charge really comes in discrete chunks, but often it is mathematically convenient to treat a set of charges as if they were like a continuous fluid spread throughout a region of space. For example, a charged metal ball will have charge spread nearly uniformly all over its surface, and for most purposes it will make sense to ignore the fact that this uniformity is broken at the atomic level. The electric field made by such a continuous charge distribution is the sum of the fields created by every part of it. If we let the “parts” become infinitesimally small, we have a sum of infinitely many infinitesimal numbers: an integral.
If it was a discrete sum, as in example 3 on page 564, we would have a total electric field in the \(x\) direction that was the sum of all the \(x\) components of the individual fields, and similarly we'd have sums for the \(y\) and \(z\) components. In the continuous case, we have three integrals. Let's keep it simple by starting with a one-dimensional example.

Example 9: Field of a uniformly charged rod

\(\triangleright\) A rod of length \(L\) has charge \(Q\) spread uniformly along it. Find the electric field at a point a distance \(d\) from the center of the rod, along the rod's axis.

\(\triangleright\) This is a one-dimensional situation, so we really only need to do a single integral representing the total field along the axis. We imagine breaking the rod down into short pieces of length \(dz\), each with charge \(dq\). Since charge is uniformly spread along the rod, we have \(dq=\lambda\,dz\), where \(\lambda=Q/L\) (Greek lambda) is the charge per unit length, in units of coulombs per meter. Since the pieces are infinitesimally short, we can treat them as point charges and use the expression \(k\,dq/r^2\) for their contributions to the field, where \(r=d-z\) is the distance from the charge at \(z\) to the point in which we are interested. \[\begin{align*} E_{z} &= \int \frac{k\,dq}{r^2} \\ &= \int_{-L/2}^{+L/2} \frac{k\lambda\,dz}{r^2} \\ &= k\lambda \int_{-L/2}^{+L/2} \frac{dz}{(d-z)^2} \end{align*}\] The integral can be looked up in a table, or reduced to an elementary form by substituting a new variable for \(d-z\). The result is \[\begin{align*} E_{z} &= k\lambda\left.\left(\frac{1}{d-z}\right)\right|_{-L/2}^{+L/2} \\ &= \frac{kQ}{L} \left(\frac{1}{d-L/2}-\frac{1}{d+L/2}\right) . \end{align*}\] For large values of \(d\), this expression gets smaller for two reasons: (1) the denominators of the fractions become large, and (2) the two fractions become nearly the same, and tend to cancel out.
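The closed-form answer of example 9 can be cross-checked by brute-force numerical integration. This is a sketch with arbitrary, hypothetical values of \(L\), \(Q\), and \(d\) (not from the text); it also confirms that far from the rod the field approaches the point-charge form \(kQ/d^2\):

```python
# Sketch: compare example 9's closed-form on-axis field of a uniformly
# charged rod against direct numerical integration, and check the
# large-d (point-charge) limit. All parameter values are hypothetical.
k, L, Q = 8.99e9, 1.0, 1e-9
lam = Q / L  # charge per unit length, lambda

def E_closed(d):
    return (k * Q / L) * (1.0 / (d - L / 2) - 1.0 / (d + L / 2))

def E_numeric(d, n=20000):
    # midpoint rule for the integral of k*lambda*dz/(d-z)^2 over [-L/2, L/2]
    dz = L / n
    return sum(k * lam * dz / (d - (-L / 2 + (i + 0.5) * dz))**2
               for i in range(n))

d = 2.0
assert abs(E_numeric(d) - E_closed(d)) / E_closed(d) < 1e-6
# far from the rod, the field is nearly that of a point charge, kQ/d^2
d_far = 100.0
assert abs(E_closed(d_far) - k * Q / d_far**2) / (k * Q / d_far**2) < 1e-3
```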
This makes sense, since the field should get weaker as we get farther away from the charge. In fact, the field at large distances must approach \(kQ/d^2\) (homework problem 2). It's also interesting to note that the field becomes infinite at the ends of the rod, but is not infinite on the interior of the rod. Can you explain physically why this happens?

Example 9 was one-dimensional. In the general three-dimensional case, we might have to integrate all three components of the field. However, there is a trick that lets us avoid this much complication. The voltage is a scalar, so we can find the voltage by doing just a single integral, then use the voltage to find the field.

Example 10: Voltage, then field

\(\triangleright\) A rod of length \(L\) is uniformly charged with charge \(Q\). Find the field at a point lying in the midplane of the rod at a distance \(R\).

\(\triangleright\) By symmetry, the field has only a radial component, \(E_R\), pointing directly away from the rod (or toward it for \(Q\lt0\)). The brute-force approach, then, would be to evaluate the integral \(E=\int |d\mathbf{E}|\cos\theta\), where \(d\mathbf{E}\) is the contribution to the field from a charge \(dq\) at some point along the rod, and \(\theta\) is the angle \(d\mathbf{E}\) makes with the radial line. It's easier, however, to find the voltage first, and then find the field from the voltage. Since the voltage is a scalar, we simply integrate the contribution \(dV\) from each charge \(dq\), without even worrying about angles and directions. Let \(z\) be the coordinate that measures distance up and down along the rod, with \(z=0\) at the center of the rod.
Then the distance between a point \(z\) on the rod and the point of interest is \(r=\sqrt{z^2+R^2}\), and we have \[\begin{align*} V &= \int \frac{k\,dq}{r} \\ &= k\lambda \int_{-L/2}^{+L/2}\frac{dz}{r} \\ &= k\lambda \int_{-L/2}^{+L/2}\frac{dz}{\sqrt{z^2+R^2}} \end{align*}\] The integral can be looked up in a table, or evaluated using computer software: \[\begin{align*} V &= \left. k\lambda \ln\left(z+\sqrt{z^2+R^2}\right)\right|_{-L/2}^{+L/2} \\ &= k\lambda \ln\left(\frac{L/2+\sqrt{L^2/4+R^2}}{-L/2+\sqrt{L^2/4+R^2}}\right) \end{align*}\] The expression inside the parentheses can be simplified a little. Leaving out some tedious algebra, the result is \[\begin{equation*} V = 2k\lambda \ln\left(\frac{L}{2R}+\sqrt{1+\frac{L^2}{4R^2}}\right) \end{equation*}\] This can readily be differentiated to find the field: \[\begin{align*} E_{R} &= -\frac{dV}{dR} \\ &= (-2k\lambda)\,\frac{-L/2R^2 +(1/2)(1+L^2/4R^2)^{-1/2}(-L^2/2R^3)}{L/2R+(1+L^2/4R^2)^{1/2}} , \end{align*}\] or, after some simplification, \[\begin{equation*} E_{R} = \frac{k\lambda L}{R^2\sqrt{1+L^2/4R^2}} \end{equation*}\] For large values of \(R\), the square root approaches one, and we have simply \(E_{R}\approx k\lambda L/R^2 = kQ/R^2\). In other words, the field very far away is the same regardless of whether the charge is a point charge or some other shape like a rod. This is intuitively appealing, and doing this kind of check also helps to reassure one that the final result is correct.

The preceding example, although it involved some messy algebra, required only straightforward calculus, and no vector operations at all, because we only had to integrate a scalar function to find the voltage. The next example is one in which we can integrate either the field or the voltage without too much complication.

Example 11: On-axis field of a ring of charge

\(\triangleright\) Find the voltage and field along the axis of a uniformly charged ring.
\(\triangleright\) Integrating the voltage is straightforward. \[\begin{align*} V &= \int \frac{k\,dq}{r} \\ &= k \int \frac{dq}{\sqrt{b^2+z^2}} \\ &= \frac{k}{\sqrt{b^2+z^2}} \int dq \\ &= \frac{kQ}{\sqrt{b^2+z^2}} , \end{align*}\] where \(Q\) is the total charge of the ring. This result could have been derived without calculus, since the distance \(r\) is the same for every point around the ring, i.e., the integrand is a constant.

It would also be straightforward to find the field by differentiating this expression with respect to \(z\) (homework problem 10). Instead, let's see how to find the field by direct integration. By symmetry, the field at the point of interest can have only a component along the axis of symmetry, the \(z\) axis: \[\begin{align*} E_{x} &= 0 \\ E_{y} &= 0 \end{align*}\] To find the field in the \(z\) direction, we integrate the \(z\) components contributed to the field by each infinitesimal part of the ring. \[\begin{align*} E_{z} &= \int dE_z \\ &= \int |d\mathbf{E}|\cos\theta , \end{align*}\] where \(\theta\) is the angle shown in the figure. \[\begin{align*} E_{z} &= \int \frac{k\,dq}{r^2}\cos\theta \\ &= k \int \frac{dq}{b^2+z^2}\cos\theta \end{align*}\] Everything inside the integral is a constant, so we have \[\begin{align*} E_{z} &= \frac{k}{b^2+z^2}\cos\theta \int dq \\ &= \frac{kQ}{b^2+z^2}\cos\theta \\ &= \frac{kQ}{b^2+z^2}\,\frac{z}{r} \\ &= \frac{kQz}{\left(b^2+z^2\right)^{3/2}} \end{align*}\]

In all the examples presented so far, the charge has been confined to a one-dimensional line or curve. Although it is possible, for example, to put charge on a piece of wire, it is more common to encounter practical devices in which the charge is distributed over a two-dimensional surface, as in the flat metal plates used in Thomson's experiments.
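Before moving on to surfaces, the two routes of example 11 can be checked against each other numerically: differentiating the ring's voltage should give the same on-axis field as the direct integration. This sketch uses hypothetical values of \(b\), \(Q\), and \(z\), not values from the text:

```python
# Sketch: check that -dV/dz for the ring voltage V = kQ/sqrt(b^2+z^2)
# matches the directly integrated field E_z = kQz/(b^2+z^2)^(3/2).
# All parameter values are hypothetical.
import math

k, Q, b = 8.99e9, 2e-9, 0.3

def V(z):
    return k * Q / math.sqrt(b**2 + z**2)

def E_z(z):
    return k * Q * z / (b**2 + z**2)**1.5

z, h = 0.4, 1e-6
E_from_V = -(V(z + h) - V(z - h)) / (2 * h)  # central difference for -dV/dz
assert abs(E_from_V - E_z(z)) / E_z(z) < 1e-6
```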
Mathematically, we can approach this type of calculation with the divide-and-conquer technique: slice the surface into lines or curves whose fields we know how to calculate, and then add up the contributions to the field from all these slices. In the limit where the slices are imagined to be infinitesimally thin, we have an integral.

Example 12: Field of a uniformly charged disk

\(\triangleright\) A circular disk is uniformly charged. (The disk must be an insulator; if it was a conductor, then the repulsion of all the charge would cause it to collect more densely near the edge.) Find the field at a point on the axis, at a distance \(z\) from the plane of the disk.

\(\triangleright\) We're given that every part of the disk has the same charge per unit area, so rather than working with \(Q\), the total charge, it will be easier to use the charge per unit area, conventionally notated \(\sigma\) (Greek sigma), \(\sigma=Q/\pi b^2\). Since we already know the field due to a ring of charge, we can solve the problem by slicing the disk into rings, with each ring extending from \(r\) to \(r+dr\). The area of such a ring equals its circumference multiplied by its width, i.e., \(2\pi r\,dr\), so its charge is \(dq=2\pi\sigma r\,dr\), and from the result of example 11, its contribution to the field is \[\begin{align*} dE_{z} &= \frac{kz\,dq}{\left(r^2+z^2\right)^{3/2}} \\ &= \frac{2\pi\sigma kzr\,dr}{\left(r^2+z^2\right)^{3/2}} \end{align*}\] The total field is \[\begin{align*} E_{z} &= \int dE_{z} \\ &= 2\pi\sigma kz \int_0^{b} \frac{r\,dr}{\left(r^2+z^2\right)^{3/2}} \\ &= 2\pi\sigma kz \left. \frac{-1}{\sqrt{r^2+z^2}} \right|_{r=0}^{r=b} \\ &= 2\pi\sigma k\left(1-\frac{z}{\sqrt{b^2+z^2}}\right) \end{align*}\]

The result of example 12 has some interesting properties. First, we note that it was derived on the unspoken assumption of \(z>0\).
By symmetry, the field on the other side of the disk must be equally strong, but in the opposite direction, as shown in figures e and g. Thus there is a discontinuity in the field at \(z=0\). In reality, the disk will have some finite thickness, and the switching over of the field will be rapid, but not discontinuous. At large values of \(z\), i.e., \(z\gg b\), the field rapidly approaches the \(1/r^2\) variation that we expect when we are so far from the disk that the disk's size and shape cannot matter (homework problem 2).

A practical application is the case of a capacitor, f, having two parallel circular plates very close together. In normal operation, the charges on the plates are opposite, so one plate has fields pointing into it and the other one has fields pointing out. In a real capacitor, the plates are a metal conductor, not an insulator, so the charge will tend to arrange itself more densely near the edges, rather than spreading itself uniformly on each plate. Furthermore, we have only calculated the on-axis field in example 12; in the off-axis region, each disk's contribution to the field will be weaker, and it will also point away from the axis a little. But if we are willing to ignore these complications for the sake of a rough analysis, then the fields superimpose as shown in figure f: the fields cancel on the outside of the capacitor, but between the plates the field's value is double that contributed by a single plate.

This cancellation on the outside is a very useful property for a practical capacitor. For instance, if you look at the printed circuit board in a typical piece of consumer electronics, there are many capacitors, often placed fairly close together. If their exterior fields didn't cancel out nicely, then each capacitor would interact with its neighbors in a complicated way, and the behavior of the circuit would depend on the exact physical layout, since the interaction would be stronger or weaker depending on distance.
In reality, a capacitor does create weak external electric fields, but their effects are often negligible, and we can then use the lumped-circuit approximation, which states that each component's behavior depends only on the currents that flow in and out of it, not on the interaction of its fields with the other components.

10.3.2 The field near a charged surface

From a theoretical point of view, there is something even more intriguing about example 12: the magnitude of the field for small values of \(z\) (\(z\ll b\)) is \(E=2\pi k\sigma\), which doesn't depend on \(b\) at all for a fixed value of \(\sigma\). If we made a disk with twice the radius, and covered it with the same number of coulombs per square meter (resulting in a total charge four times as great), the field close to the disk would be unchanged! That is, a flea living near the center of the disk, h, would have no way of determining the size of her flat “planet” by measuring the local field and charge density. (Only by leaping off the surface into outer space would she be able to measure fields that were dependent on \(b\). If she traveled very far, to \(z\gg b\), she would be in the region where the field is well approximated by \(|\mathbf{E}|\approx kQ/z^2=k\pi b^2\sigma/z^2\), which she could solve for \(b\).)

What is the reason for this surprisingly simple behavior of the field? Is it a piece of mathematical trivia, true only in this particular case? What if the shape was a square rather than a circle? In other words, the flea gets no information about the size of the disk from measuring \(E\), since \(E=2\pi k\sigma\), independent of \(b\), but what if she didn't know the shape, either? If the result for a square had some other geometrical factor in front instead of \(2\pi\), then she could tell which shape it was by measuring \(E\). The surprising mathematical fact, however, is that the result for a square, indeed for any shape whatsoever, is \(E=2\pi\sigma k\).
It doesn't even matter whether the surface is flat or warped, or whether the density of charge is different at parts of the surface which are far away compared to the flea's distance above the surface. This universal \(E_\perp=2\pi k\sigma\) field perpendicular to a charged surface can be proved mathematically based on Gauss's law (section 10.6), but we can understand what's happening on qualitative grounds.

Suppose one night, while the flea is asleep, someone adds more surface area, also positively charged, around the outside edge of her disk-shaped world, doubling its radius. The added charge, however, has very little effect on the field in her environment, as long as she stays at low altitudes above the surface. As shown in figure i, the new charge to her west contributes a field, T, that is almost purely “horizontal” (i.e., parallel to the surface) and to the east. It has a negligible upward component, since the angle is so shallow. This new eastward contribution to the field is exactly canceled out by the westward field, S, created by the new charge to her east. There is likewise almost perfect cancellation between any other pair of opposite compass directions.

A similar argument can be made as to the shape-independence of the result, as long as the shape is symmetric. For example, suppose that the next night, the tricky real estate developers decide to add corners to the disk and transform it into a square. Each corner's contribution to the field measured at the center is canceled by the field due to the corner diagonally across from it.

What if the flea goes on a trip away from the center of the disk? The perfect cancellation of the “horizontal” fields contributed by distant charges will no longer occur, but the “vertical” field (i.e., the field perpendicular to the surface) will still be \(E_\perp=2\pi k\sigma\), where \(\sigma\) is the local charge density, since the distant charges can't contribute to the vertical field.
The same result applies if the shape of the surface is asymmetric, and doesn't even have any well-defined geometric center: the component perpendicular to the surface is \(E_\perp=2\pi k\sigma\), but we may have \(E_\parallel\neq0\). All of the above arguments can be made more rigorous by discussing mathematical limits rather than using words like “very small.” There is not much point in giving a rigorous proof here, however, since we will be able to demonstrate this fact as a corollary of Gauss's law in section 10.6. The result is as follows:

At a point lying a distance \(z\) from a charged surface, the component of the electric field perpendicular to the surface obeys \[\begin{equation*} \lim_{z\rightarrow 0} E_\perp = 2\pi k\sigma , \end{equation*}\] where \(\sigma\) is the charge per unit area. This is true regardless of the shape or size of the surface.

Example 13: The field near a point, line, or surface charge

\(\triangleright\) Compare the variation of the electric field with distance, \(d\), for small values of \(d\) in the case of a point charge, an infinite line of charge, and an infinite charged plane.

\(\triangleright\) For a point charge, we have already found \(E\propto d^{-2}\) for the magnitude of the field, where we are now using \(d\) for the quantity we would ordinarily notate as \(r\). This is true for all values of \(d\), not just for small \(d\) --- it has to be that way, because the point charge has no size, so if \(E\) behaved differently for small and large \(d\), there would be no way to decide what \(d\) would have to be small or large relative to.

For a line of charge, the result of example 10 is \[\begin{equation*} E = \frac{k\lambda L}{d^2\sqrt{1+L^2/4d^2}} . \end{equation*}\] In the limit of \(d\ll L\), the quantity inside the square root is dominated by the second term, and we have \(E\propto d^{-1}\). Finally, in the case of a charged surface, the result is simply \(E=2\pi\sigma k\), or \(E\propto d^{0}\).
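The pattern of example 13 can be confirmed numerically by estimating the power-law exponent \(n\) in \(E\propto d^{n}\) from a log-log finite difference at small \(d\). This is a sketch: the lengths, charges, and densities are hypothetical, and the line and disk fields are the closed forms from examples 10 and 12:

```python
# Sketch: estimate the near-field exponent n in E ~ d^n for a point
# charge, a finite line of charge, and a charged disk, by comparing
# E at d and 2d on a log scale. All parameter values are hypothetical.
import math

k, Q = 8.99e9, 1e-9
L = 1.0           # line length, m
lam = Q / L       # line charge density
sigma = 1e-9      # surface charge density, C/m^2
b = 1.0           # disk radius, m

E_point = lambda d: k * Q / d**2
E_line = lambda d: k * lam * L / (d**2 * math.sqrt(1 + L**2 / (4 * d**2)))
E_disk = lambda d: 2 * math.pi * sigma * k * (1 - d / math.sqrt(b**2 + d**2))

def exponent(E, d=1e-4):
    # slope of log E versus log d, i.e., the local power-law exponent
    return (math.log(E(2 * d)) - math.log(E(d))) / math.log(2)

assert abs(exponent(E_point) - (-2)) < 0.01
assert abs(exponent(E_line) - (-1)) < 0.01
assert abs(exponent(E_disk) - 0) < 0.01
```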
Notice the lovely simplicity of the pattern, as shown in figure j. A point is zero-dimensional: it has no length, width, or breadth. A line is one-dimensional, and a surface is two-dimensional. As the dimensionality of the charged object changes from 0 to 1, and then to 2, the exponent in the near-field expression goes from 2 to 1 to 0.

10.4 Energy In Fields

10.4.1 Electric field energy

Fields possess energy, as argued on page 559, but how much energy? The answer can be found using the following elegant approach. We assume that the electric energy contained in an infinitesimal volume of space \(dv\) is given by \(dU_e=f(\mathbf{E})dv\), where \(f\) is some function, which we wish to determine, of the field \(\mathbf{E}\). It might seem that we would have no easy way to determine the function \(f\), but many of the functions we could cook up would violate the symmetry of space. For instance, we could imagine \(f(\mathbf{E})=aE_y\), where \(a\) is some constant with the appropriate units. However, this would violate the symmetry of space, because it would give the \(y\) axis a different status from \(x\) and \(z\). As discussed on page 212, if we wish to calculate a scalar based on some vectors, the dot product is the only way to do it that has the correct symmetry properties. If all we have is one vector, \(\mathbf{E}\), then the only scalar we can form is \(\mathbf{E}\cdot\mathbf{E}\), which is the square of the magnitude of the electric field vector.

In principle, the energy function we are seeking could be proportional to \(\mathbf{E}\cdot\mathbf{E}\), or to any function computed from it, such as \(\sqrt{\mathbf{E}\cdot\mathbf{E}}\) or \((\mathbf{E}\cdot\mathbf{E})^7\). On physical grounds, however, the only possibility that works is \(\mathbf{E}\cdot\mathbf{E}\). Suppose, for instance, that we pull apart two oppositely charged capacitor plates, as shown in figure a.
We are doing work by pulling them apart against the force of their electrical attraction, and this quantity of mechanical work equals the increase in electrical energy, \(U_e\). Using our previous approach to energy, we would have thought of \(U_e\) as a quantity which depended on the distance of the positive and negative charges from each other, but now we're going to imagine \(U_e\) as being stored within the electric field that exists in the space between and around the charges. When the plates are touching, their fields cancel everywhere, and there is zero electrical energy. When they are separated, there is still approximately zero field on the outside, but the field between the plates is nonzero, and holds some energy.

Now suppose we carry out the whole process, but with the plates carrying double their previous charges. Since Coulomb's law involves the product \(q_1q_2\) of two charges, we have quadrupled the force between any given pair of charged particles, and the total attractive force is therefore also four times greater than before. This means that the work done in separating the plates is four times greater, and so is the energy \(U_e\) stored in the field. The field, however, has merely been doubled at any given location: the electric field \(\mathbf{E}_+\) due to the positively charged plate is doubled, and similarly for the contribution \(\mathbf{E}_-\) from the negative one, so the total electric field \(\mathbf{E}_++\mathbf{E}_-\) is also doubled. Thus doubling the field results in an electrical energy which is four times greater, i.e., the energy density must be proportional to the square of the field, \(dU_e\propto(\mathbf{E}\cdot\mathbf{E})dv\). For ease of notation, we write this as \(dU_e\propto E^2dv\), or \(dU_e=aE^2dv\), where \(a\) is a constant of proportionality. Note that we never really made use of any of the details of the geometry of figure a, so the reasoning is of general validity.
In other words, not only is \(dU_e=aE^2dv\) the function that works in this particular case, but there is every reason to believe that it would work in other cases as well. It now remains only to find \(a\). Since the constant must be the same in all situations, we only need to find one example in which we can compute the field and the energy, and then we can determine \(a\). The situation shown in figure a is just about the easiest example to analyze.

We let the square capacitor plates be uniformly covered with charge densities \(+\sigma\) and \(-\sigma\), and we write \(b\) for the lengths of their sides. Let \(h\) be the gap between the plates after they have been separated. We choose \(h\ll b\), so that the field experienced by the negative plate due to the positive plate is \(E_+=2\pi k\sigma\). The charge of the negative plate is \(-\sigma b^2\), so the magnitude of the force attracting it back toward the positive plate is \((\text{force})=(\text{charge})(\text{field})=2\pi k\sigma^2 b^2\). The amount of work done in separating the plates is \((\text{work})=(\text{force})(\text{distance})=2\pi k\sigma^2 b^2h\). This is the amount of energy that has been stored in the field between the two plates, \(U_e=2\pi k\sigma^2 b^2h=2\pi k\sigma^2 v\), where \(v\) is the volume of the region between the plates.

We want to equate this to \(U_e=aE^2v\). (We can write \(U_e\) and \(v\) rather than \(dU_e\) and \(dv\), since the field is constant in the region between the plates.) The field between the plates has contributions from both plates, \(E=E_++E_-=4\pi k\sigma\). (We only used half this value in the computation of the work done on the moving plate, since the moving plate can't make a force on itself. Mathematically, each plate is in a region where its own field is reversing directions, so we can think of its own contribution to the field as being zero within itself.)
We then have \(aE^2v= a\cdot 16\pi^2k^2\sigma^2 \cdot v\), and setting this equal to \(U_e=2\pi k\sigma^2 v\) from the result of the work computation, we find \(a=1/8\pi k\). Our final result is as follows:

The electric energy possessed by an electric field \(\mathbf{E}\) occupying an infinitesimal volume of space \(dv\) is given by \[\begin{equation*} dU_e = \frac{1}{8\pi k}E^2 dv , \end{equation*}\] where \(E^2=\mathbf{E}\cdot\mathbf{E}\) is the square of the magnitude of the electric field.

This is reminiscent of how waves behave: the energy content of a wave is typically proportional to the square of its amplitude. We can think of the quantity \(dU_{e}/dv\) as the energy density due to the electric field, i.e., the number of joules per cubic meter needed in order to create that field.

(a) How does this quantity depend on the components of the field vector, \(E_x\), \(E_y\), and \(E_z\)? (b) Suppose we have a field with \(E_x\neq0\), \(E_y=0\), and \(E_z=0\). What would happen to the energy density if we reversed the sign of \(E_x\)? (answer in the back of the PDF version of the book)

Example 14: A numerical example

\(\triangleright\) A capacitor has plates whose areas are \(10^{-4}\ \text{m}^2\), separated by a gap of \(10^{-5}\) m. A 1.5-volt battery is connected across it. How much energy is sucked out of the battery and stored in the electric field between the plates? (A real capacitor typically has an insulating material between the plates whose molecules interact electrically with the charge in the plates. For this example, we'll assume that there is just a vacuum in between the plates. The plates are also typically rolled up rather than flat.)

\(\triangleright\) To connect this with our previous calculations, we need to find the charge density on the plates in terms of the voltage we were given. Our previous examples were based on the assumption that the gap between the plates was small compared to the size of the plates. Is this valid here?
Well, if the plates were square, then the area of \(10^{-4}\ \text{m}^2\) would imply that their sides were \(10^{-2}\) m in length. This is indeed very large compared to the gap of \(10^{-5}\) m, so this assumption appears to be valid (unless, perhaps, the plates have some very strange, long and skinny shape). Based on this assumption, the field is relatively uniform in the whole volume between the plates, so we can use a single symbol, \(E\), to represent its magnitude, and the relation \(E=dV/dx\) is equivalent to \(E=\Delta V/\Delta x=(\text{1.5 V})/(\text{gap})=1.5\times10^5\ \text{V}/\text{m}\). Since the field is uniform, we can dispense with the calculus, and replace \(dU_{e} = (1/8\pi k)E^2 dv\) with \(U_{e} = (1/8\pi k)E^2 v\). The volume equals the area multiplied by the gap, so we have \[\begin{align*} U_{e} &= (1/8\pi k)E^2(\text{area})(\text{gap})\\ &= \frac{1}{8\pi\times9\times10^9\ \text{N}\!\cdot\!\text{m}^2/\text{C}^2}(1.5\times10^5\ \text{V}/\text{m})^2(10^{-4}\ \text{m}^2)(10^{-5}\ \text{m})\\ &= 1\times10^{-10}\ \text{J} \end{align*}\]

Show that the units in the preceding example really do work out to be joules. (answer in the back of the PDF version of the book)

Example 15: Why \(k\) is on the bottom

It may also seem strange that the constant \(k\) is in the denominator of the equation \(dU_{e} = (1/8\pi k)E^2 dv\). The Coulomb constant \(k\) tells us how strong electric forces are, so shouldn't it be on top? No. Consider, for instance, an alternative universe in which electric forces are twice as strong as in ours. The numerical value of \(k\) is doubled. Because \(k\) is doubled, all the electric field strengths are doubled as well, which quadruples the quantity \(E^2\). In the expression \(E^2/8\pi k\), we've quadrupled something on top and doubled something on the bottom, which makes the energy twice as big. That makes perfect sense.
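The arithmetic of example 14 is easy to reproduce; this sketch simply restates the numbers given in the example:

```python
# Sketch: reproduce the capacitor-energy arithmetic of example 14.
import math

k = 9e9        # Coulomb constant, rounded as in the example
area = 1e-4    # plate area, m^2
gap = 1e-5     # plate separation, m
voltage = 1.5  # battery voltage, V

E = voltage / gap                            # uniform field between plates
U = E**2 / (8 * math.pi * k) * area * gap    # U_e = (1/8*pi*k) E^2 v

assert 0.9e-10 < U < 1.1e-10   # about 1e-10 J, as in the example
```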
Example 16: Potential energy of a pair of opposite charges Imagine taking two opposite charges that were initially far apart and allowing them to come together under the influence of their electrical attraction. According to our old approach, electrical energy is lost because the electric force did positive work as it brought the charges together. (This makes sense because as they come together and accelerate it is their electrical energy that is being lost and converted to kinetic energy.) By the new method, we must ask how the energy stored in the electric field has changed. In the region indicated approximately by the shading in the figure, the superposing fields of the two charges undergo partial cancellation because they are in opposing directions. The energy in the shaded region is reduced by this effect. In the unshaded region, the fields reinforce, and the energy is increased. It would be quite a project to do an actual numerical calculation of the energy gained and lost in the two regions (this is a case where the old method of finding energy gives greater ease of computation), but it is fairly easy to convince oneself that the energy is less when the charges are closer. This is because bringing the charges together shrinks the high-energy unshaded region and enlarges the low-energy shaded region. Example 17: A spherical capacitor \(\triangleright\) A spherical capacitor consists of two concentric spheres of radii \(a\) and \(b\). Find the energy required to charge up the capacitor so that the plates hold charges \(+ q\) and \(- q\). \(\triangleright\) On page 102, I proved that for gravitational forces, the interaction of a spherical shell of mass with other masses outside it is the same as if the shell's mass was concentrated at its center. On the interior of such a shell, the forces cancel out exactly. Since gravity and the electric force both vary as \(1/ r^2\), the same proof carries over immediately to electrical forces.
The magnitude of the outward electric field contributed by the charge \(+ q\) of the central sphere is therefore \[\begin{equation*} |\mathbf{E}_+| = \left\{ \begin{array}{lr} 0, & r\lt a \\ kq/ r^2, & r> a \end{array} \right. , \end{equation*}\] where \(r\) is the distance from the center. Similarly, the magnitude of the inward field contributed by the outside sphere is \[\begin{equation*} |\mathbf{E}_-| = \left\{ \begin{array}{lr} 0, & r\lt b \\ kq/ r^2, & r> b \end{array} \right. . \end{equation*}\] In the region outside the whole capacitor, the two fields are equal in magnitude, but opposite in direction, so they cancel. We then have for the total field \[\begin{equation*} |\mathbf{E}| = \left\{ \begin{array}{lr} 0, & r\lt a \\ kq/ r^2, & a\lt r\lt b \\ 0, & r> b \end{array} \right. , \end{equation*}\] so to calculate the energy, we only need to worry about the region \(a\lt r\lt b\). The energy density in this region is \[\begin{align*} \frac{d U_{e}}{d v} &= \frac{1}{8\pi k} E^2 \\ &= \frac{ kq^2}{8\pi} r^{-4} . \end{align*}\] This expression only depends on \(r\), so the energy density is constant across any sphere of radius \(r\). We can slice the region \(a\lt r\lt b\) into concentric spherical layers, like an onion, and the energy within one such layer, extending from \(r\) to \(r+dr\) is \[\begin{align*} d U_{e} &= \frac{d U_{e}}{d v} dv \\ &= \frac{d U_{e}}{d v} (\text{area of shell}) (\text{thickness of shell}) \\ &= \left(\frac{ kq^2}{8\pi} r^{-4}\right) (4\pi r^2) (dr) \\ &= \frac{ kq^2}{2} r^{-2}dr . \end{align*}\] Integrating over all the layers to find the total energy, we have \[\begin{align*} U_{e} &= \int d U_{e} \\ &= \int_{a}^{b} \frac{ kq^2}{2} r^{-2}dr \\ &= \left.-\frac{ kq^2}{2} r^{-1}\right|_{a}^{b} \\ &= \frac{ kq^2}{2}\left(\frac{1}{a}-\frac{1}{b}\right) \end{align*}\] Discussion Questions The figure shows a positive charge in the gap between two capacitor plates. Compare the energy of the electric fields in the two cases.
Does this agree with what you would have expected based on your knowledge of electrical forces? The figure shows a spherical capacitor. In the text, the energy stored in its electric field is shown to be \[\begin{equation*} U_{e} = \frac{ kq^2}{2}\left(\frac{1}{a}-\frac{1}{b}\right) . \\ \end{equation*}\] What happens if the difference between \(b\) and \(a\) is very small? Does this make sense in terms of the mechanical work needed in order to separate the charges? Does it make sense in terms of the energy stored in the electric field? Should these two energies be added together? Similarly, discuss the cases of \(b\rightarrow\infty\) and \(a\rightarrow0\). Criticize the following statement: “A solenoid makes a charge in the space surrounding it, which dissipates when you release the energy.” In example 16 on page 585, I argued that for the charges shown in the figure, the fields contain less energy when the charges are closer together, because the region of cancellation expanded, while the region of reinforcing fields shrank. Perhaps a simpler approach is to consider the two extreme possibilities: the case where the charges are infinitely far apart, and the one in which they are at zero distance from each other, i.e., right on top of each other. Carry out this reasoning for the case of (1) a positive charge and a negative charge of equal magnitude, (2) two positive charges of equal magnitude, (3) the gravitational energy of two equal masses. 10.4.2 Gravitational field energy Example B depended on the close analogy between electric and gravitational forces. In fact, every argument, proof, and example discussed so far in this section is equally valid as a gravitational example, provided we take into account one fact: only positive mass exists, and the gravitational force between two masses is attractive. This is the opposite of what happens with electrical forces, which are repulsive in the case of two positive charges. 
As a consequence of this, we need to assign a negative energy density to the gravitational field! For a gravitational field, we have \[\begin{equation*} dU_g = -\frac{1}{8\pi G}g^2 dv , \end{equation*}\] where \(g^2=\mathbf{g}\cdot\mathbf{g}\) is the square of the magnitude of the gravitational field. 10.4.3 Magnetic field energy So far we've only touched in passing on the topic of magnetic fields, which we will deal with in detail in chapter 11. Magnetism is an interaction between moving charge and moving charge, i.e., between currents and currents. Since a current has a direction in space,^2 while charge doesn't, we can anticipate that the mathematical rule connecting a magnetic field to its source-currents will have to be completely different from the one relating the electric field to its source-charges. However, if you look carefully at the argument leading to the relation \(dU_e/dv = E^2/8\pi k\), you'll see that these mathematical details were only necessary to the part of the argument in which we fixed the constant of proportionality. To establish \(dU_e/dv \propto E^2\), we only had to use three simple facts: • The field is proportional to the source. • Forces are proportional to fields. • Fields contributed by multiple sources add like vectors. All three of these statements are true for the magnetic field as well, so without knowing anything more specific about magnetic fields --- not even what units are used to measure them! --- we can state with certainty that the energy density in the magnetic field is proportional to the square of the magnitude of the magnetic field. The constant of proportionality is given on p. 665. 10.5 LRC Circuits The long road leading from the light bulb to the computer started with one very important step: the introduction of feedback into electronic circuits.
Although the principle of feedback has been understood and applied to mechanical systems for centuries, and to electrical ones since the early twentieth century, for most of us the word evokes an image of Jimi Hendrix (or some more recent guitar hero) intentionally creating earsplitting screeches, or of the school principal doing the same inadvertently in the auditorium. In the guitar example, the musician stands in front of the amp and turns it up so high that the sound waves coming from the speaker come back to the guitar string and make it shake harder. This is an example of positive feedback: the harder the string vibrates, the stronger the sound waves, and the stronger the sound waves, the harder the string vibrates. The only limit is the power-handling ability of the amplifier. Negative feedback is equally important. Your thermostat, for example, provides negative feedback by kicking the heater off when the house gets warm enough, and by firing it up again when it gets too cold. This causes the house's temperature to oscillate back and forth within a certain range. Just as out-of-control exponential freak-outs are a characteristic behavior of positive-feedback systems, oscillation is typical in cases of negative feedback. You have already studied negative feedback extensively in section 3.3 in the case of a mechanical system, although we didn't call it that. 10.5.1 Capacitance and inductance In a mechanical oscillation, energy is exchanged repetitively between potential and kinetic forms, and may also be siphoned off in the form of heat dissipated by friction. In an electrical circuit, resistors are the circuit elements that dissipate heat. What are the electrical analogs of storing and releasing the potential and kinetic energy of a vibrating object? When you think of energy storage in an electrical circuit, you are likely to imagine a battery, but even rechargeable batteries can only go through 10 or 100 cycles before they wear out.
In addition, batteries are not able to exchange energy on a short enough time scale for most applications. The circuit in a musical synthesizer may be called upon to oscillate thousands of times a second, and your microwave oven operates at gigahertz frequencies. Instead of batteries, we generally use capacitors and inductors to store energy in oscillating circuits. Capacitors, which you've already encountered, store energy in electric fields. An inductor does the same with magnetic fields. A capacitor's energy exists in its surrounding electric fields. It is proportional to the square of the field strength, which is proportional to the charges on the plates. If we assume the plates carry charges that are the same in magnitude, \(+q\) and \(-q\), then the energy stored in the capacitor must be proportional to \(q^2\). For historical reasons, we write the constant of proportionality as \(1/2C\), \[\begin{equation*} U_C = \frac{1}{2C}q^2 . \end{equation*}\] The constant \(C\) is a geometrical property of the capacitor, called its capacitance. Based on this definition, the units of capacitance must be coulombs squared per joule, and this combination is more conveniently abbreviated as the farad, \(1\ \text{F}=1\ \text{C}^2/\text{J}\). “Condenser” is a less formal term for a capacitor. Note that the labels printed on capacitors often use MF to mean \(\mu\text{F}\), even though MF should really be the symbol for megafarads, not microfarads. Confusion doesn't result from this nonstandard notation, since picofarad and microfarad values are the most common, and it wasn't until the 1990's that even millifarad and farad values became available in practical physical sizes. Figure a shows the symbol used in schematics to represent a capacitor. Example 18: A parallel-plate capacitor \(\triangleright\) Suppose a capacitor consists of two parallel metal plates with area \(A\), and the gap between them is \(h\). The gap is small compared to the dimensions of the plates. 
What is the capacitance? \(\triangleright\) Since the plates are metal, the charges on each plate are free to move, and will tend to cluster themselves more densely near the edges due to the mutual repulsion of the other charges in the same plate. However, it turns out that if the gap is small, this is a small effect, so we can get away with assuming uniform charge density on each plate. The result of example 14 then applies, and for the region between the plates, we have \(E=4\pi k\sigma=4\pi kq/ A\) and \(U_{e} = (1/8\pi k) E^2 Ah\). Substituting the first expression into the second, we find \(U_{e}=2\pi kq^2 h / A\). Comparing this to the definition of capacitance, we end up with \(C= A/4\pi kh\). Any current will create a magnetic field, so in fact every current-carrying wire in a circuit acts as an inductor! However, this type of “stray” inductance is typically negligible, just as we can usually ignore the stray resistance of our wires and only take into account the actual resistors. To store any appreciable amount of magnetic energy, one usually uses a coil of wire designed specifically to be an inductor. All the loops' contributions to the magnetic field add together to make a stronger field. Unlike capacitors and resistors, practical inductors are easy to make by hand. One can for instance spool some wire around a short wooden dowel. An inductor like this, in the form of a cylindrical coil of wire, is called a solenoid, c, and a stylized solenoid, d, is the symbol used to represent an inductor in a circuit regardless of its actual geometry. How much energy does an inductor store? The energy density is proportional to the square of the magnetic field strength, which is in turn proportional to the current flowing through the coiled wire, so the energy stored in the inductor must be proportional to \(I^2\). We write \(L/2\) for the constant of proportionality, giving \[\begin{equation*} U_L = \frac{L}{2}I^2 .
\end{equation*}\] As in the definition of capacitance, we have a factor of 1/2, which is purely a matter of definition. The quantity \(L\) is called the inductance of the inductor, and we see that its units must be joules per ampere squared. This clumsy combination of units is more commonly abbreviated as the henry, 1 henry = 1 \(\text{J}/\text{A}^2\). Rather than memorizing this definition, it makes more sense to derive it when needed from the definition of inductance. Many people know inductors simply as “coils,” or “chokes,” and will not understand you if you refer to an “inductor,” but they will still refer to \(L\) as the “inductance,” not the “coilance” or “chokeance!” There is a lumped circuit approximation for inductors, just like the one for capacitors (p. 578). For a capacitor, this means assuming that the electric fields are completely internal, so that components only interact via currents that flow through wires, not due to the physical overlapping of their fields in space. Similarly for an inductor, the lumped circuit approximation is the assumption that the magnetic fields are completely internal. Example 19: Identical inductances in series If two inductors are placed in series, any current that passes through the combined double inductor must pass through both its parts. If we assume the lumped circuit approximation, the two inductors' fields don't interfere with each other, so the energy is doubled for a given current. Thus by the definition of inductance, the inductance is doubled as well. In general, inductances in series add, just like resistances. The same kind of reasoning also shows that the inductance of a solenoid is approximately proportional to its length, assuming the number of turns per unit length is kept constant. (This is only approximately true, because putting two solenoids end-to-end causes the fields just outside their mouths to overlap and add together in a complicated manner. 
In other words, the lumped-circuit approximation may not be very good.) Example 20: Identical capacitances in parallel When two identical capacitances are placed in parallel, any charge deposited at the terminals of the combined double capacitor will divide itself evenly between the two parts. The electric fields surrounding each capacitor will be half the intensity, and therefore store one quarter the energy. Two capacitors, each storing one quarter the energy, give half the total energy storage. Since capacitance is inversely related to energy storage, this implies that identical capacitances in parallel give double the capacitance. In general, capacitances in parallel add. This is unlike the behavior of inductors and resistors, for which series configurations give addition. This is consistent with the result of example 18, which had the capacitance of a single parallel-plate capacitor proportional to the area of the plates. If we have two parallel-plate capacitors, and we combine them in parallel and bring them very close together side by side, we have produced a single capacitor with plates of double the area, and it has approximately double the capacitance, subject to any violation of the lumped-circuit approximation due to the interaction of the fields where the edges of the capacitors are joined together. Inductances in parallel and capacitances in series are explored in homework problems 36 and 33. Example 21: A variable capacitor Figure h/1 shows the construction of a variable capacitor out of two parallel semicircles of metal. One plate is fixed, while the other can be rotated about their common axis with a knob. The opposite charges on the two plates are attracted to one another, and therefore tend to gather in the overlapping area. This overlapping area, then, is the only area that effectively contributes to the capacitance, and turning the knob changes the capacitance. 
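The geometry of example 21 combines neatly with the parallel-plate result of example 18: only the overlapping area contributes, so the capacitance scales linearly with the overlap angle. A sketch with made-up dimensions (the radius, gap, and angles are hypothetical, not from the text):

```python
import math

k = 9e9        # Coulomb constant, N*m^2/C^2
r = 0.02       # hypothetical plate radius, m
h = 1e-4       # hypothetical gap between the plates, m

def variable_capacitance(theta):
    """Capacitance of the two-semicircle capacitor of example 21.

    Only the overlapping region counts, so C = A_overlap/(4*pi*k*h),
    where A_overlap is a wedge of angle theta (0 to pi radians).
    """
    overlap_area = 0.5 * r**2 * theta   # area of a wedge of angle theta
    return overlap_area / (4 * math.pi * k * h)

# Turning the knob from half overlap to full overlap doubles C:
assert math.isclose(variable_capacitance(math.pi),
                    2 * variable_capacitance(math.pi / 2))
print(variable_capacitance(math.pi))  # capacitance at full overlap, farads
```

This is only a sketch under the lumped-circuit idealization; real units of this kind also have fringing fields at the plate edges.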
The simple design can only provide very small capacitance values, so in practice one usually uses a bank of capacitors, wired in parallel, with all the moving parts on the same shaft. Discussion Questions Suppose that two parallel-plate capacitors are wired in parallel, and are placed very close together, side by side, so that the lumped circuit approximation is not very accurate. Will the resulting capacitance be too small, or too big? Could you twist the circuit into a different shape and make the effect be the other way around, or make the effect vanish? How about the case of two inductors in series? Most practical capacitors do not have an air gap or vacuum gap between the plates; instead, they have an insulating substance called a dielectric. We can think of the molecules in this substance as dipoles that are free to rotate (at least a little), but that are not free to move around, since it is a solid. The figure shows a highly stylized and unrealistic way of visualizing this. We imagine that all the dipoles are initially turned sideways, (1), and that as the capacitor is charged, they all respond by turning through a certain angle, (2). (In reality, the scene might be much more random, and the alignment effect much weaker.) For simplicity, imagine inserting just one electric dipole into the vacuum gap. For a given amount of charge on the plates, how does this affect the amount of energy stored in the electric field? How does this affect the capacitance? Now redo the analysis in terms of the mechanical work needed in order to charge up the plates. 10.5.2 Oscillations Figure j shows the simplest possible oscillating circuit. For any useful application it would actually need to include more components. For example, if it was a radio tuner, it would need to be connected to an antenna and an amplifier. Nevertheless, all the essential physics is there. We can analyze it without any sweat or tears whatsoever, simply by constructing an analogy with a mechanical system.
In a mechanical oscillator, k, we have two forms of stored energy, \[\begin{align*} U_{spring} &= \frac{1}{2}kx^2 &(1) \\ K &= \frac{1}{2}mv^2 . &(2) \end{align*}\] In the case of a mechanical oscillator, we have usually assumed a friction force of the form that turns out to give the nicest mathematical results, \(F=-bv\). In the circuit, the dissipation of energy into heat occurs via the resistor, with no mechanical force involved, so in order to make the analogy, we need to restate the role of the friction force in terms of energy. The power dissipated by friction equals the mechanical work it does in a time interval \(dt\), divided by \(dt\), \(P=W/dt=Fdx/dt=Fv=-bv^2\), so \[\begin{equation*} \text{rate of heat dissipation} = -bv^2 . (3) \end{equation*}\] Equation (1) has \(x\) squared, and equations (2) and (3) have \(v\) squared. Because they're squared, the results don't depend on whether these variables are positive or negative. Does this make physical sense? (answer in the back of the PDF version of the book) In the circuit, the stored forms of energy are \[\begin{align*} U_C &= \frac{1}{2C}q^2 &(1') \\ U_L &= \frac{1}{2}LI^2 , &(2') \end{align*}\] and the rate of heat dissipation in the resistor is \[\begin{equation*} \text{rate of heat dissipation} = -RI^2 . (3') \end{equation*}\] Comparing the two sets of equations, we first form analogies between quantities that represent the state of the system at some moment in time: \[\begin{align*} x &\leftrightarrow q\\ v &\leftrightarrow I\\ \end{align*}\] How is \(v\) related mathematically to \(x\)? How is \(I\) connected to \(q\)? Are the two relationships analogous? 
(answer in the back of the PDF version of the book) Next we relate the ones that describe the system's permanent characteristics: \[\begin{align*} k &\leftrightarrow 1/C\\ m &\leftrightarrow L\\ b &\leftrightarrow R\\ \end{align*}\] Since the mechanical system naturally oscillates with a frequency^3 \(\omega\approx\sqrt{k/m}\), we can immediately solve the electrical version by analogy, giving \[\begin{equation*} \omega \approx \frac{1}{\sqrt{LC}} . \end{equation*}\] Since the resistance \(R\) is analogous to \(b\) in the mechanical case, we find that the \(Q\) (quality factor, not charge) of the resonance is inversely proportional to \(R\), and the width of the resonance is directly proportional to \(R\). Example 22: Tuning a radio receiver A radio receiver uses this kind of circuit to pick out the desired station. Since the receiver resonates at a particular frequency, stations whose frequencies are far off will not excite any response in the circuit. The value of \(R\) has to be small enough so that only one station at a time is picked up, but big enough so that the tuner isn't too touchy. The resonant frequency can be tuned by adjusting either \(L\) or \(C\), but variable capacitors are easier to build than variable inductors. Example 23: A numerical calculation The phone company sends more than one conversation at a time over the same wire, which is accomplished by shifting each voice signal into a different range of frequencies during transmission. The number of signals per wire can be maximized by making each range of frequencies (known as a bandwidth) as small as possible. It turns out that only a relatively narrow range of frequencies is necessary in order to make a human voice intelligible, so the phone company filters out all the extreme highs and lows. (This is why your phone voice sounds different from your normal voice.)
\(\triangleright\) If the filter consists of an LRC circuit with a broad resonance centered around 1.0 kHz, and the capacitor is 1 \(\mu\text{F}\) (microfarad), what inductance value must be used? \(\triangleright\) Solving for \(L\), we have \[\begin{align*} L &= \frac{1}{ C\omega^2} \\ &= \frac{1}{(10^{-6}\ \text{F})(2\pi\times10^3\ \text{s}^{-1})^2} \\ &= 2.5\times10^{-2}\ \text{F}^{-1}\text{s}^2 \end{align*}\] Checking that these really are the same units as henries is a little tedious, but it builds character: \[\begin{align*} \text{F}^{-1}\text{s}^2 &= (\text{C}^2/\text{J})^{-1}\text{s}^2 \\ &= \text{J}\cdot\text{C}^{-2}\text{s}^2 \\ &= \text{J}/\text{A}^2 \\ &= \text{H} \end{align*}\] The result is 25 mH (millihenries). This is actually quite a large inductance value, and would require a big, heavy, expensive coil. In fact, there is a trick for making this kind of circuit small and cheap. There is a kind of silicon chip called an op-amp, which, among other things, can be used to simulate the behavior of an inductor. The main limitation of the op-amp is that it is restricted to low-power applications. 10.5.3 Voltage and current What is physically happening in one of these oscillating circuits? Let's first look at the mechanical case, and then draw the analogy to the circuit. For simplicity, let's ignore the existence of damping, so there is no friction in the mechanical oscillator, and no resistance in the electrical one. Suppose we take the mechanical oscillator and pull the mass away from equilibrium, then release it. Since friction tends to resist the spring's force, we might naively expect that having zero friction would allow the mass to leap instantaneously to the equilibrium position. This can't happen, however, because the mass would have to have infinite velocity in order to make such an instantaneous leap.
Infinite velocity would require infinite kinetic energy, but the only kind of energy that is available for conversion to kinetic is the energy stored in the spring, and that is finite, not infinite. At each step on its way back to equilibrium, the mass's velocity is controlled exactly by the amount of the spring's energy that has so far been converted into kinetic energy. After the mass reaches equilibrium, it overshoots due to its own momentum. It performs identical oscillations on both sides of equilibrium, and it never loses amplitude because friction is not available to convert mechanical energy into heat. Now with the electrical oscillator, the analog of position is charge. Pulling the mass away from equilibrium is like depositing charges \(+q\) and \(-q\) on the plates of the capacitor. Since resistance tends to resist the flow of charge, we might imagine that with no friction present, the charge would instantly flow through the inductor (which is, after all, just a piece of wire), and the capacitor would discharge instantly. However, such an instant discharge is impossible, because it would require infinite current for one instant. Infinite current would create infinite magnetic fields surrounding the inductor, and these fields would have infinite energy. Instead, the rate of flow of current is controlled at each instant by the relationship between the amount of energy stored in the magnetic field and the amount of current that must exist in order to have that strong a field. After the capacitor reaches \(q=0\), it overshoots. The circuit has its own kind of electrical “inertia,” because if charge was to stop flowing, there would have to be zero current through the inductor. But the current in the inductor must be related to the amount of energy stored in its magnetic fields. 
When the capacitor is at \(q=0\), all the circuit's energy is in the inductor, so it must therefore have strong magnetic fields surrounding it and quite a bit of current going through it. The only thing that might seem spooky here is that we used to speak as if the current in the inductor caused the magnetic field, but now it sounds as if the field causes the current. Actually this is symptomatic of the elusive nature of cause and effect in physics. It's equally valid to think of the cause and effect relationship in either way. This may seem unsatisfying, however, and for example does not really get at the question of what brings about a voltage difference across the resistor (in the case where the resistance is finite); there must be such a voltage difference, because without one, Ohm's law would predict zero current through the resistor. Voltage, then, is what is really missing from our story so far. Let's start by studying the voltage across a capacitor. Voltage is electrical potential energy per unit charge, so the voltage difference between the two plates of the capacitor is related to the amount by which its energy would increase if we increased the absolute values of the charges on the plates from \(q\) to \(q+dq\): \[\begin{align*} V_C &= (U_{q+dq}-U_q)/dq \\ &= \frac{dU_C}{dq} \\ &= \frac{d}{dq}\left(\frac{1}{2C}q^2\right) \\ &= \frac{q}{C} \end{align*}\] Many books use this as the definition of capacitance. This equation, by the way, probably explains the historical reason why \(C\) was defined so that the energy was inversely proportional to \(C\) for a given value of \(q\): the people who invented the definition were thinking of a capacitor as a device for storing charge rather than energy, and the amount of charge stored for a fixed voltage (the charge “capacity”) is proportional to \(C\).
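The derivative computed above is simple enough to verify numerically: a central difference of \(U_C=q^2/2C\) should land on \(q/C\). A quick sketch with hypothetical values (the 1 μF and 3 μC figures are illustrative, not from the text):

```python
C = 1e-6     # hypothetical 1 uF capacitance
q = 3e-6     # hypothetical charge on the plates, C

def U(q):
    """Energy stored in the capacitor, U = q^2/(2C)."""
    return q**2 / (2 * C)

dq = 1e-12   # small change in charge for the numerical derivative
V_numeric = (U(q + dq) - U(q - dq)) / (2 * dq)   # central difference dU/dq
V_formula = q / C                                 # the result derived above
print(V_numeric, V_formula)  # both ~ 3.0 V
```

Because \(U_C\) is quadratic in \(q\), the central difference is exact up to floating-point roundoff, so the two numbers agree to many digits.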
In the case of an inductor, we know that if there is a steady, constant current flowing through it, then the magnetic field is constant, and so is the amount of energy stored; no energy is being exchanged between the inductor and any other circuit element. But what if the current is changing? The magnetic field is proportional to the current, so a change in one implies a change in the other. For concreteness, let's imagine that the magnetic field and the current are both decreasing. The energy stored in the magnetic field is therefore decreasing, and by conservation of energy, this energy can't just go away --- some other circuit element must be taking energy from the inductor. The simplest example, shown in figure l, is a series circuit consisting of the inductor plus one other circuit element. It doesn't matter what this other circuit element is, so we just call it a black box, but if you like, we can think of it as a resistor, in which case the energy lost by the inductor is being turned into heat by the resistor. The junction rule tells us that both circuit elements have the same current through them, so \(I\) could refer to either one, and likewise the loop rule tells us \(V_{inductor}+V_{black\ box}=0\), so the two voltage drops have the same absolute value, which we can refer to as \(V\). Whatever the black box is, the rate at which it is taking energy from the inductor is given by \(|P|=|IV|\), so \[\begin{align*} |IV| &= \left|\frac{dU_L}{dt}\right| \\ &= \left|\frac{d}{dt}\left( \frac{1}{2}LI^2\right) \right| \\ &= \left|LI\frac{dI}{dt}\right| ,\\ \text{or} |V| &= \left|L\frac{dI}{dt}\right| , \\ \end{align*}\] which in many books is taken to be the definition of inductance. The direction of the voltage drop (plus or minus sign) is such that the inductor resists the change in current. There's one very intriguing thing about this result. 
Suppose, for concreteness, that the black box in figure l is a resistor, and that the inductor's energy is decreasing, and being converted into heat in the resistor. The voltage drop across the resistor indicates that it has an electric field across it, which is driving the current. But where is this electric field coming from? There are no charges anywhere that could be creating it! What we've discovered is one special case of a more general principle, the principle of induction: a changing magnetic field creates an electric field, which is in addition to any electric field created by charges. (The reverse is also true: any electric field that changes over time creates a magnetic field.) Induction forms the basis for such technologies as the generator and the transformer, and ultimately it leads to the existence of light, which is a wave pattern in the electric and magnetic fields. These are all topics for chapter 11, but it's truly remarkable that we could come to this conclusion without yet having learned any details about magnetism. The cartoons in figure m compare electric fields made by charges, 1, to electric fields made by changing magnetic fields, 2-3. In m/1, two physicists are in a room whose ceiling is positively charged and whose floor is negatively charged. The physicist on the bottom throws a positively charged bowling ball into the curved pipe. The physicist at the top uses a radar gun to measure the speed of the ball as it comes out of the pipe. They find that the ball has slowed down by the time it gets to the top. By measuring the change in the ball's kinetic energy, the two physicists are acting just like a voltmeter. They conclude that the top of the tube is at a higher voltage than the bottom of the pipe. A difference in voltage indicates an electric field, and this field is clearly being caused by the charges in the floor and ceiling. In m/2, there are no charges anywhere in the room except for the charged bowling ball.
Moving charges make magnetic fields, so there is a magnetic field surrounding the helical pipe while the ball is moving through it. A magnetic field has been created where there was none before, and that field has energy. Where could the energy have come from? It can only have come from the ball itself, so the ball must be losing kinetic energy. The two physicists working together are again acting as a voltmeter, and again they conclude that there is a voltage difference between the top and bottom of the pipe. This indicates an electric field, but this electric field can't have been created by any charges, because there aren't any in the room. This electric field was created by the change in the magnetic field. The bottom physicist keeps on throwing balls into the pipe, until the pipe is full of balls, m/3, and finally a steady current is established. While the pipe was filling up with balls, the energy in the magnetic field was steadily increasing, and that energy was being stolen from the balls' kinetic energy. But once a steady current is established, the energy in the magnetic field is no longer changing. The balls no longer have to give up energy in order to build up the field, and the physicist at the top finds that the balls are exiting the pipe at full speed again. There is no voltage difference any more. Although there is a current, \(dI/dt\) is zero. Example 24: Ballasts In a gas discharge tube, such as a neon sign, enough voltage is applied to a tube full of gas to ionize some of the atoms in the gas. Once ions have been created, the voltage accelerates them, and they strike other atoms, ionizing them as well and resulting in a chain reaction. This is a spark, like a bolt of lightning. But once the spark starts up, the device begins to act as though it has no resistance: more and more current flows, without the need to apply any more voltage. The power, \(P=IV\), would grow without limit, and the tube would burn itself out. 
The simplest solution is to connect an inductor, known as the “ballast,” in series with the tube, and run the whole thing on an AC voltage. During each cycle, as the voltage reaches the point where the chain reaction begins, there is a surge of current, but the inductor resists such a sudden change of current, and the energy that would otherwise have burned out the bulb is instead channeled into building a magnetic field. A common household fluorescent lightbulb consists of a gas discharge tube in which the glass is coated with a fluorescent material. The gas in the tube emits ultraviolet light, which is absorbed by the coating, and the coating then glows in the visible spectrum. Until recently, it was common for a fluorescent light's ballast to be a simple inductor, and for the whole device to be operated at the 60 Hz frequency of the electrical power lines. This caused the lights to flicker annoyingly at 120 Hz, and could also cause an audible hum, since the magnetic field surrounding the inductor could exert mechanical forces on things. These days, the trend is toward using a solid-state circuit that mimics the behavior of an inductor, but at a frequency in the kilohertz range, eliminating the flicker and hum. Modern compact fluorescent bulbs have electronic ballasts built into their bases, so they can be used as plug-in replacements for incandescent bulbs. A compact fluorescent bulb uses about 1/4 the electricity of an incandescent bulb, lasts ten times longer, and saves $30 worth of electricity over its lifetime. Discussion Question What happens when the physicist at the bottom in figure m/3 starts getting tired, and decreases the current? 10.5.4 Decay Up until now I've soft-pedaled the fact that by changing the characteristics of an oscillator, it is possible to produce non-oscillatory behavior. For example, imagine taking the mass-on-a-spring system and making the spring weaker and weaker.
In the limit of small \(k\), it's as though there was no spring whatsoever, and the behavior of the system is that if you kick the mass, it simply starts slowing down. For friction proportional to \(v\), as we've been assuming, the result is that the velocity approaches zero, but never actually reaches zero. This is unrealistic for the mechanical oscillator, which will not have vanishing friction at low velocities, but it is quite realistic in the case of an electrical circuit, for which the voltage drop across the resistor really does approach zero as the current approaches zero. We do not even have to reduce \(k\) to exactly zero in order to get non-oscillatory behavior. There is actually a finite, critical value below which the behavior changes, so that the mass never even makes it through one cycle. This is the case of overdamping, discussed on page 186. Electrical circuits can exhibit all the same behavior. For simplicity we will analyze only the cases of LRC circuits with \(L=0\) or \(C=0\). The RC circuit We first analyze the RC circuit, o. In reality one would have to “kick” the circuit, for example by briefly inserting a battery, in order to get any interesting behavior. We start with Ohm's law and the equation for the voltage across a capacitor: \[\begin{align*} V_R &= IR \\ V_C &= q/C \end{align*}\] The loop rule tells us \[\begin{equation*} V_R + V_C = 0 , \end{equation*}\] and combining the three equations results in a relationship between \(q\) and \(I\): \[\begin{equation*} I = -\frac{1}{RC}q \end{equation*}\] The negative sign tells us that the current tends to reduce the charge on the capacitor, i.e., to discharge it. It makes sense that the current is proportional to \(q\) : if \(q\) is large, then the attractive forces between the \(+q\) and \(-q\) charges on the plates of the capacitor are large, and charges will flow more quickly through the resistor in order to reunite. 
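The relation \(I=dq/dt=-q/RC\) can be explored numerically even before we solve it exactly. The sketch below (Python, with invented component values) steps the charge forward in time; the output shows that over each interval of length \(RC\) the charge falls by the same factor, the signature of exponential decay.

```python
# Numerically integrating dq/dt = -q/(RC) with invented component values.
# The ratios printed at the end show that in each time interval of length
# RC, the charge drops by the same factor -- exponential decay.
R, C = 1.0e3, 1.0e-6          # ohms and farads, so RC = 1 ms
dt = 1.0e-7                   # integration step, seconds
q = 1.0e-6                    # initial charge, coulombs

checkpoints = []
steps_per_tau = int(R * C / dt)
for k in range(3):                    # integrate out to three time constants
    for _ in range(steps_per_tau):
        q += dt * (-q / (R * C))      # forward-Euler step of dq/dt = -q/RC
    checkpoints.append(q)

ratios = [checkpoints[i + 1] / checkpoints[i] for i in range(2)]
print(ratios)  # both ratios are close to 1/e = 0.368...
```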
If there was zero charge on the capacitor plates, there would be no reason for current to flow. Since amperes, the unit of current, are the same as coulombs per second, it appears that the quantity \(RC\) must have units of seconds, and you can check for yourself that this is correct. \(RC\) is therefore referred to as the time constant of the circuit. How exactly do \(I\) and \(q\) vary with time? Rewriting \(I\) as \(dq/dt\), we have \[\begin{equation*} \frac{dq}{dt} = -\frac{1}{RC}q . \end{equation*}\] We need a function \(q(t)\) whose derivative equals itself, but multiplied by a negative constant. A function of the form \(ae^t\), where \(e=2.718...\) is the base of natural logarithms, is the only one that has its derivative equal to itself, and \(ae^{bt}\) has its derivative equal to itself multiplied by \(b\). Thus our solution is \[\begin{equation*} q = q_\text{o}\exp\left(-\frac{t}{RC}\right) . \end{equation*}\] The RL circuit The RL circuit, q, can be attacked by similar methods, and it can easily be shown that it gives \[\begin{equation*} I = I_\text{o}\exp\left(-\frac{R}{L}t\right) . \end{equation*}\] The RL time constant equals \(L/R\). Example 25: Death by solenoid; spark plugs When we suddenly break an RL circuit, what will happen? It might seem that we're faced with a paradox, since we only have two forms of energy, magnetic energy and heat, and if the current stops suddenly, the magnetic field must collapse suddenly. But where does the lost magnetic energy go? It can't go into resistive heating of the resistor, because the circuit has now been broken, and current can't flow! The way out of this conundrum is to recognize that the open gap in the circuit has a resistance which is large, but not infinite. This large resistance causes the RL time constant \(L/ R\) to be very small. The current thus continues to flow for a very brief time, and flows straight across the air gap where the circuit has been opened. In other words, there is a spark! 
We can determine based on several different lines of reasoning that the voltage drop from one end of the spark to the other must be very large. First, the air's resistance is large, so \(V= IR\) requires a large voltage. We can also reason that all the energy in the magnetic field is being dissipated in a short time, so the power dissipated in the spark, \(P= IV\), is large, and this requires a large value of \(V\). (\(I\) isn't large --- it is decreasing from its initial value.) Yet a third way to reach the same result is to consider the equation \(V_L=L\,dI/dt\) : since the time constant is short, the time derivative \(dI/dt\) is large. This is exactly how a car's spark plugs work. Another application is to electrical safety: it can be dangerous to break an inductive circuit suddenly, because so much energy is released in a short time. There is also no guarantee that the spark will discharge across the air gap; it might go through your body instead, since your body might have a lower resistance. Example 26: A spark-gap radio transmitter shows a primitive type of radio transmitter, called a spark gap transmitter, used to send Morse code around the turn of the twentieth century. The high voltage source, V, is typically about 10,000 volts. When the telegraph switch, S, is closed, the RC circuit on the left starts charging up. An increasing voltage difference develops between the electrodes of the spark gap, G. When this voltage difference gets large enough, the electric field in the air between the electrodes causes a spark, partially discharging the RC circuit, but charging the LC circuit on the right. The LC circuit then oscillates at its resonant frequency (typically about 1 MHz), but the energy of these oscillations is rapidly radiated away by the antenna, A, which sends out radio waves (chapter 11). Discussion Questions A gopher gnaws through one of the wires in the DC lighting system in your front yard, and the lights turn off.
At the instant when the circuit becomes open, we can consider the bare ends of the wire to be like the plates of a capacitor, with an air gap (or gopher gap) between them. What kind of capacitance value are we talking about here? What would this tell you about the \(RC\) time constant? 10.5.5 Review of complex numbers For a more detailed treatment of complex numbers, see ch. 3 of James Nearing's free book at We assume there is a number, \(i\), such that \(i^2=-1\). The square roots of \(-1\) are then \(i\) and \(-i\). (In electrical engineering work, where \(i\) stands for current, \(j\) is sometimes used instead.) This gives rise to a number system, called the complex numbers, containing the real numbers as a subset. Any complex number \(z\) can be written in the form \(z=a+bi\), where \(a\) and \(b\) are real, and \(a\) and \(b\) are then referred to as the real and imaginary parts of \(z\). A number with a zero real part is called an imaginary number. The complex numbers can be visualized as a plane, with the real number line placed horizontally like the \(x\) axis of the familiar \(x-y\) plane, and the imaginary numbers running along the \(y\) axis. The complex numbers are complete in a way that the real numbers aren't: every nonzero complex number has two square roots. For example, 1 is a real number, so it is also a member of the complex numbers, and its square roots are \(-1 \) and 1. Likewise, \(-1\) has square roots \(i\) and \(-i\), and the number \(i\) has square roots \(1/\sqrt{2}+i/\sqrt{2}\) and \(-1/\sqrt{2}-i/\sqrt{2}\). Complex numbers can be added and subtracted by adding or subtracting their real and imaginary parts. Geometrically, this is the same as vector addition. The complex numbers \(a+bi\) and \(a-bi\), lying at equal distances above and below the real axis, are called complex conjugates. The results of the quadratic formula are either both real, or complex conjugates of each other. 
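These claims about complex arithmetic are easy to experiment with in any programming language that has complex numbers built in. Here is a sketch in Python, which writes \(j\) where the text writes \(i\):

```python
import cmath

# Experimenting with complex arithmetic using Python's built-in complex type.
i = 1j
print(i * i == -1)                        # i^2 = -1

root = 1 / 2 ** 0.5 + i / 2 ** 0.5        # claimed square root of i
print(abs(root ** 2 - i) < 1e-12)         # it checks out
print(abs((-root) ** 2 - i) < 1e-12)      # and so does its negative

# Addition acts on real and imaginary parts separately, like vector addition:
print((3 + 4j) + (1 - 2j) == 4 + 2j)

# Multiplication multiplies the magnitudes and adds the angles:
a, b = 3 + 4j, 1 - 2j
print(abs(abs(a * b) - abs(a) * abs(b)) < 1e-9)
print(abs(cmath.phase(a * b) - (cmath.phase(a) + cmath.phase(b))) < 1e-9)
```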
The complex conjugate of a number \(z\) is notated as \(\bar{z}\) or \(z^*\). The complex numbers obey all the same rules of arithmetic as the reals, except that they can't be ordered along a single line. That is, it's not possible to say whether one complex number is greater than another. We can compare them in terms of their magnitudes (their distances from the origin), but two distinct complex numbers may have the same magnitude, so, for example, we can't say whether \(1\) is greater than \(i\) or \(i\) is greater than \(1\). Example 27: A square root of \(i\) \(\triangleright\) Prove that \(1/\sqrt{2}+i/\sqrt{2}\) is a square root of \(i\). \(\triangleright\) Our proof can use any ordinary rules of arithmetic, except for ordering. \[\begin{align*} \left(\frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}}\right)^2 & = \frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}} +\frac{1}{\sqrt{2}}\cdot\frac{i}{\sqrt{2}} +\frac{i}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}} +\frac{i}{\sqrt{2}}\cdot\frac{i}{\sqrt{2}} \\ &= \frac{1}{2}(1+i+i-1) \\ &= i \end{align*}\] Example 27 showed one method of multiplying complex numbers. However, there is another nice interpretation of complex multiplication. We define the argument of a complex number as its angle in the complex plane, measured counterclockwise from the positive real axis. Multiplying two complex numbers then corresponds to multiplying their magnitudes, and adding their arguments. Using this interpretation of multiplication, how could you find the square roots of a complex number? (answer in the back of the PDF version of the book) Example 28: An identity The magnitude \(|z|\) of a complex number \(z\) obeys the identity \(|z|^2=z\bar{z}\). To prove this, we first note that \(\bar{z}\) has the same magnitude as \(z\), since flipping it to the other side of the real axis doesn't change its distance from the origin.
Multiplying \(z\) by \(\bar{z}\) gives a result whose magnitude is found by multiplying their magnitudes, so the magnitude of \(z\bar{z}\) must therefore equal \(|z|^2\). Now we just have to prove that \(z\bar{z}\) is a positive real number. But if, for example, \(z\) lies counterclockwise from the real axis, then \(\bar{z}\) lies clockwise from it. If \(z\) has a positive argument, then \(\bar{z}\) has a negative one, or vice-versa. The sum of their arguments is therefore zero, so the result has an argument of zero, and is on the positive real axis. This whole system was built up in order to make every number have square roots. What about cube roots, fourth roots, and so on? Does it get even more weird when you want to do those as well? No. The complex number system we've already discussed is sufficient to handle all of them. The nicest way of thinking about it is in terms of roots of polynomials. In the real number system, the polynomial \(x^2-1\) has two roots, i.e., two values of \(x\) (plus and minus one) that we can plug in to the polynomial and get zero. Because it has these two real roots, we can rewrite the polynomial as \((x-1)(x+1)\). However, the polynomial \(x^2+1\) has no real roots. It's ugly that in the real number system, some second-order polynomials have two roots, and can be factored, while others can't. In the complex number system, they all can. For instance, \(x^2+1\) has roots \(i\) and \(-i\), and can be factored as \((x-i)(x+i)\). In general, the fundamental theorem of algebra states that in the complex number system, any nth-order polynomial can be factored completely into \(n\) linear factors, and we can also say that it has \(n\) complex roots, with the understanding that some of the roots may be the same. For instance, the fourth-order polynomial \(x^4+x^2\) can be factored as \((x-i)(x+i)(x-0)(x-0)\), and we say that it has four roots, \(i\), \(-i\), 0, and 0, two of which happen to be the same.
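This factorization can be spot-checked numerically. The sketch below (plain Python, no libraries) evaluates \(x^4+x^2\) and the claimed factorization \((x-i)(x+i)(x-0)(x-0)\) at a few points, and confirms that each of the four listed roots really does give zero:

```python
# Spot-checking the factorization x^4 + x^2 = (x - i)(x + i)(x - 0)(x - 0)
# by evaluating both sides at a few sample points.
def poly(x):
    return x ** 4 + x ** 2

def factored(x):
    return (x - 1j) * (x + 1j) * x * x

for x in [2.0, -3.5, 1 + 1j, 0.25 - 2j]:     # arbitrary sample points
    print(abs(poly(x) - factored(x)) < 1e-9)

# Each of the four roots (i, -i, 0, 0) gives zero:
for r in [1j, -1j, 0, 0]:
    print(abs(poly(r)) < 1e-12)
```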
This is a sensible way to think about it, because in real life, numbers are always approximations anyway, and if we make tiny, random changes to the coefficients of this polynomial, it will have four distinct roots, of which two just happen to be very close to zero. Discussion Questions Find \(\arg i\), \(\arg(-i)\), and \(\arg 37\), where \(\arg z\) denotes the argument of the complex number \(z\). Visualize the following multiplications in the complex plane using the interpretation of multiplication in terms of multiplying magnitudes and adding arguments: \((i)(i)=-1\), \((i)(-i)=1\), \((-i)(-i)=-1\). If we visualize \(z\) as a point in the complex plane, how should we visualize \(-z\)? What does this mean in terms of arguments? Give similar interpretations for \(z^2\) and \(\sqrt{z}\). Find four different complex numbers \(z\) such that \(z^4=1\). Compute the following. Use the magnitude and argument, not the real and imaginary parts. \[\begin{equation*} |1+i| , \arg(1+i) , \left|\frac{1}{1+i}\right| , \arg\left(\frac{1}{1+i}\right) , \end{equation*}\] Based on the results above, compute the real and imaginary parts of \(1/(1+i)\). 10.5.6 Euler's formula Having expanded our horizons to include the complex numbers, it's natural to want to extend functions we knew and loved from the world of real numbers so that they can also operate on complex numbers. The only really natural way to do this in general is to use Taylor series. A particularly beautiful thing happens with the functions \(e^x\), \(\sin x\), and \(\cos x\): \[\begin{align*} e^x &= 1 + x + \frac{1}{2!}x^2 + \frac{1}{3!}x^3 + ... \\ \cos x &= 1 - \frac{1}{2!}x^2 + \frac{1}{4!}x^4 - ... \\ \sin x &= x - \frac{1}{3!}x^3 + \frac{1}{5!}x^5 - ... \end{align*}\] If \(x=i\phi\) is an imaginary number, we have \[\begin{equation*} e^{i\phi} = \cos \phi + i \sin \phi , \end{equation*}\] a result known as Euler's formula. The geometrical interpretation in the complex plane is shown in figure x.
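Euler's formula can be confirmed numerically, both by comparing \(e^{i\phi}\) against \(\cos\phi+i\sin\phi\) directly and by summing the exponential series at an imaginary argument. A sketch in Python:

```python
import cmath
import math

# Checking Euler's formula, e^{i phi} = cos(phi) + i sin(phi).
phi = 0.7                               # any angle will do
lhs = cmath.exp(1j * phi)
rhs = complex(math.cos(phi), math.sin(phi))
print(abs(lhs - rhs) < 1e-12)

# Partial sum of the exponential Taylor series at x = i*phi:
# 1 + x + x^2/2! + x^3/3! + ...
s = 0
term = 1
for n in range(1, 20):
    s += term
    term *= 1j * phi / n                # next term of the series
print(abs(s - lhs) < 1e-12)
```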
Although the result may seem like something out of a freak show at first, applying the definition of the exponential function makes it clear how natural it is: \[\begin{align*} e^x = \lim_{n\rightarrow \infty} \left(1+\frac{x}{n}\right)^n . \end{align*}\] When \(x=i\phi\) is imaginary, the quantity \((1+i\phi/n)\) represents a number lying just above 1 in the complex plane. For large \(n\), \((1+i\phi/n)\) becomes very close to the unit circle, and its argument is the small angle \(\phi/n\). Raising this number to the nth power multiplies its argument by \(n\), giving a number with an argument of \(\phi\). Euler's formula is used frequently in physics and engineering. Example 29: Trig functions in terms of complex exponentials \(\triangleright\) Write the sine and cosine functions in terms of exponentials. \(\triangleright\) Euler's formula for \(x=-i\phi\) gives \(\cos \phi - i \sin \phi\), since \(\cos(-\theta)=\cos\theta\), and \(\sin(-\theta)=-\sin\theta\). \[\begin{align*} \cos x &= \frac{e^{ix}+e^{-ix}}{2} \\ \sin x &= \frac{e^{ix}-e^{-ix}}{2i} \end{align*}\] Example 30: A hard integral made easy \(\triangleright\) Evaluate \[\begin{equation*} \int e^x \cos x dx \end{equation*}\] \(\triangleright\) This seemingly impossible integral becomes easy if we rewrite the cosine in terms of exponentials: \[\begin{align*} \int e^x & \cos x dx \\ &= \int e^x \left(\frac{e^{ix}+e^{-ix}}{2}\right) dx \\ &= \frac{1}{2} \int (e^{(1+i)x}+e^{(1-i)x})dx \\ &= \frac{1}{2} \left( \frac{e^{(1+i)x}}{1+i}+\frac{e^ {(1-i)x}}{1-i} \right)+ c \end{align*}\] Since this result is the integral of a real-valued function, we'd like it to be real, and in fact it is, since the first and second terms are complex conjugates of one another. If we wanted to, we could use Euler's theorem to convert it back to a manifestly real result.^5 10.5.7 Impedance So far we have been thinking in terms of the free oscillations of a circuit. 
This is like a mechanical oscillator that has been kicked but then left to oscillate on its own without any external force to keep the vibrations from dying out. Suppose an LRC circuit is driven with a sinusoidally varying voltage, such as will occur when a radio tuner is hooked up to a receiving antenna. We know that a current will flow in the circuit, and we know that there will be resonant behavior, but it is not necessarily simple to relate current to voltage in the most general case. Let's start instead with the special cases of LRC circuits consisting of only a resistance, only a capacitance, or only an inductance. We are interested only in the steady-state response. The purely resistive case is easy. Ohm's law gives \[\begin{equation*} I = \frac{V}{R} . \end{equation*}\] In the purely capacitive case, the relation \(V=q/C\) lets us calculate \[\begin{align*} I &= \frac{dq}{dt} \\ &= C \frac{dV}{dt} . \end{align*}\] This is partly analogous to Ohm's law. For example, if we double the amplitude of a sinusoidally varying AC voltage, the derivative \(dV/dt\) will also double, and the amplitude of the sinusoidally varying current will also double. However, it is not true that \(I=V/R\), because taking the derivative of a sinusoidal function shifts its phase by 90 degrees. If the voltage varies as, for example, \(V(t)=V_\text{o}\sin (\omega t)\), then the current will be \(I(t)=\omega C V_\text{o}\cos (\omega t)\). The amplitude of the current is \(\omega C V_\text{o}\), which is proportional to \(V_\text {o}\), but it's not true that \(I(t)=V(t)/R\) for some constant \(R\). A second problem that crops up is that our entire analysis of DC resistive circuits was built on the foundation of the loop rule and the junction rule, both of which are statements about sums. 
To apply the junction rule to an AC circuit, for example, we would say that the sum of the sine waves describing the currents coming into the junction is equal (at every moment in time) to the sum of the sine waves going out. Now sinusoidal functions have a remarkable property, which is that if you add two different sinusoidal functions having the same frequency, the result is also a sinusoid with that frequency. For example, \(\cos\omega t+\sin\omega t=\sqrt{2}\sin(\omega t+\pi/4)\), which can be proved using trig identities. The trig identities can get very cumbersome, however, and there is a much easier technique involving complex numbers. Figure aa shows a useful way to visualize what's going on. When a circuit is oscillating at a frequency \(\omega\), we use points in the plane to represent sinusoidal functions with various phases and amplitudes. Which of the following functions can be represented in this way? \(\cos(6t-4)\), \(\cos^2t\), \(\tan t\) (answer in the back of the PDF version of the book) The simplest examples of how to visualize this in polar coordinates are ones like \(\cos \omega t+\cos \omega t=2\cos \omega t\), where everything has the same phase, so all the points lie along a single line in the polar plot, and addition is just like adding numbers on the number line. The less trivial example \(\cos\omega t+\sin\omega t=\sqrt{2}\sin(\omega t+\pi/4)\) can be visualized as in figure ab. Figure ab suggests that all of this can be tied together nicely if we identify our plane with the plane of complex numbers. For example, the complex numbers 1 and \(i\) represent the functions \(\sin \omega t\) and \(\cos\omega t\). In figure z, for example, the voltage across the capacitor is a sine wave multiplied by a number that gives its amplitude, so we associate that function with a number \(\tilde{V}\) lying on the real axis.
Its magnitude, \(|\tilde{V}|\), gives the amplitude in units of volts, while its argument \(\arg \tilde{V}\), gives its phase angle, which is zero. The current is a multiple of a sine wave, so we identify it with a number \(\tilde{I}\) lying on the imaginary axis. We have \(\arg\tilde{I}=90°\), and \(|\tilde{I}|\) is the amplitude of the current, in units of amperes. But comparing with our result above, we have \(|\tilde{I}|=\omega C|\tilde{V}|\). Bringing together the phase and magnitude information, we have \(\tilde{I}=i\omega C\tilde{V}\). This looks very much like Ohm's law, so we write \[\begin{equation*} \tilde{I} = \frac{\tilde{V}}{Z_C} , \end{equation*}\] where the quantity \[\begin{equation*} Z_C = -\frac{i}{\omega C} , \text{[impedance of a capacitor]} \end{equation*}\] having units of ohms, is called the impedance of the capacitor at this frequency. It makes sense that the impedance becomes infinite at zero frequency. Zero frequency means that it would take an infinite time before the voltage would change by any amount. In other words, this is like a situation where the capacitor has been connected across the terminals of a battery and been allowed to settle down to a state where there is constant charge on both terminals. Since the electric fields between the plates are constant, there is no energy being added to or taken out of the field. A capacitor that can't exchange energy with any other circuit component is nothing more than a broken (open) circuit. Note that we have two types of complex numbers: those that represent sinusoidal functions of time, and those that represent impedances. The ones that represent sinusoidal functions have tildes on top, which look like little sine waves. Why can't a capacitor have its impedance printed on it along with its capacitance? 
(answer in the back of the PDF version of the book) Similar math (but this time with an integral instead of a derivative) gives \[\begin{equation*} Z_L = i\omega L \text{[impedance of an inductor]} \end{equation*}\] for an inductor. It makes sense that the inductor has lower impedance at lower frequencies, since at zero frequency there is no change in the magnetic field over time. No energy is added to or released from the magnetic field, so there are no induction effects, and the inductor acts just like a piece of wire with negligible resistance. The term “choke” for an inductor refers to its ability to “choke out” high frequencies. The phase relationships shown in figures z and ac can be remembered using my own mnemonic, “eVIL,” which shows that the voltage (V) leads the current (I) in an inductive circuit, while the opposite is true in a capacitive one. A more traditional mnemonic is “ELI the ICE man,” which uses the notation E for emf, a concept closely related to voltage (see p. 686). Summarizing, the impedances of resistors, capacitors, and inductors are \[\begin{align*} Z_R &= R\\ Z_C &= -\frac{i}{\omega C}\\ Z_L &= i\omega L . \end{align*}\] Example 31: Low-pass and high-pass filters An LRC circuit only responds to a certain range (band) of frequencies centered around its resonant frequency. As a filter, this is known as a bandpass filter. If you turn down both the bass and the treble on your stereo, you have created a bandpass filter. To create a high-pass or low-pass filter, we only need to insert a capacitor or inductor, respectively, in series. For instance, a very basic surge protector for a computer could be constructed by inserting an inductor in series with the computer. The desired 60 Hz power from the wall is relatively low in frequency, while the surges that can damage your computer show much more rapid time variation. 
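Putting numbers to this: the sketch below (Python, with invented component values) evaluates \(|Z_L|=\omega L\) and \(|Z_C|=1/\omega C\) at 60 Hz and at 1 MHz, showing why a series inductor passes the power frequency but blocks a fast surge, while a parallel capacitor does the reverse.

```python
import math

# Impedance magnitudes from Z_L = i*omega*L and Z_C = -i/(omega*C).
# The component values are invented for illustration.
L = 10e-3    # a 10 mH series inductor
C = 1e-6     # a 1 uF capacitor in parallel with the load

def omega(f):
    return 2 * math.pi * f

for f in [60.0, 1.0e6]:
    ZL = omega(f) * L             # |i omega L|
    ZC = 1 / (omega(f) * C)       # |-i/(omega C)|
    print(f, ZL, ZC)
# At 60 Hz:  |Z_L| is about 3.8 ohms (nearly transparent),
#            |Z_C| is about 2.7 kilohms (siphons off little current).
# At 1 MHz:  |Z_L| is about 63 kilohms (blocks the surge),
#            |Z_C| is about 0.16 ohms (an easy path to divert it).
```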
Even if the surges are not sinusoidal signals, we can think of a rapid “spike” qualitatively as if it was very high in frequency --- like a high-frequency sine wave, it changes very rapidly. Inductors tend to be big, heavy, expensive circuit elements, so a simple surge protector would be more likely to consist of a capacitor in parallel with the computer. (In fact one would normally just connect one side of the power circuit to ground via a capacitor.) The capacitor has a very high impedance at the low frequency of the desired 60 Hz signal, so it siphons off very little of the current. But for a high-frequency signal, the capacitor's impedance is very small, and it acts like a zero-impedance, easy path into which the current is diverted. The main things to be careful about with impedance are that (1) the concept only applies to a circuit that is being driven sinusoidally, (2) the impedance of an inductor or capacitor is frequency-dependent. Discussion Question Figure z on page 607 shows the voltage and current for a capacitor. Sketch the \(q\)-\(t\) graph, and use it to give a physical explanation of the phase relationship between the voltage and current. For example, why is the current zero when the voltage is at a maximum or minimum? Figure ac on page 609 shows the voltage and current for an inductor. The power is considered to be positive when energy is being put into the inductor's magnetic field. Sketch the graph of the power, and then the graph of \(U\), the energy stored in the magnetic field, and use it to give a physical explanation of the \(P\)-\(t\) graph. In particular, discuss why the frequency is doubled on the \(P\)-\(t\) graph. Relate the features of the graph in figure ac on page 609 to the story told in cartoons in figure m/2-3 on page 598. 10.5.8 Power How much power is delivered when an oscillating voltage is applied to an impedance?
The equation \(P=IV\) is generally true, since voltage is defined as energy per unit charge, and current is defined as charge per unit time: multiplying them gives energy per unit time. In a DC circuit, all three quantities were constant, but in an oscillating (AC) circuit, all three display time variation. A resistor First let's examine the case of a resistor. For instance, you're probably reading this book from a piece of paper illuminated by a glowing lightbulb, which is driven by an oscillating voltage with amplitude \(V_\text{o}\). In the special case of a resistor, we know that \(I\) and \(V\) are in phase. For example, if \(V\) varies as \(V_\text{o}\cos \omega t\), then \(I\) will be a cosine as well, \(I_\text{o}\cos \omega t\). The power is then \(I_\text{o}V_\text{o}\cos^2\omega t\), which is always positive,^6 and varies between 0 and \(I_\text{o}V_\text{o}\). Even if the time variation was \(\cos\omega t\) or \(\sin(\omega t+\pi/4)\), we would still have a maximum power of \(I_\text{o}V_\text{o}\), because both the voltage and the current would reach their maxima at the same time. In a lightbulb, the moment of maximum power is when the circuit is most rapidly heating the filament. At the instant when \(P=0\), a quarter of a cycle later, no current is flowing, and no electrical energy is being turned into heat. Throughout the whole cycle, the filament is getting rid of energy by radiating light.^7 Since the circuit oscillates at a frequency^8 of \(60\ \text{Hz}\), the temperature doesn't really have time to cycle up or down very much over the 1/60 s period of the oscillation, and we don't notice any significant variation in the brightness of the light, even with a short-exposure photograph. Thus, what we really want to know is the average power, “average” meaning the average over one full cycle. Since we're covering a whole cycle with our average, it doesn't matter what phase we assume. Let's use a cosine. 
The total amount of energy transferred over one cycle is \[\begin{align*} E &= \int dE \\ &= \int_0^T \frac{dE}{dt} dt , \end{align*}\] where \(T=2\pi/\omega\) is the period. Substituting \(P=dE/dt\), \[\begin{align*} E &= \int_0^T P dt \\ &= \int_0^T I_\text{o}V_\text{o} \cos^2\omega t dt \\ &= I_\text{o}V_\text{o} \int_0^T \cos^2\omega t dt \\ &= I_\text{o}V_\text{o} \int_0^T \frac{1}{2} \left(1+\cos 2\omega t\right) dt . \end{align*}\] The reason for using the trig identity \(\cos^2 x= (1+\cos 2 x)/2\) in the last step is that it lets us get the answer without doing a hard integral. Over the course of one full cycle, the quantity \(\cos 2\omega t\) goes positive, negative, positive, and negative again, so the integral of it is zero. We then have \[\begin{align*} E &= I_\text{o}V_\text{o} \int_0^T \frac{1}{2} dt \\ &= \frac{I_\text{o}V_\text{o}T}{2} . \end{align*}\] The average power is \[\begin{align*} P_{av} &= \frac{\text{energy transferred in one full cycle}}{\text{time for one full cycle}} \\ &= \frac{I_\text{o}V_\text{o}T/2}{T} \\ &= \frac{I_\text{o}V_\text{o}}{2} , \end{align*}\] i.e., the average is half the maximum. The power varies from \(0\) to \(I_\text{o}V_\text{o}\), and it spends equal amounts of time above and below half the maximum, so it isn't surprising that the average power is half-way in between zero and the maximum. Summarizing, we have \[\begin{align*} P_{av} &= \frac{I_\text{o}V_\text{o}}{2} \quad \text{[average power in a resistor]} \end{align*}\] for a resistor. Rms quantities Suppose one day the electric company decided to start supplying your electricity as DC rather than AC. How would the DC voltage have to be related to the amplitude \(V_\text{o}\) of the AC voltage previously used if they wanted your lightbulbs to have the same brightness as before? The resistance of the bulb, \(R\), is a fixed value, so we need to relate the power to the voltage and the resistance, eliminating the current. In the DC case, this gives \(P=IV=(V/R)V=V^2/R\). (For DC, \(P\) and \(P_{av}\) are the same.)
In the AC case, \(P_{av} = I_\text{o}V_\text{o}/2=V_\text{o}^2/2R \). Since there is no factor of 1/2 in the DC case, the same power could be provided with a DC voltage that was smaller by a factor of \(1/\sqrt{2}\). Although you will hear people say that household voltage in the U.S. is 110 V, its amplitude is actually \((110\ \text{V})\times\sqrt{2}\approx160\ \text{V}\). The reason for referring to \(V_\text{o}/\sqrt{2}\) as “the” voltage is that people who are naive about AC circuits can plug \(V_\text{o}/\sqrt{2}\) into a familiar DC equation like \(P=V^2/R\) and get the right average answer. The quantity \(V_\text{o}/\sqrt{2}\) is called the “RMS” voltage, which stands for “root mean square.” The idea is that if you square the function \(V(t)\), take its average (mean) over one cycle, and then take the square root of that average, you get \(V_\text{o}/\sqrt{2}\). Many digital meters provide RMS readouts for measuring AC voltages and currents.

A capacitor

For a capacitor, the calculation starts out the same, but ends up with a twist. If the voltage varies as a cosine, \(V_\text{o}\cos \omega t\), then the relation \(I=CdV/dt\) tells us that the current will be a constant multiplied by minus the sine, \(I=-\omega CV_\text{o}\sin \omega t\). The integral we did in the case of a resistor now becomes \[\begin{equation*} E = \int_0^T -I_\text{o}V_\text{o} \sin \omega t \cos \omega t dt ,\\ \end{equation*}\] and based on figure ae, you can easily convince yourself that over the course of one full cycle, the power spends two quarter-cycles being negative and two being positive. In other words, the average power is zero! Why is this? It makes sense if you think in terms of energy. A resistor converts electrical energy to heat, never the other way around. A capacitor, however, merely stores electrical energy in an electric field and then gives it back.
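The averages used above, \(\cos^2\) averaging to 1/2 (which gives both \(P_{av}=I_\text{o}V_\text{o}/2\) and the RMS value \(V_\text{o}/\sqrt{2}\)) and \(\sin\cos\) averaging to zero for the capacitor, can be checked numerically. This is only an illustrative sketch; the particular values of \(V_\text{o}\), \(I_\text{o}\), and \(\omega\) below are arbitrary, not taken from the text.

```python
import math

# Numerical check of the cycle averages, using the midpoint rule
# over one full period T = 2*pi/omega.  V_o, I_o, omega are
# arbitrary placeholder values.
V_o, I_o, omega = 160.0, 1.0, 2 * math.pi * 60
T = 2 * math.pi / omega
N = 100_000
dt = T / N
ts = [(k + 0.5) * dt for k in range(N)]  # midpoints of each step

# Resistor: average of I_o V_o cos^2(wt) should be I_o V_o / 2.
p_res = sum(I_o * V_o * math.cos(omega * t) ** 2 for t in ts) * dt / T

# RMS of V_o cos(wt) should be V_o / sqrt(2).
v_rms = math.sqrt(sum((V_o * math.cos(omega * t)) ** 2 for t in ts) * dt / T)

# Capacitor: average of -I_o V_o sin(wt) cos(wt) should be zero.
p_cap = sum(-I_o * V_o * math.sin(omega * t) * math.cos(omega * t)
            for t in ts) * dt / T

print(p_res, I_o * V_o / 2)        # nearly equal
print(v_rms, V_o / math.sqrt(2))   # nearly equal
print(p_cap)                       # nearly zero
```

Because the integrands are smooth and periodic, the midpoint sum over exactly one period converges very quickly, so even a modest number of steps reproduces the analytic averages to high precision.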
For a capacitor, \[\begin{align*} P_{av} &= 0 \text{[average power in a capacitor]}\\ \end{align*}\] Notice that although the average power is zero, the power at any given instant is not typically zero, as shown in figure ae. The capacitor does transfer energy: it's just that after borrowing some energy, it always pays it back in the next quarter-cycle.

An inductor

The analysis for an inductor is similar to that for a capacitor: the power averaged over one cycle is zero. Again, we're merely storing energy temporarily in a field (this time a magnetic field) and getting it back later.

10.5.9 Impedance matching

Figure af shows a commonly encountered situation: we wish to maximize the average power, \(P_{av}\), delivered to the load for a fixed value of \(V_\text{o}\), the amplitude of the oscillating driving voltage. We assume that the impedance of the transmission line, \(Z_T\), is a fixed value, over which we have no control, but we are able to design the load, \(Z_\text{o}\), with any impedance we like. For now, we'll also assume that both impedances are resistive. For example, \(Z_T\) could be the resistance of a long extension cord, and \(Z_\text{o}\) could be a lamp at the end of it. The result generalizes immediately, however, to any kind of impedance. For example, the load could be a stereo speaker's magnet coil, which displays both inductance and resistance. (For a purely inductive or capacitive load, \(P_{av}\) equals zero, so the problem isn't very interesting!) Since we're assuming both the load and the transmission line are resistive, their impedances add in series, and the amplitude of the current is given by \[\begin{align*} I_\text{o} &= \frac{V_\text{o}}{Z_\text{o}+Z_T} ,\\ \text{so the average power delivered to the load is} P_{av} &= I_\text{o}^2Z_\text{o}/2 \\ &= \frac{V_\text{o}^2Z_\text{o}}{\left(Z_\text{o}+Z_T\right)^2}/2 .
\text{The maximum of this expression occurs where the derivative is zero,} 0 &= \frac{1}{2}\frac{d}{dZ_\text{o}}\left[\frac{V_\text{o}^2Z_\text{o}}{\left(Z_\text{o}+Z_T\right)^2}\right] \\ 0 &= \frac{1}{2}\frac{d}{dZ_\text{o}}\left[\frac{Z_\text{o}}{\left(Z_\text{o}+Z_T\right)^2}\right] \\ 0 &= \left(Z_\text{o}+Z_T\right)^{-2}-2Z_\text{o}\left(Z_\text{o}+Z_T\right)^{-3} \\ 0 &= \left(Z_\text{o}+Z_T\right)-2Z_\text{o} \\ Z_\text{o} &= Z_T \end{align*}\] In other words, to maximize the power delivered to the load, we should make the load's impedance the same as the transmission line's. This result may seem surprising at first, but it makes sense if you think about it. If the load's impedance is too high, it's like opening a switch and breaking the circuit; no power is delivered. On the other hand, it doesn't pay to make the load's impedance too small. Making it smaller does give more current, but no matter how small we make it, the current will still be limited by the transmission line's impedance. As the load's impedance approaches zero, the current approaches this fixed value, and the power delivered, \(I_\text{o}^2Z_\text{o}/2\), decreases in proportion to \(Z_\text{o}\). Maximizing the power transmission by matching \(Z_T\) to \(Z_\text{o}\) is called impedance matching. For example, an 8-ohm home stereo speaker will be correctly matched to a home stereo amplifier with an internal impedance of 8 ohms, and 4-ohm car speakers will be correctly matched to a car stereo with a 4-ohm internal impedance. You might think impedance matching would be unimportant because even if, for example, we used a car stereo to drive 8-ohm speakers, we could compensate for the mismatch simply by turning the volume knob higher. This is indeed one way to compensate for any impedance mismatch, but there is always a price to pay. When the impedances are matched, half the power is dissipated in the transmission line and half in the load.
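Both claims, that \(P_{av}\) peaks at \(Z_\text{o}=Z_T\) and that the matched case splits the power evenly between line and load, can be verified with a quick numerical sweep. The values of \(Z_T\) and \(V_\text{o}\) below are made up for illustration:

```python
# Numerical check of the impedance-matching result, using a
# made-up transmission-line impedance Z_T and driving amplitude V_o.
Z_T, V_o = 8.0, 10.0

def p_av(Z_o):
    """Average power delivered to a resistive load Z_o."""
    I_o = V_o / (Z_o + Z_T)
    return I_o ** 2 * Z_o / 2

# Sweep load impedances from 0.1 to 40 ohms and find the maximum.
loads = [k / 10 for k in range(1, 401)]
best = max(loads, key=p_av)
print(best)   # -> 8.0, the matched value Z_o = Z_T

# At the matched point, line and load dissipate equal power.
I_o = V_o / (Z_T + Z_T)
print(I_o ** 2 * Z_T / 2, p_av(Z_T))   # the two are equal
```

The sweep also shows the asymmetry discussed in the text: \(p_{av}\) falls off slowly for loads above \(Z_T\) and linearly toward zero for loads below it.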
By connecting an 8-ohm home stereo amplifier to 4-ohm car speakers, however, you would be setting up a situation in which two watts were being dissipated as heat inside the amp for every watt delivered to the speaker. In other words, you would be wasting energy, and perhaps burning out your amp when you turned up the volume to compensate for the mismatch.

10.5.10 Impedances in series and parallel

How do impedances combine in series and parallel? The beauty of treating them as complex numbers is that they simply combine according to the same rules you've already learned for resistances.

Example 32: Series impedance

\(\triangleright\) A capacitor and an inductor in series with each other are driven by a sinusoidally oscillating voltage. At what frequency is the current maximized? \(\triangleright\) Impedances in series, like resistances in series, add. The capacitor and inductor act as if they were a single circuit element with an impedance \[\begin{align*} Z &= Z_{L}+ Z_{C}\\ &= i\omega L-\frac{i}{\omega C} .\\ \text{The current is then} \tilde{I} = \frac{\tilde{V}}{i\omega L- i/\omega C} . \end{align*}\] We don't care about the phase of the current, only its amplitude, which is represented by the absolute value of the complex number \(\tilde{I}\), and this can be maximized by making \(|i\omega L- i/\omega C|\) as small as possible. But there is some frequency at which this quantity is zero --- \[\begin{gather*} 0 = i\omega L-\frac{i}{\omega C}\\ \frac{1}{\omega C} = \omega L\\ \omega = \frac{1}{\sqrt{LC}} \end{gather*}\] At this frequency, the current is infinite! What is going on physically? This is an LRC circuit with \(R=0\). It has a resonance at this frequency, and because there is no damping, the response at resonance is infinite. Of course, any real LRC circuit will have some damping, however small (cf. figure j on page 181).

Example 33: Resonance with damping

\(\triangleright\) What is the amplitude of the current in a series LRC circuit?
\(\triangleright\) Generalizing from example 32, we add a third, real impedance: \[\begin{align*} |\tilde{I}| &= \frac{|\tilde{V}|}{|Z|} \\ &= \frac{|\tilde{V}|}{|R+ i\omega L- i/\omega C|} \\ &= \frac{|\tilde{V}|}{\sqrt{R^2+(\omega L-1/\omega C)^2}} \end{align*}\] This result would have taken pages of algebra without the complex number technique!

Example 34: A second-order stereo crossover filter

A stereo crossover filter ensures that the high frequencies go to the tweeter and the lows to the woofer. This can be accomplished simply by putting a single capacitor in series with the tweeter and a single inductor in series with the woofer. However, such a filter does not cut off very sharply. Suppose we model the speakers as resistors. (They really have inductance as well, since they have coils in them that serve as electromagnets to move the diaphragm that makes the sound.) Then the power they draw is \(I^2 R\). Putting an inductor in series with the woofer, ag/1, gives a total impedance that at high frequencies is dominated by the inductor's, so the current is proportional to \(\omega^{-1}\), and the power drawn by the woofer is proportional to \(\omega^{-2}\). A second-order filter, like ag/2, is one that cuts off more sharply: at high frequencies, the power goes like \(\omega^{-4}\). To analyze this circuit, we first calculate the total impedance: \[\begin{equation*} Z = Z_{L}+( Z_{C}^{-1}+ Z_R^{-1})^{-1} \end{equation*}\] All the current passes through the inductor, so if the driving voltage being supplied on the left is \(\tilde{V}_d\), we have \[\begin{equation*} \tilde{V}_d = \tilde{I}_{L} Z , \end{equation*}\] and we also have \[\begin{equation*} \tilde{V}_{L} = \tilde{I}_{L} Z_L . \end{equation*}\] The loop rule, applied to the outer perimeter of the circuit, gives \[\begin{equation*} \tilde{V}_{d} = \tilde{V}_{L}+\tilde{V}_R .
\end{equation*}\] Straightforward algebra now results in \[\begin{equation*} \tilde{V}_{R} = \frac{\tilde{V}_{d}}{1+ Z_L/ Z_{C}+ Z_{L}/ Z_R} . \end{equation*}\] At high frequencies, the \(Z_{L}/ Z_C\) term, which varies as \(\omega^2\), dominates, so \(\tilde{V}_R\) and \(\tilde{I}_R\) are proportional to \(\omega^{-2}\), and the power is proportional to \(\omega^{-4}\).

10.6 Fields by Gauss' Law

10.6.1 Gauss' law

The flea of subsection 10.3.2 had a long and illustrious scientific career, and we're now going to pick up her story where we left off. This flea, whose name is Gauss,^9 has derived the equation \(E_\perp=2\pi k\sigma\) for the electric field very close to a charged surface with charge density \(\sigma\). Next we will describe two improvements she is going to make to that equation. First, she realizes that the equation is not as useful as it could be, because it only gives the part of the field due to the surface. If other charges are nearby, then their fields will add to this field as vectors, and the equation will not be true unless we carefully subtract out the field from the other charges. This is especially problematic for her because the planet on which she lives, known for obscure reasons as planet Flatcat, is itself electrically charged, and so are all the fleas --- the only thing that keeps them from floating off into outer space is that they are negatively charged, while Flatcat carries a positive charge, so they are electrically attracted to it. When Gauss found the original version of her equation, she wanted to demonstrate it to her skeptical colleagues in the laboratory, using electric field meters and charged pieces of metal foil. Even if she set up the measurements by remote control, so that the charge on her own body would be too far away to have any effect, they would be disrupted by the ambient field of planet Flatcat.
Finally, however, she realized that she could improve her equation by rewriting it as follows: \[\begin{equation*} E_{outward,\ on\ side\ 1}+E_{outward,\ on\ side\ 2} = 4\pi k\sigma . \end{equation*}\] The tricky thing here is that “outward” means a different thing, depending on which side of the foil we're on. On the left side, “outward” means to the left, while on the right side, “outward” means to the right. A positively charged piece of metal foil has a field that points leftward on the left side, and rightward on its right side, so the two contributions of \(2\pi k\sigma\) are both positive, and we get \(4\pi k\sigma\). On the other hand, suppose there is a field created by other charges, not by the charged foil, that happens to point to the right. On the right side, this externally created field is in the same direction as the foil's field, but on the left side, it reduces the strength of the leftward field created by the foil. The increase in one term of the equation balances the decrease in the other term. This new version of the equation is thus exactly correct regardless of what externally generated fields are present! Her next innovation starts by multiplying the equation on both sides by the area, \(A\), of one side of the foil: \[\begin{align*} \left(E_{outward,\ on\ side\ 1}+E_{outward,\ on\ side\ 2}\right)A &= 4\pi k\sigma A \\ \text{or} E_{outward,\ on\ side\ 1}A+E_{outward,\ on\ side\ 2}A &= 4\pi kq , \\ \end{align*}\] where \(q\) is the charge of the foil. The reason for this modification is that she can now make the whole thing more attractive by defining a new vector, the area vector \(\mathbf{A}\). As shown in figure a, she defines an area vector for side 1 which has magnitude \(A\) and points outward from side 1, and an area vector for side 2 which has the same magnitude and points outward from that side, which is in the opposite direction.
The dot product of two vectors, \(\mathbf{u}\cdot\mathbf{v}\), can be interpreted as \(u_{parallel\ to\ v}|\mathbf{v}|\), and she can therefore rewrite her equation as \[\begin{equation*} \mathbf{E}_1\cdot\mathbf{A}_1+\mathbf{E}_2\cdot\mathbf{A}_2 = 4\pi k q . \end{equation*}\] The quantity on the left side of this equation is called the flux through the surface, written \(\Phi\). Gauss now writes a grant proposal to her favorite funding agency, the BSGS (Blood-Suckers' Geological Survey), and it is quickly approved. Her audacious plan is to send out exploring teams to chart the electric fields of the whole planet of Flatcat, and thereby determine the total electric charge of the planet. The fleas' world is commonly assumed to be a flat disk, and its size is known to be finite, since the sun passes behind it at sunset and comes back around on the other side at dawn. The most daring part of the plan is that it requires surveying not just the known side of the planet but the uncharted Far Side as well. No flea has ever actually gone around the edge and returned to tell the tale, but Gauss assures them that they won't fall off --- their negatively charged bodies will be attracted to the disk no matter which side they are on. Of course it is possible that the electric charge of planet Flatcat is not perfectly uniform, but that isn't a problem. As discussed in subsection 10.3.2, as long as one is very close to the surface, the field only depends on the local charge density. In fact, a side-benefit of Gauss's program of exploration is that any such local irregularities will be mapped out. But what the newspapers find exciting is the idea that once all the teams get back from their voyages and tabulate their data, the total charge of the planet will have been determined for the first time. Each surveying team is assigned to visit a certain list of republics, duchies, city-states, and so on. They are to record each territory's electric field vector, as well as its area. 
Because the electric field may be nonuniform, the final equation for determining the planet's electric charge will have many terms, not just one for each side of the planet: \[\begin{equation*} \Phi = \sum \mathbf{E}_j\cdot\mathbf{A}_j = 4\pi k q_{total} \end{equation*}\] Gauss herself leads one of the expeditions, which heads due east, toward the distant Tail Kingdom, known only from fables and the occasional account from a caravan of traders. A strange thing happens, however. Gauss embarks from her college town in the wetlands of the Tongue Republic, travels straight east, passes right through the Tail Kingdom, and one day finds herself right back at home, all without ever seeing the edge of the world! What can have happened? All at once she realizes that the world isn't flat. Now what? The surveying teams all return, the data are tabulated, and the result for the total charge of Flatcat is \((1/4\pi k)\sum \mathbf{E}_j\cdot\mathbf{A}_j=37\ \text{nC}\) (units of nanocoulombs). But the equation was derived under the assumption that Flatcat was a disk. If Flatcat is really round, then the result may be completely wrong. Gauss and two of her grad students go to their favorite bar, and decide to keep on ordering Bloody Marys until they either solve their problems or forget them. One student suggests that perhaps Flatcat really is a disk, but the edges are rounded. Maybe the surveying teams really did flip over the edge at some point, but just didn't realize it. Under this assumption, the original equation will be approximately valid, and 37 nC really is the total charge of Flatcat. A second student, named Newton, suggests that they take seriously the possibility that Flatcat is a sphere. In this scenario, their planet's surface is really curved, but the surveying teams just didn't notice the curvature, since they were close to the surface, and the surface was so big compared to them. 
They divided up the surface into a patchwork, and each patch was fairly small compared to the whole planet, so each patch was nearly flat. Since the patch is nearly flat, it makes sense to define an area vector that is perpendicular to it. In general, this is how we define the direction of an area vector, as shown in figure d. This only works if the areas are small. For instance, there would be no way to define an area vector for an entire sphere, since “outward” is in more than one direction. If Flatcat is a sphere, then the inside of the sphere must be vast, and there is no way of knowing exactly how the charge is arranged below the surface. However, the survey teams all found that the electric field was approximately perpendicular to the surface everywhere, and that its strength didn't change very much from one location to another. The simplest explanation is that the charge is all concentrated in one small lump at the center of the sphere. They have no way of knowing if this is really the case, but it's a hypothesis that allows them to see how much their 37 nC result would change if they assumed a different geometry. Making this assumption, Newton performs the following simple computation on a napkin. The field at the surface is related to the charge at the center by \[\begin{equation*} |\mathbf{E}| = \frac{kq_{total}}{r^2} , \end{equation*}\] where \(r\) is the radius of Flatcat. The flux is then \[\begin{equation*} \Phi = \sum \mathbf{E}_j\cdot\mathbf{A}_j , \end{equation*}\] and since the \(\mathbf{E}_j\) and \(\mathbf{A}_j\) vectors are parallel, the dot product equals \(|\mathbf{E}_j||\mathbf{A}_j|\), so \[\begin{equation*} \Phi = \sum \frac{kq_{total}}{r^2}|\mathbf{A}_j| . \end{equation*}\] But the field strength is always the same, so we can take it outside the sum, giving \[\begin{align*} \Phi &= \frac{kq_{total}}{r^2} \sum |\mathbf{A}_j| \\ &= \frac{kq_{total}}{r^2} A_{total} \\ &= \frac{kq_{total}}{r^2} 4\pi r^2 \\ &= 4\pi kq_{total} .
\end{align*}\] Not only have all the factors of \(r\) canceled out, but the result is the same as for a disk! Everyone is pleasantly surprised by this apparent mathematical coincidence, but is it anything more than that? For instance, what if the charge wasn't concentrated at the center, but instead was evenly distributed throughout Flatcat's interior volume? Newton, however, is familiar with a result called the shell theorem (page 102), which states that the field of a uniformly charged sphere is the same as if all the charge had been concentrated at its center.^10 We now have three different assumptions about the shape of Flatcat and the arrangement of the charges inside it, and all three lead to exactly the same mathematical result, \(\Phi = 4\pi kq_{total}\). This is starting to look like more than a coincidence. In fact, there is a general mathematical theorem, called Gauss' theorem, which states the following: For any region of space, the flux through the surface equals \(4\pi kq_{in}\), where \(q_{in}\) is the total charge in that region. Don't memorize the factor of \(4\pi\) in front --- you can rederive it any time you need to, by considering a spherical surface centered on a point charge. Note that although the region and its surface had a definite physical existence in our story --- they are the planet Flatcat and the surface of planet Flatcat --- Gauss' law is true for any region and surface we choose, and in general, the Gaussian surface has no direct physical significance. It's simply a computational tool. Rather than proving Gauss' theorem and then presenting some examples and applications, it turns out to be easier to show some examples that demonstrate its salient properties. Once we understand these properties, the proof becomes quite simple. Suppose we have a negative point charge, whose field points inward, and we pick a Gaussian surface which is a sphere centered on that charge. How does Gauss' theorem apply here?
(answer in the back of the PDF version of the book)

10.6.2 Additivity of flux

Figure e shows two different ways in which flux is additive. Figure e/1, additivity by charge, shows that we can break down a charge distribution into two or more parts, and the flux equals the sum of the fluxes due to the individual charges. This follows directly from the fact that the flux is defined in terms of a dot product, \(\mathbf{E}\cdot\mathbf{A}\), and the dot product has the additive property \((\mathbf{a}+\mathbf{b})\cdot\mathbf{c}=\mathbf{a}\cdot\mathbf{c}+\mathbf{b}\cdot\mathbf{c}\). To understand additivity of flux by region, e/2, we have to consider the parts of the two surfaces that were eliminated when they were joined together, like knocking out a wall to make two small apartments into one big one. Although the two regions shared this wall before it was removed, the area vectors were opposite: the direction that is outward from one region is inward with respect to the other. Thus if the field on the wall contributes positive flux to one region, it contributes an equal amount of negative flux to the other region, and we can therefore eliminate the wall to join the two regions, without changing the total flux.

10.6.3 Zero flux from outside charges

A third important property of Gauss' theorem is that it only refers to the charge inside the region we choose to discuss. In other words, it asserts that any charge outside the region contributes zero to the flux. This makes at least some sense, because a charge outside the region will have field vectors pointing into the surface on one side, and out of the surface on the other. Certainly there should be at least partial cancellation between the negative (inward) flux on one side and the positive (outward) flux on the other. But why should this cancellation be exact?
To see the reason for this perfect cancellation, we can imagine space as being built out of tiny cubes, and we can think of any charge distribution as being composed of point charges. The additivity-by-charge property tells us that any charge distribution can be handled by considering its point charges individually, and the additivity-by-region property tells us that if we have a single point charge outside a big region, we can break the region down into tiny cubes. If we can prove that the flux through such a tiny cube really does cancel exactly, then the same must be true for any region, which we could build out of such cubes, and any charge distribution, which we can build out of point charges. For simplicity, we will carry out this calculation only in the special case shown in figure f, where the charge lies along one axis of the cube. Let the sides of the cube have length \(2b\), so that the area of each side is \((2b)^2=4b^2\). The cube extends a distance \(b\) above, below, in front of, and behind the horizontal \(x\) axis. There is a distance \(d-b\) from the charge to the left side, and \(d+b\) to the right side. There will be one negative flux, through the left side, and five positive ones. Of these positive ones, the one through the right side is very nearly the same in magnitude as the negative flux through the left side, but just a little less because the field is weaker on the right, due to the greater distance from the charge. The fluxes through the other four sides are very small, since the field is nearly perpendicular to their area vectors, and the dot product \(\mathbf{E}_j\cdot\mathbf{A}_j\) is zero if the two vectors are perpendicular. 
In the limit where \(b\) is very small, we can approximate the flux by evaluating the field at the center of each of the cube's six sides, giving \[\begin{align*} \Phi &= \Phi_{left}+4\Phi_{side}+\Phi_{right} \\ &= |\mathbf{E}_{left}||\mathbf{A}_{left}|\cos 180° +4|\mathbf{E}_{side}||\mathbf{A}_{side}|\cos \theta_{side} +|\mathbf{E}_{right}||\mathbf{A}_{right}|\cos 0° ,\\ \text{and a little trig gives $\cos\theta_{side}\approx b/d$, so} \Phi &= -|\mathbf{E}_{left}||\mathbf{A}_{left}| +4|\mathbf{E}_{side}||\mathbf{A}_{side}|\frac{b}{d} +|\mathbf{E}_{right}||\mathbf{A}_{right}|\\ &= \left(4b^2\right)\left(-|\mathbf{E}_{left}| +4|\mathbf{E}_{side}|\frac{b}{d} +|\mathbf{E}_{right}|\right)\\ &= \left(4b^2\right)\left(-\frac{kq}{(d-b)^2} +4\frac{kq}{d^2}\frac{b}{d} +\frac{kq}{(d+b)^2}\right)\\ &= \left(\frac{4kqb^2}{d^2}\right)\left(-\frac{1}{(1-b/d)^2} +\frac{4b}{d} +\frac{1}{(1+b/d)^2}\right) .\\ \text{Using the approximation $(1+\epsilon)^{-2}\approx 1-2\epsilon$ for small $\epsilon$, this becomes} \Phi &= \left(\frac{4kqb^2}{d^2}\right)\left(-1-\frac{2b}{d} +\frac{4b}{d} +1-\frac{2b}{d}\right) \\ &= 0 . \end{align*}\] Thus in the limit of a very small cube, \(b\ll d\), we have proved that the flux due to this exterior charge is zero. The proof can be extended to the case where the charge is not along any axis of the cube,^11 and based on additivity we then have a proof that the flux due to an outside charge is always zero.

Example 35: No charge on the interior of a conductor

I asserted on p. 523 that for a perfect conductor in equilibrium, excess charge is found only at the surface, never in the interior. This can be proved using Gauss's theorem. Suppose that a charge \(q\) existed at some point in the interior, and it was in stable equilibrium. For concreteness, let's say \(q\) is positive.
If its equilibrium is to be stable, then we need an electric field everywhere around it that points inward like a pincushion, so that if the charge were to be perturbed slightly, the field would bring it back to its equilibrium position. Since Newton's third law forbids objects from making forces on themselves, this field would have to be the field contributed by all the other charges, not by \(q\) itself. But this is impossible, because this kind of inward-pointing pincushion pattern would have a nonzero (negative) flux through a small Gaussian surface surrounding the equilibrium point, and Gauss's theorem says we can't have flux from outside charges.

Discussion Questions

A One question that might naturally occur to you about Gauss's law is what happens for charge that is exactly on the surface --- should it be counted toward the enclosed charge, or not? If charges can be perfect, infinitesimal points, then this could be a physically meaningful question. Suppose we approach this question by way of a limit: start with charge \(q\) spread out over a sphere of finite size, and then make the size of the sphere approach zero. The figure shows a uniformly charged sphere that's exactly half-way in and half-way out of the cubical Gaussian surface. What is the flux through the cube, compared to what it would be if the charge was entirely enclosed? (There are at least three ways to find this flux: by direct integration, by Gauss's law, or by the additivity of flux by region.)

B The dipole is completely enclosed in the cube. What does Gauss's law say about the flux through the cube? If you imagine the dipole's field pattern, can you verify that this makes sense?

C The wire passes in through one side of the cube and out through the other. If the current through the wire is increasing, then the wire will act like an inductor, and there will be a voltage difference between its ends.
(The inductance will be relatively small, since the wire isn't coiled up, and the \(\Delta V\) will therefore also be fairly small, but still not zero.) The \(\Delta V\) implies the existence of electric fields, and yet Gauss's law says the flux must be zero, since there is no charge inside the cube. Why isn't Gauss's law violated?

D The charge has been loitering near the edge of the cube, but is then suddenly hit with a mallet, causing it to fly off toward the left side of the cube. We haven't yet discussed in detail how disturbances in the electric and magnetic fields ripple outward through space, but it turns out that they do so at the speed of light. (In fact, that's what light is: ripples in the electric and magnetic fields.) Because the charge is closer to the left side of the cube, the change in the electric field occurs there before the information reaches the right side. This would seem certain to lead to a violation of Gauss's law. How can the ideas explored in discussion question C show the resolution to this paradox?

10.6.4 Proof of Gauss' theorem

With the computational machinery we've developed, it is now simple to prove Gauss' theorem. Based on additivity by charge, it suffices to prove the law for a point charge. We have already proved Gauss' law for a point charge in the case where the point charge is outside the region. If we can prove it for the inside case, then we're all done. If the charge is inside, we reason as follows. First, we forget about the actual Gaussian surface of interest, and instead construct a spherical one, centered on the charge. For the case of a sphere, we've already seen the proof written on a napkin by the flea named Newton (page 619). Now wherever the actual surface sticks out beyond the sphere, we glue appropriately shaped pieces onto the sphere. In the example shown in figure h, we have to add two Mickey Mouse ears.
Since these added pieces do not contain the point charge, the flux through them is zero, and additivity of flux by region therefore tells us that the total flux is not changed when we make this alteration. Likewise, we need to chisel out any regions where the sphere sticks out beyond the actual surface. Again, there is no change in flux, since the region being altered doesn't contain the point charge. This proves that the flux through the Gaussian surface of interest is the same as the flux through the sphere, and since we've already proved that that flux equals \(4\pi kq_{in}\), our proof of Gauss' theorem is complete.

Discussion Questions

A critical part of the proof of Gauss' theorem was the proof that a tiny cube has zero flux through it due to an external charge. Discuss qualitatively why this proof would fail if Coulomb's law was a \(1/r\) or \(1/r^3\) law.

10.6.5 Gauss' law as a fundamental law of physics

Note that the proof of Gauss' theorem depended on the computation on the napkin written by the flea named Newton (page 619). The crucial point in this computation was that the electric field of a point charge falls off like \(1/r^2\), and since the area of a sphere is proportional to \(r^2\), the result is independent of \(r\). The \(1/r^2\) variation of the field also came into play on page 622 in the proof that the flux due to an outside charge is zero. In other words, if we discover some other force of nature which is proportional to \(1/r^3\) or \(r\), then Gauss' theorem will not apply to that force. For example, Gauss' theorem is not true for nuclear forces, which fall off exponentially with distance. However, this is the only assumption we had to make about the nature of the field.
Since gravity, for instance, also has fields that fall off as \(1/r^2\), Gauss' theorem is equally valid for gravity --- we just have to replace charge with mass, change the Coulomb constant \(k\) to the gravitational constant \(G\), and insert a minus sign because the gravitational fields around a (positive) mass point inward. Gauss' theorem can only be proved if we assume a \(1/r^2\) field, and the converse is also true: any field that satisfies Gauss' theorem must be a \(1/r^2\) field. Thus although we previously thought of Coulomb's law as the fundamental law of nature describing electric forces, it is equally valid to think of Gauss' theorem as the basic law of nature for electricity. From this point of view, Gauss' theorem is not a mathematical fact but an experimentally testable statement about nature, so we'll refer to it as Gauss' law, just as we speak of Coulomb's law or Newton's law of gravity. If Gauss' law is equivalent to Coulomb's law, why not just use Coulomb's law? First, there are some cases where calculating a field is easy with Gauss' law, and hard with Coulomb's law. More importantly, Gauss' law and Coulomb's law are only mathematically equivalent under the assumption that all our charges are standing still, and all our fields are constant over time, i.e., in the study of electrostatics, as opposed to electrodynamics. As we broaden our scope to study generators, inductors, transformers, and radio antennas, we will encounter cases where Gauss' law is valid, but Coulomb's law is not.

10.6.6 Applications

Often we encounter situations where we have a static charge distribution, and we wish to determine the field. Although superposition is a generic strategy for solving this type of problem, if the charge distribution is symmetric in some way, then Gauss' law is often a far easier way to carry out the computation.
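Before turning to the applications, Gauss' law itself can be illustrated numerically: discretize the surface of a cube into small cells and sum \(\mathbf{E}_j\cdot\mathbf{A}_j\) for a point charge placed inside, and then outside, the cube. This is only a rough sketch, with made-up charge positions and units in which \(k=q=1\):

```python
import math

# Numerical illustration of Gauss' law: sum E_j . A_j over the
# surface of the cube [-1,1]^3 for a point charge.  Units k = q = 1.
k, q = 1.0, 1.0

def flux_through_cube(cp, half=1.0, n=100):
    """Sum E.dA over the six faces, each split into n x n cells."""
    h = 2 * half / n        # cell edge length
    dA = h * h              # cell area
    total = 0.0
    for axis in range(3):               # face perpendicular to this axis
        for sign in (1.0, -1.0):        # + face and - face
            for i in range(n):
                for j in range(n):
                    # Center of this surface cell.
                    p = [0.0, 0.0, 0.0]
                    p[axis] = sign * half
                    p[(axis + 1) % 3] = -half + (i + 0.5) * h
                    p[(axis + 2) % 3] = -half + (j + 0.5) * h
                    # Field of the point charge at cp, outward component.
                    rx, ry, rz = p[0] - cp[0], p[1] - cp[1], p[2] - cp[2]
                    r = math.sqrt(rx * rx + ry * ry + rz * rz)
                    E_out = k * q * (p[axis] - cp[axis]) * sign / r ** 3
                    total += E_out * dA
    return total

phi_in = flux_through_cube((0.3, 0.2, -0.1))   # charge inside the cube
phi_out = flux_through_cube((3.0, 0.0, 0.0))   # charge outside the cube
print(phi_in)    # close to 4*pi*k*q, about 12.57
print(phi_out)   # close to 0
```

Moving the interior charge around changes the flux through each individual face, but the total stays pinned at \(4\pi kq\), while any exterior position gives a total near zero, which is exactly the content of the theorem.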
Field of a long line of charge

Consider the field of an infinitely long line of charge, holding a uniform charge per unit length \(\lambda\). Computing this field by brute-force superposition was fairly laborious (examples 10 on page 574 and 13 on page 580). With Gauss' law it becomes a very simple calculation. The problem has two types of symmetry. The line of charge, and therefore the resulting field pattern, look the same if we rotate them about the line. The second symmetry occurs because the line is infinite: if we slide the line along its own length, nothing changes. This sliding symmetry, known as a translation symmetry, tells us that the field must point directly away from the line at any given point. Based on these symmetries, we choose the Gaussian surface shown in figure i. If we want to know the field at a distance \(R\) from the line, then we choose this surface to have a radius \(R\), as shown in the figure. The length, \(L\), of the surface is irrelevant. The field is parallel to the surface on the end caps, and therefore perpendicular to the end caps' area vectors, so there is no contribution to the flux. On the long, thin strips that make up the rest of the surface, the field is perpendicular to the surface, and therefore parallel to the area vector of each strip, so that the dot product occurring in the definition of the flux is \(\mathbf{E}_j\cdot\mathbf{A}_j=|\mathbf{E}_j||\mathbf{A}_j|\cos 0°=|\mathbf{E}_j||\mathbf{A}_j|\).
Gauss' law gives \[\begin{align*} 4\pi k q_{in} &= \sum \mathbf{E}_j\cdot\mathbf{A}_j \\ 4\pi k \lambda L &= \sum |\mathbf{E}_j||\mathbf{A}_j| . \end{align*}\] The magnitude of the field is the same on every strip, so we can take it outside the sum: \[\begin{equation*} 4\pi k \lambda L = |\mathbf{E}| \sum |\mathbf{A}_j| . \end{equation*}\] In the limit where the strips are infinitely narrow, the surface becomes a cylinder, with (area) = (circumference)(length) = \(2\pi RL\), so \[\begin{align*} 4\pi k \lambda L &= |\mathbf{E}| \times 2\pi RL \\ |\mathbf{E}| &= \frac{2k\lambda}{R} . \end{align*}\]

Field near a surface charge

As claimed earlier, the result \(E=2\pi k\sigma\) for the field near a charged surface is a special case of Gauss' law. We choose a Gaussian surface of the shape shown in figure j, known as a Gaussian pillbox. The exact shape of the flat end caps is unimportant. The symmetry of the charge distribution tells us that the field points directly away from the surface, and is equally strong on both sides of the surface. This means that the end caps contribute equally to the flux, and the curved sides have zero flux through them. If the area of each end cap is \(A\), then \[\begin{equation*} 4\pi k q_{in} = \mathbf{E}_1\cdot\mathbf{A}_1+\mathbf{E}_2\cdot\mathbf{A}_2 , \end{equation*}\] where the subscripts 1 and 2 refer to the two end caps. We have \(\mathbf{A}_2=-\mathbf{A}_1\), so \[\begin{align*} 4\pi k q_{in} &= \mathbf{E}_1\cdot\mathbf{A}_1-\mathbf{E}_2\cdot\mathbf{A}_1 \\ 4\pi k q_{in} &= \left(\mathbf{E}_1-\mathbf{E}_2\right)\cdot\mathbf{A}_1 . \end{align*}\] The enclosed charge is \(q_{in}=\sigma A\), and by symmetry the magnitudes of the two fields are equal, so \[\begin{align*} 2|\mathbf{E}|A &= 4 \pi k \sigma A \\ |\mathbf{E}| &= 2\pi k\sigma . \end{align*}\] The symmetry between the two sides could be broken by the existence of other charges nearby, whose fields would add onto the field of the surface itself.
Even then, Gauss' law still guarantees \[\begin{equation*} 4\pi k q_{in} = \left(\mathbf{E}_1-\mathbf{E}_2\right)\cdot\mathbf{A}_1 , \end{equation*}\] or, with \(q_{in}=\sigma A\), \[\begin{equation*} |\mathbf{E}_{\perp,1}-\mathbf{E}_{\perp,2}| = 4\pi k \sigma , \end{equation*}\] where the subscript \(\perp\) indicates the component of the field perpendicular to the surface (i.e., parallel to the area vectors). In other words, the electric field changes discontinuously when we pass through a charged surface; the discontinuity occurs in the component of the field perpendicular to the surface, and the amount of discontinuous change is \(4\pi k \sigma\). This is a completely general statement that is true near any charged surface, regardless of the existence of other charges nearby.

10.7 Gauss' Law In Differential Form

Gauss' law is a bit spooky. It relates the field on the Gaussian surface to the charges inside the surface. What if the charges have been moving around, and the field at the surface right now is the one that was created by the charges in their previous locations? Gauss' law --- unlike Coulomb's law --- still works in cases like these, but it's far from obvious how the flux and the charges can still stay in agreement if the charges have been moving around. For this reason, it would be more physically attractive to restate Gauss' law in a different form, so that it related the behavior of the field at one point to the charges that were actually present at that point. This is essentially what we were doing in the fable of the flea named Gauss: the fleas' plan for surveying their planet was essentially one of dividing up the surface of their planet (which they believed was flat) into a patchwork, and then constructing a small Gaussian pillbox around each small patch. The equation \(E_{\perp}=2\pi k\sigma\) then related a particular property of the local electric field to the local charge density.
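The \(4\pi k\sigma\) discontinuity can be seen concretely in a case where the field is known in closed form. The following Python sketch is my own, not part of the text; it uses the standard on-axis field of a uniformly charged disk of radius \(R\), and the unit choices \(k=\sigma=R=1\) are arbitrary. Comparing the field just above and just below the center of the disk recovers the jump.

```python
import math

# Hypothetical check (not from the text): the perpendicular field component
# jumps by 4*pi*k*sigma across a charged surface.  We use the standard
# on-axis field of a uniformly charged disk of radius R,
#     E_z(z) = sign(z) * 2*pi*k*sigma * (1 - |z| / sqrt(z**2 + R**2)),
# which points away from the disk on both sides.
k, sigma, R = 1.0, 1.0, 1.0

def E_z(z):
    """On-axis field of a uniformly charged disk; discontinuous at z = 0."""
    if z == 0:
        raise ValueError("field is discontinuous at the surface itself")
    magnitude = 2 * math.pi * k * sigma * (1 - abs(z) / math.hypot(z, R))
    return math.copysign(magnitude, z)

eps = 1e-6
jump = E_z(eps) - E_z(-eps)
print(jump, 4 * math.pi * k * sigma)   # the two agree: the jump is 4*pi*k*sigma
```

Right at the surface each side sees \(\pm2\pi k\sigma\) plus whatever smooth field the rest of the disk's geometry contributes, so the difference between the two sides is exactly \(4\pi k\sigma\), as the general argument predicts.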
In general, charge distributions need not be confined to a flat surface --- life is three-dimensional --- but the general approach of defining very small Gaussian surfaces is still a good one. Our strategy is to divide up space into tiny cubes, like the one on page 621. Each such cube constitutes a Gaussian surface, which may contain some charge. Again we approximate the field using its six values at the center of each of the six sides. Let the cube extend from \(x\) to \(x+dx\), from \(y\) to \(y+dy\), and from \(z\) to \(z+dz\). The sides at \(x\) and \(x+dx\) have area vectors \(-dydz\hat{\mathbf{x}}\) and \(dydz\hat{\mathbf{x}}\), respectively. The flux through the side at \(x\) is \(-E_x(x)dydz\), and the flux through the opposite side, at \(x+dx\), is \(E_x(x+dx)dydz\). The sum of these is \((E_x(x+dx)-E_x(x))dydz\). If the field was uniform, the flux through these two opposite sides would be zero; it is only nonzero if the field's \(x\) component changes as a function of \(x\). The difference \(E_x(x+dx)-E_x(x)\) can be rewritten as \(dE_x=(dE_x/dx)\,dx\), so the contribution to the flux from these two sides of the cube ends up being \[\begin{equation*} \frac{dE_x}{dx}dxdydz . \end{equation*}\] Doing the same for the other sides, we end up with a total flux \[\begin{align*} d\Phi &= \left(\frac{dE_x}{dx}+\frac{dE_y}{dy}+\frac{dE_z}{dz}\right)dxdydz \\ &= \left(\frac{dE_x}{dx}+\frac{dE_y}{dy}+\frac{dE_z}{dz}\right)dv , \end{align*}\] where \(dv\) is the volume of the cube. In evaluating each of these three derivatives, we are going to treat the other two variables as constants; to emphasize this we use the partial derivative notation \(\partial\) introduced in chapter , \[\begin{equation*} d\Phi = \left(\frac{\partial E_x}{\partial x}+\frac{\partial E_y}{\partial y}+\frac{\partial E_z}{\partial z}\right)dv . \end{equation*}\] Using Gauss' law, \[\begin{equation*} 4\pi k q_{in} = \left(\frac{\partial E_x}{\partial x}+\frac{\partial E_y}{\partial y}+\frac{\partial E_z}{\partial z}\right)dv , \end{equation*}\] and we introduce the notation \(\rho\) (Greek letter rho) for the charge per unit volume, so that \(q_{in}=\rho\,dv\), giving \[\begin{equation*} 4\pi k \rho = \frac{\partial E_x}{\partial x}+\frac{\partial E_y}{\partial y}+\frac{\partial E_z}{\partial z} . \end{equation*}\] The quantity on the right is called the divergence of the electric field, written \(\divg \mathbf{E}\). Using this notation, we have \[\begin{equation*} \divg \mathbf{E} = 4\pi k \rho . \end{equation*}\] This equation has all the same physical implications as Gauss' law. After all, we proved Gauss' law by breaking down space into little cubes like this. We therefore refer to it as the differential form of Gauss' law, as opposed to \(\Phi=4\pi kq_{in}\), which is called the integral form. Figure b shows an intuitive way of visualizing the meaning of the divergence. The meter consists of some electrically charged balls connected by springs. If the divergence is positive, then the whole cluster will expand, and it will contract its volume if it is placed at a point where the field has \(\divg\mathbf{E}\lt0\). What if the field is constant? We know based on the definition of the divergence that we should have \(\divg\mathbf{E}=0\) in this case, and the meter does give the right result: all the balls will feel a force in the same direction, but they will neither expand nor contract.

Example 36: Divergence of a sine wave

\(\triangleright\) Figure shows an electric field that varies as a sine wave. This is in fact what you'd see in a light wave: light is a wave pattern made of electric and magnetic fields.
(The magnetic field would look similar, but would be in a plane perpendicular to the page.) What is the divergence of such a field, and what is the physical significance of the result? \(\triangleright\) Intuitively, we can see that no matter where we put the div-meter in this field, it will neither expand nor contract. For instance, if we put it at the center of the figure, it will start spinning, but that's it. Mathematically, let the \(x\) axis be to the right and let \(y\) be up. The field is of the form \[\begin{equation*} \mathbf{E} = (\sin Kx)\: \hat{\mathbf{y}} , \end{equation*}\] where the constant \(K\) is not to be confused with Coulomb's constant. Since the field has only a \(y\) component, the only term in the divergence we need to evaluate is \[\begin{equation*} \divg\mathbf{E} = \frac{\partial E_{y}}{\partial y} , \end{equation*}\] but this vanishes, because \(E_y\) depends only on \(x\), not \(y\): we treat \(y\) as a constant when evaluating the partial derivative \(\partial E_{y}/\partial y\), and the derivative of an expression containing only constants must be zero. Physically this is a very important result: it tells us that a light wave can exist without any charges along the way to “keep it going.” In other words, light can travel through a vacuum, a region with no particles in it. If this wasn't true, we'd be dead, because the sun's light wouldn't be able to get to us through millions of kilometers of empty space!

Example 37: Electric field of a point charge

The case of a point charge is tricky, because the field behaves badly right on top of the charge, blowing up and becoming discontinuous. At this point, we cannot use the component form of the divergence, since none of the derivatives are well defined.
However, a little visualization using the original definition of the divergence will quickly convince us that div \(E\) is infinite here, and that makes sense, because the density of charge has to be infinite at a point where there is a zero-size point of charge (finite charge in zero volume). At all other points, we have \[\begin{equation*} \mathbf{E} = \frac{kq}{r^2}\hat{\mathbf{r}} , \end{equation*}\] where \(\hat{\mathbf{r}}=\mathbf{r}/r=(x\hat{\mathbf{x}}+y\hat{\mathbf{y}}+z\hat{\mathbf{z}})/r\) is the unit vector pointing radially away from the charge. The field can therefore be written as \[\begin{align*} \mathbf{E} &= \frac{kq}{r^3}\mathbf{r} \\ &= \frac{kq(x\hat{\mathbf{x}}+y\hat{\mathbf{y}}+z\hat{\mathbf{z}})}{\left(x^2+y^2+z^2\right)^{3/2}} . \end{align*}\] The three terms in the divergence are all similar, e.g., \[\begin{align*} \frac{\partial E_{x}}{\partial x} &= kq\frac{\partial}{\partial x}\left[\frac{x}{\left(x^2+y^2+z^2\right)^{3/2}}\right] \\ &= kq\left[\frac{1}{\left(x^2+y^2+z^2\right)^{3/2}}-\frac{3}{2}\:\frac{2x^2}{\left(x^2+y^2+z^2\right)^{5/2}}\right] \\ &= kq\left(r^{-3}-3x^2 r^{-5}\right) . \end{align*}\] Straightforward algebra shows that adding in the other two terms results in zero, which makes sense, because there is no charge except at the origin. Gauss' law in differential form lends itself most easily to finding the charge density when we are given the field. What if we want to find the field given the charge density? As demonstrated in the following example, one technique that often works is to guess the general form of the field based on experience or physical intuition, and then try to use Gauss' law to find what specific version of that general form will be a solution.

Example 38: The field inside a uniform sphere of charge

\(\triangleright\) Find the field inside a uniform sphere of charge whose charge density is \(\rho\).
(This is very much like finding the gravitational field at some depth below the surface of the earth.) \(\triangleright\) By symmetry we know that the field must be purely radial (in and out). We guess that the solution might be of the form \[\begin{equation*} \mathbf{E} = br^p\hat{\mathbf{r}} , \end{equation*}\] where \(r\) is the distance from the center, and \(b\) and \(p\) are constants. A negative value of \(p\) would indicate a field that was strongest at the center, while a positive \(p\) would give zero field at the center and stronger fields farther out. Physically, we know by symmetry that the field is zero at the center, so we expect \(p\) to be positive. As in example 37, we rewrite \(\hat{\mathbf{r}}\) as \(\mathbf{r}/r\), and to simplify the writing we define \(n=p-1\), so \[\begin{equation*} \mathbf{E} = br^n\mathbf{r} . \end{equation*}\] Gauss' law in differential form is \[\begin{equation*} \divg\mathbf{E} = 4\pi k\rho , \end{equation*}\] so we want a field whose divergence is constant. For a field of the form we guessed, the divergence has terms in it like \[\begin{align*} \frac{\partial E_{x}}{\partial x} &= \frac{\partial}{\partial x}\left(br^{n} x\right) \\ &= b\left(nr^{n-1}\frac{\partial r}{\partial x} x+r^n\right) . \end{align*}\] The partial derivative \(\partial r/\partial x\) is easily calculated to be \(x/r\), so \[\begin{equation*} \frac{\partial E_{x}}{\partial x} = b\left(nr^{n-2} x^2+r^n\right) . \end{equation*}\] Adding in similar expressions for the other two terms in the divergence, and making use of \(x^2+y^2+z^2=r^2\), we have \[\begin{equation*} \divg\mathbf{E} = b(n+3)r^n . \end{equation*}\] This can indeed be constant, but only if \(n\) is 0 or \(-3\), i.e., \(p\) is 1 or \(-2\). The second solution gives a divergence which is constant and zero: this is the solution for the outside of the sphere!
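The divergence formula \(\divg\mathbf{E}=b(n+3)r^n\) is easy to double-check with a computer algebra system. Here is a minimal sketch of my own (not part of the text) using the sympy library:

```python
import sympy as sp

# Sketch (not from the text): verify symbolically that for the guessed field
# E = b * r**n * (x, y, z), the divergence is b * (n + 3) * r**n.
x, y, z, b, n = sp.symbols("x y z b n")
r = sp.sqrt(x**2 + y**2 + z**2)

# Components of E = b * r**n * r-vector
Ex, Ey, Ez = (b * r**n * c for c in (x, y, z))
div = sp.diff(Ex, x) + sp.diff(Ey, y) + sp.diff(Ez, z)

# The ratio div / (b * r**n) should simplify to n + 3.
print(sp.simplify(div / (b * r**n)))
```

This confirms the term-by-term computation above without having to track the powers of \(r\) by hand.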
The first solution, which has the field directly proportional to \(r\), must be the one that applies to the inside of the sphere, which is what we care about right now. Equating the coefficient in front to the one in Gauss' law, the field is \[\begin{equation*} \mathbf{E} = \frac{4\pi k\rho}{3} r\:\hat{\mathbf{r}} . \end{equation*}\] The field is zero at the center, and gets stronger and stronger as we approach the surface.

Discussion Questions

As suggested by the figure, discuss the results you would get by inserting the div-meter at various locations in the sine-wave field.

Homework Problems

1. The gap between the electrodes in an automobile engine's spark plug is 0.060 cm. To produce an electric spark in a gasoline-air mixture, an electric field of \(3.0\times10^6\) V/m must be achieved. (a) On starting a car, what minimum voltage must be supplied by the ignition circuit? Assume the field is uniform.(answer check available at lightandmatter.com) (b) The small size of the gap between the electrodes is inconvenient because it can get blocked easily, and special tools are needed to measure it. Why don't they design spark plugs with a wider gap? 2. (a) As suggested in example 9 on page 573, use approximations to show that the expression given for the electric field approaches \(kQ/d^2\) for large \(d\). (b) Do the same for the result of example 12 on page 577. 3. Astronomers believe that the mass distribution (mass per unit volume) of some galaxies may be approximated, in spherical coordinates, by \(\rho=ae^{-br}\), for \(0\le r\le\infty\), where \(\rho\) is the density. Find the total mass. 4. (a) At time \(t=0\), a positively charged particle is placed, at rest, in a vacuum, in which there is a uniform electric field of magnitude \(E\).
Write an equation giving the particle's speed, \(v\), in terms of \(t\), \(E\), and its mass and charge \(m\) and \(q\).(answer check available at lightandmatter.com) (b) If this is done with two different objects and they are observed to have the same motion, what can you conclude about their masses and charges? (For instance, when radioactivity was discovered, it was found that one form of it had the same motion as an electron in this type of experiment.) 5. Show that the alternative definition of the magnitude of the electric field, \(|E|=\tau/(D\sin\theta)\), has units that make sense. 6. Redo the calculation of example 5 on page 566 using a different origin for the coordinate system, and show that you get the same result. 7. The definition of the dipole moment, \(\mathbf{D}=\sum q_i \mathbf{r}_i\), involves the vector \(\mathbf{r}_i\) stretching from the origin of our coordinate system out to the charge \(q_i\). There are clearly cases where this causes the dipole moment to be dependent on the choice of coordinate system. For instance, if there is only one charge, then we could make the dipole moment equal zero if we chose the origin to be right on top of the charge, or nonzero if we put the origin somewhere else. (a) Make up a numerical example with two charges of equal magnitude and opposite sign. Compute the dipole moment using two different coordinate systems that are oriented the same way, but differ in the choice of origin. Comment on the result. (b) Generalize the result of part a to any pair of charges with equal magnitude and opposite sign. This is supposed to be a proof for any arrangement of the two charges, so don't assume any numbers. (c) Generalize further, to \(n\) charges. 9. Find an arrangement of charges that has zero total charge and zero dipole moment, but that will make nonvanishing electric fields. 10. As suggested in example 11 on page 575, show that you can get the same result for the on-axis field by differentiating the voltage. 11.
Three charges are arranged on a square as shown. All three charges are positive. What value of \(q_2/q_1\) will produce zero electric field at the center of the square?(answer check available at lightandmatter.com) 12. This is a one-dimensional problem, with everything confined to the \(x\) axis. Dipole A consists of a \(-1.000\) C charge at \(x=0.000\) m and a \(1.000\) C charge at \(x=1.000\) m. Dipole B has a \(-2.000\) C charge at \(x=0.000\) m and a \(2.000\) C charge at \(x=0.500\) m. (a) Compare the two dipole moments. (b) Calculate the field created by dipole A at \(x=10.000\) m, and compare with the field dipole B would make. Comment on the result.(answer check available at lightandmatter.com) 13. In our by-now-familiar neuron, the voltage difference between the inner and outer surfaces of the cell membrane is about \(V_{out}-V_{in}=-70\ \text{mV}\) in the resting state, and the thickness of the membrane is about 6.0 nm (i.e., only about a hundred atoms thick). What is the electric field inside the membrane?(answer check available at lightandmatter.com) 14. A proton is in a region in which the electric field is given by \(E=a+bx^3\). If the proton starts at rest at \(x_1=0\), find its speed, \(v\), when it reaches position \(x_2\). Give your answer in terms of \(a\), \(b\), \(x_2\), and \(e\) and \(m\), the charge and mass of the proton.(answer check available at lightandmatter.com) 15. (a) Given that the on-axis field of a dipole at large distances is proportional to \(D/r^3\), show that its voltage varies as \(D/r^2\). (Ignore positive and negative signs and numerical constants of proportionality.) (b) Write down an exact expression for the voltage of a two-charge dipole at an on-axis point, without assuming that the distance is large compared to the size of the dipole. Your expression will have to contain the actual charges and size of the dipole, not just its dipole moment.
Now use approximations to show that, at large distances, this is consistent with your answer to part a.\hwhint 16. A hydrogen atom is electrically neutral, so at large distances, we expect that it will create essentially zero electric field. This is not true, however, near the atom or inside it. Very close to the proton, for example, the field is very strong. To see this, think of the electron as a spherically symmetric cloud that surrounds the proton, getting thinner and thinner as we get farther away from the proton. (Quantum mechanics tells us that this is a more correct picture than trying to imagine the electron orbiting the proton.) Near the center of the atom, the electron cloud's field cancels out by symmetry, but the proton's field is strong, so the total field is very strong. The voltage in and around the hydrogen atom can be approximated using an expression of the form \(V=r^{-1}e^{-r}\). (The units come out wrong, because I've left out some constants.) Find the electric field corresponding to this voltage, and comment on its behavior at very large and very small \(r\). (solution in the pdf version of the book) 17. A carbon dioxide molecule is structured like O-C-O, with all three atoms along a line. The oxygen atoms grab a little bit of extra negative charge, leaving the carbon positive. The molecule's symmetry, however, means that it has no overall dipole moment, unlike a V-shaped water molecule, for instance. Whereas the voltage of a dipole of magnitude \(D\) is proportional to \(D/r^2\) (see problem 15), it turns out that the voltage of a carbon dioxide molecule at a distant point along the molecule's axis equals \(b/r^3\), where \(r\) is the distance from the molecule and \(b\) is a constant (cf. problem 9). What would be the electric field of a carbon dioxide molecule at a point on the molecule's axis, at a distance \(r\) from the molecule?(answer check available at lightandmatter.com) 18.
A hydrogen atom in a particular state has the charge density (charge per unit volume) of the electron cloud given by \(\rho=ae^{-br}z^2\), where \(r\) is the distance from the proton, and \(z\) is the coordinate measured along the \(z\) axis. Given that the total charge of the electron cloud must be \(-e\), find \(a\) in terms of the other variables. 19. A dipole has a midplane, i.e., the plane that cuts through the dipole's center, and is perpendicular to the dipole's axis. Consider a two-charge dipole made of point charges \(\pm q\) located at \(z=\pm\ell/2\). Use approximations to find the field at a distant point in the midplane, and show that its magnitude comes out to be \(kD/R^3\) (half what it would be at a point on the axis lying an equal distance from the dipole). 20. The figure shows a vacuum chamber surrounded by four metal electrodes shaped like hyperbolas. (Yes, physicists do sometimes ask their university machine shops for things machined in mathematical shapes like this. They have to be made on computer-controlled mills.) We assume that the electrodes extend far into and out of the page along the unseen \(z\) axis, so that by symmetry, the electric fields are the same for all \(z\). The problem is therefore effectively two-dimensional. Two of the electrodes are at voltage \(+V_\text{o}\), and the other two at \(-V_\text{o}\), as shown. The equations of the hyperbolic surfaces are \(|xy|=b^2\), where \(b\) is a constant. (We can interpret \(b\) as giving the locations \(x=\pm b\), \(y=\pm b\) of the four points on the surfaces that are closest to the central axis.) There is no obvious, pedestrian way to determine the field or voltage in the central vacuum region, but there's a trick that works: with a little mathematical insight, we see that the voltage \(V=V_\text{o}b^{-2}xy\) is consistent with all the given information. 
(Mathematicians could prove that this solution was unique, but a physicist knows it on physical grounds: if there were two different solutions, there would be no physical way for the system to decide which one to do!) (a) Use the techniques of subsection 10.2.2 to find the field in the vacuum region, and (b) sketch the field as a “sea of arrows.”(answer check available at lightandmatter.com) 21. (a) A certain region of three-dimensional space has a voltage that varies as \(V=br^2\), where \(r\) is the distance from the origin. Use the techniques of subsection 10.2.2 to find the field. (answer check available at lightandmatter.com) (b) Write down another voltage that gives exactly the same field. 22. (a) Example 10 on page 574 gives the field of a charged rod in its midplane. Starting from this result, take the limit as the length of the rod approaches infinity. Note that \(\lambda\) is not changing, so as \(L\) gets bigger, the total charge \(Q\) increases. \hwans{hwans:estrips} (b) In the text, I have shown (by several different methods) that the field of an infinite, uniformly charged plane is \(2\pi k\sigma\). Now you're going to rederive the same result by a different method. Suppose that it is the \(x-y\) plane that is charged, and we want to find the field at the point \((0,0,z)\). (Since the plane is infinite, there is no loss of generality in assuming \(x=0\) and \(y=0\).) Imagine that we slice the plane into an infinite number of straight strips parallel to the \(y\) axis. Each strip has infinitesimal width \(dx\), and extends from \(x\) to \(x+dx\). The contribution of any one of these strips to the field at our point has a magnitude which can be found from part a. By vector addition, prove the desired result for the field of the plane of charge. 23. Consider the electric field created by a uniformly charged cylindrical surface that extends to infinity in one direction.
(a) Show that the field at the center of the cylinder's mouth is \(2\pi k\sigma\), which happens to be the same as the field of an infinite flat sheet of charge! (b) This expression is independent of the radius of the cylinder. Explain why this should be so. For example, what would happen if you doubled the cylinder's radius? 24. In an electrical storm, the cloud and the ground act like a parallel-plate capacitor, which typically charges up due to frictional electricity in collisions of ice particles in the cold upper atmosphere. Lightning occurs when the magnitude of the electric field builds up to a critical value, \(E_c\), at which air is ionized. (a) Treat the cloud as a flat square with sides of length \(L\). If it is at a height \(h\) above the ground, find the amount of energy released in the lightning strike.(answer check available at lightandmatter.com) (b) Based on your answer from part a, which is more dangerous, a lightning strike from a high-altitude cloud or a low-altitude one? (c) Make an order-of-magnitude estimate of the energy released by a typical lightning bolt, assuming reasonable values for its size and altitude. \(E_c\) is about \(10^6\) V/m. 25. (a) Show that the energy in the electric field of a point charge is infinite! Does the integral diverge at small distances, at large distances, or both? \hwhint{hwhint:epointinfty} [4] (b) Now calculate the energy in the electric field of a uniformly charged sphere with radius \(b\). Based on the shell theorem, it can be shown that the field for \(r>b\) is the same as for a point charge, while the field for \(r\lt b\) is \(kqr/b^3\). (Example 38 shows this using a different technique.) (answer check available at lightandmatter.com) 26. The neuron in the figure has been drawn fairly short, but some neurons in your spinal cord have tails (axons) up to a meter long. The inner and outer surfaces of the membrane act as the “plates” of a capacitor. (The fact that it has been rolled up into a cylinder has very little effect.)
In order to function, the neuron must create a voltage difference \(V\) between the inner and outer surfaces of the membrane. Let the membrane's thickness, radius, and length be \(t\), \(r\), and \(L\). (a) Calculate the energy that must be stored in the electric field for the neuron to do its job. (In real life, the membrane is made out of a substance called a dielectric, whose electrical properties increase the amount of energy that must be stored. For the sake of this analysis, ignore this fact.) \hwhint{hwhint:neuronenergy}(answer check available at lightandmatter.com) (b) An organism's evolutionary fitness should be better if it needs less energy to operate its nervous system. Based on your answer to part a, what would you expect evolution to do to the dimensions \(t\) and \(r\)? What other constraints would keep these evolutionary trends from going too far? 27. The figure shows cross-sectional views of two cubical capacitors, and a cross-sectional view of the same two capacitors put together so that their interiors coincide. A capacitor with the plates close together has a nearly uniform electric field between the plates, and almost zero field outside; these capacitors don't have their plates very close together compared to the dimensions of the plates, but for the purposes of this problem, assume that they still have approximately the kind of idealized field pattern shown in the figure. Each capacitor has an interior volume of 1.00 \(\text{m}^3\), and is charged up to the point where its internal field is 1.00 V/m. (a) Calculate the energy stored in the electric field of each capacitor when they are separate. (answer check available at lightandmatter.com) (b) Calculate the magnitude of the interior field when the two capacitors are put together in the manner shown.
Ignore effects arising from the redistribution of each capacitor's charge under the influence of the other capacitor.(answer check available at lightandmatter.com) (c) Calculate the energy of the put-together configuration. Does assembling them like this release energy, consume energy, or neither?(answer check available at lightandmatter.com) 28. Find the capacitance of the surface of the earth, assuming there is an outer spherical “plate” at infinity. (In reality, this outer plate would just represent some distant part of the universe to which we carried away some of the earth's charge in order to charge up the earth.)(answer check available at lightandmatter.com) 29. (a) Show that the field found in example 10 on page 574 reduces to \(E=2k\lambda/R\) in the limit of \(L\rightarrow\infty\). (b) An infinite strip of width \(b\) has a surface charge density \(\sigma\). Find the field at a point at a distance \(z\) from the strip, lying in the plane perpendicularly bisecting the strip. (answer check available at lightandmatter.com) (c) Show that this expression has the correct behavior in the limit where \(z\) approaches zero, and also in the limit of \(z\gg b\). For the latter, you'll need the result of problem 22a, which is given on page 930. 30. A solid cylinder of radius \(b\) and length \(\ell\) is uniformly charged with a total charge \(Q\). Find the electric field at a point at the center of one of the flat ends. 31. Find the voltage at the edge of a uniformly charged disk. (Define \(V=0\) to be infinitely far from the disk.) (answer check available at lightandmatter.com)\hwhint{hwhint:vedgedisk} 32. Find the energy stored in a capacitor in terms of its capacitance and the voltage difference across it.(answer check available at lightandmatter.com) 33. (a) Find the capacitance of two identical capacitors in series. (b) Based on this, how would you expect the capacitance of a parallel-plate capacitor to depend on the distance between the plates? 34. 
(a) Use complex number techniques to rewrite the function \(f(t)=4\sin\omega t+3\cos\omega t\) in the form \(A\sin(\omega t+\delta)\).(answer check available at lightandmatter.com) (b) Verify the result using the trigonometric identity \(\sin(\alpha+\beta)=\sin\alpha\cos\beta+\sin\beta\cos\alpha\). 35. (a) Show that the equation \(V_L=LdI/dt\) has the right units. (b) Verify that \(RC\) has units of time. (c) Verify that \(L/R\) has units of time. 37. Calculate the quantity \(i^i\) (i.e., find its real and imaginary parts).(answer check available at lightandmatter.com) 38. The wires themselves in a circuit can have resistance, inductance, and capacitance. Would “stray” inductance and capacitance be most important for low-frequency or for high-frequency circuits? For simplicity, assume that the wires act like they're in series with an inductor or capacitor. 39. Starting from the relation \(V=LdI/dt\) for the voltage difference across an inductor, show that an inductor has an impedance equal to \(L\omega\). 40. A rectangular box is uniformly charged with a charge density \(\rho\). The box is extremely long and skinny, and its cross-section is a square with sides of length \(b\). The length is so great in comparison to \(b\) that we can consider it as being infinite. Find the electric field at a point lying on the box's surface, at the midpoint between the two edges. Your answer will involve an integral that is most easily done using computer software. 41. A hollow cylindrical pipe has length \(\ell\) and radius \(b\). Its ends are open, but on the curved surface it has a charge density \(\sigma\). A charge \(q\) with mass \(m\) is released at the center of the pipe, in unstable equilibrium. Because the equilibrium is unstable, the particle accelerates off in one direction or the other, along the axis of the pipe, and comes shooting out like a bullet from the barrel of a gun.
Find the velocity of the particle when it's infinitely far from the “gun.” Your answer will involve an integral that is difficult to do by hand; you may want to look it up in a table of integrals, do it online at integrals.com, or download and install the free Maxima symbolic math software from maxima.sourceforge.net.

42. If an FM radio tuner consisting of an LRC circuit contains a 1.0 \(\mu\text{H}\) inductor, what range of capacitances should the variable capacitor be able to provide?(answer check available at lightandmatter.com)

43. (a) Find the parallel impedance of a \(37\ \text{k}\Omega\) resistor and a 1.0 nF capacitor at \(f=1.0\times10^4\) Hz.(answer check available at lightandmatter.com) (b) A voltage with an amplitude of 1.0 mV drives this impedance at this frequency. What is the amplitude of the current drawn from the voltage source, what is the current's phase angle with respect to the voltage, and does it lead the voltage, or lag behind it?(answer check available at lightandmatter.com)

44. A series LRC circuit consists of a 1.000 \(\Omega\) resistor, a 1.000 F capacitor, and a 1.000 H inductor. (These are not particularly easy values to find on the shelf at Radio Shack!) (a) Plot its impedance as a point in the complex plane for each of the following frequencies: \(\omega\)=0.250, 0.500, 1.000, 2.000, and 4.000 Hz. (b) What is the resonant angular frequency, \(\omega_{res}\), and how does this relate to your plot?(answer check available at lightandmatter.com) (c) What is the resonant frequency \(f_{res}\) corresponding to your answer in part b?(answer check available at lightandmatter.com)

45. At a frequency \(\omega\), a certain series LR circuit has an impedance of \(1\ \Omega+(2\ \Omega)i\). Suppose that instead we want to achieve the same impedance using two circuit elements in parallel. What must the elements be?

46.
(a) Use Gauss' law to find the fields inside and outside an infinite cylindrical surface with radius \(b\) and uniform surface charge density \(\sigma\).(answer check available at lightandmatter.com) (b) Show that there is a discontinuity in the electric field equal to \(4\pi k \sigma\) between one side of the surface and the other, as there should be (see page 628). (c) Reexpress your result in terms of the charge per unit length, and compare with the field of a line of charge. (d) A coaxial cable has two conductors: a central conductor of radius \(a\), and an outer conductor of radius \(b\). These two conductors are separated by an insulator. Although such a cable is normally used for time-varying signals, assume throughout this problem that there is simply a DC voltage between the two conductors. The outer conductor is thin, as in part c. The inner conductor is solid, but, as is always the case with a conductor in electrostatics, the charge is concentrated on the surface. Thus, you can find all the fields in part b by superposing the fields due to each conductor, as found in part c. (Note that on a given length of the cable, the total charge of the inner and outer conductors is zero, so \(\lambda_1=-\lambda_2\), but \(\sigma_1\ne\sigma_2\), since the areas are unequal.) Find the capacitance per unit length of such a cable.(answer check available at lightandmatter.com)

47. In a certain region of space, the electric field is constant (i.e., the vector always has the same magnitude and direction). For simplicity, assume that the field points in the positive \(x\) direction. (a) Use Gauss's law to prove that there is no charge in this region of space. This is most easily done by considering a Gaussian surface consisting of a rectangular box, whose edges are parallel to the \(x\), \(y\), and \(z\) axes. (b) If there are no charges in this region of space, what could be making this electric field?

48.
(a) In a series LC circuit driven by a DC voltage (\(\omega=0\)), compare the energy stored in the inductor to the energy stored in the capacitor. (b) Carry out the same comparison for an LC circuit that is oscillating freely (without any driving voltage). (c) Now consider the general case of a series LC circuit driven by an oscillating voltage at an arbitrary frequency. Let \(\overline{U_L}\) be the average energy stored in the inductor, and similarly for \(\overline{U_C}\). Define a quantity \(u=\overline{U_C}/(\overline{U_L}+\overline{U_C})\), which can be interpreted as the capacitor's average share of the energy, while \(1-u\) is the inductor's average share. Find \(u\) in terms of \(L\), \(C\), and \(\omega\), and sketch a graph of \(u\) and \(1-u\) versus \(\omega\). What happens at resonance? Make sure your result is consistent with your answer to part a.(answer check available at lightandmatter.com)

49. Use Gauss' law to find the field inside an infinite cylinder with radius \(b\) and uniform charge density \(\rho\). (The external field has the same form as the one in problem 46.)(answer check available at lightandmatter.com)

50. (a) In a certain region of space, the electric field is given by \(\mathbf{E}=bx\hat{\mathbf{x}}\), where \(b\) is a constant. Find the amount of charge contained within a cubical volume extending from \(x=0\) to \(x=a\), from \(y=0\) to \(y=a\), and from \(z=0\) to \(z=a\). (b) Repeat for \(\mathbf{E}=bx\hat{\mathbf{z}}\). (c) Repeat for \(\mathbf{E}=13bz\hat{\mathbf{z}}-7cz\hat{\mathbf{y}}\). (d) Repeat for \(\mathbf{E}=bxz\hat{\mathbf{z}}\).

51. Light is a wave made of electric and magnetic fields, and the fields are perpendicular to the direction of the wave's motion, i.e., they're transverse. An example would be the electric field given by \(\mathbf{E}=b \hat{\mathbf{x}} \sin cz\), where \(b\) and \(c\) are constants. (There would also be an associated magnetic field.)
We observe that light can travel through a vacuum, so we expect that this wave pattern is consistent with the nonexistence of any charge in the space it's currently occupying. Use Gauss's law to prove that this is true.

52. This is an alternative approach to problem 49, using a different technique. Suppose that a long cylinder contains a uniform charge density \(\rho\) throughout its interior volume. (a) Use the methods of section 10.7 to find the electric field inside the cylinder. (answer check available at lightandmatter.com) (b) Extend your solution to the outside region, using the same technique. Once you find the general form of the solution, adjust it so that the inside and outside fields match up at the surface. (answer check available at lightandmatter.com)

53. The purpose of this homework problem is to prove that the divergence is invariant with respect to translations. That is, it doesn't matter where you choose to put the origin of your coordinate system. Suppose we have a field of the form \(\mathbf{E}=ax\hat{\mathbf{x}}+by\hat{\mathbf{y}}+cz\hat{\mathbf{z}}\). This is the most general field we need to consider in any small region as far as the divergence is concerned. (The dependence on \(x\), \(y\), and \(z\) is linear, but any smooth function looks linear close up. We also don't need to put in terms like \(x\hat{\mathbf{y}}\), because they don't contribute to the divergence.) Define a new set of coordinates \((u,v,w)\) related to \((x,y,z)\) by \[\begin{align*} x &= u + p \\ y &= v + q \\ z &= w + r , \end{align*}\] where \(p\), \(q\), and \(r\) are constants. Show that the field's divergence is the same in these new coordinates. Note that \(\hat{\mathbf{x}}\) and \(\hat{\mathbf{u}}\) are identical, and similarly for the other coordinates.

54. Using a technique similar to that of problem 53, show that the divergence is rotationally invariant, in the special case of rotations about the \(z\) axis.
In such a rotation, we rotate to a new \((u,v,z)\) coordinate system, whose axes are rotated by an angle \(\theta\) with respect to those of the \((x,y,z)\) system. The coordinates are related by \[\begin{align*} x &= u \cos \theta + v \sin \theta \\ y &= -u \sin \theta + v \cos \theta \end{align*}\] Find how the \(u\) and \(v\) components of the field \(\mathbf{E}\) depend on \(u\) and \(v\), and show that its divergence is the same in this new coordinate system.

55. An electric field is given in cylindrical coordinates \((R,\phi,z)\) by \(E_R=ce^{-u|z|}R^{-1}\cos^2\phi\), where the notation \(E_R\) indicates the component of the field pointing directly away from the axis, and the components in the other directions are zero. (This isn't a completely impossible expression for the field near a radio transmitting antenna.) (a) Find the total charge enclosed within the infinitely long cylinder extending from the axis out to \(R=b\). (b) Interpret the \(R\)-dependence of your answer to part a.

56. Use Euler's theorem to derive the addition theorems that express \(\sin(a+b)\) and \(\cos(a+b)\) in terms of the sines and cosines of \(a\) and \(b\). (solution in the pdf version of the book)

58. Factor the expression \(x^3-y^3\) into factors of the lowest possible order, using complex coefficients. (Hint: use the result of problem 57.) Then do the same using real coefficients. (solution in the pdf version of the book)

Exercise A: Field Vectors

Apparatus:
- 3 solenoids
- DC power supply
- cut-off plastic cup

At this point you've studied the gravitational field, \(\mathbf{g}\), and the electric field, \(\mathbf{E}\), but not the magnetic field, \(\mathbf{B}\). However, they all have some of the same mathematical behavior: they act like vectors. Furthermore, magnetic fields are the easiest to manipulate in the lab. Manipulating gravitational fields directly would require futuristic technology capable of moving planet-sized masses around!
Playing with electric fields is not as ridiculously difficult, but static electric charges tend to leak off through your body to ground, and static electricity effects are hard to measure numerically. Magnetic fields, on the other hand, are easy to make and control. Any moving charge, i.e., any current, makes a magnetic field. A practical device for making a strong magnetic field is simply a coil of wire, formally known as a solenoid. The field pattern surrounding the solenoid gets stronger or weaker in proportion to the amount of current passing through the wire.

1. With a single solenoid connected to the power supply and laid with its axis horizontal, use a magnetic compass to explore the field pattern inside and outside it. The compass shows you the field vector's direction, but not its magnitude, at any point you choose. Note that the field the compass experiences is a combination (vector sum) of the solenoid's field and the earth's field.

2. What happens when you bring the compass extremely far away from the solenoid? What does this tell you about the way the solenoid's field varies with distance? Thus although the compass doesn't tell you the field vector's magnitude numerically, you can get at least some general feel for how it depends on distance.

3. The figure below is a cross-section of the solenoid in the plane containing its axis. Make a sea-of-arrows sketch of the magnetic field in this plane. The length of each arrow should at least approximately reflect the strength of the magnetic field at that point. Does the field seem to have sources or sinks?

4. What do you think would happen to your sketch if you reversed the wires? Try it.

5. Now hook up the two solenoids in parallel. You are going to measure what happens when their two fields combine at a certain point in space.
As you've seen already, the solenoids' nearby fields are much stronger than the earth's field; so although we now theoretically have three fields involved (the earth's plus the two solenoids'), it will be safe to ignore the earth's field. The basic idea here is to place the solenoids with their axes at some angle to each other, and put the compass at the intersection of their axes, so that it is the same distance from each solenoid. Since the geometry doesn't favor either solenoid, the only factor that would make one solenoid influence the compass more than the other is current. You can use the cut-off plastic cup as a little platform to bring the compass up to the same level as the solenoids' axes.

a) What do you think will happen with the solenoids' axes at 90 degrees to each other, and equal currents? Try it. Now represent the vector addition of the two magnetic fields with a diagram. Check your diagram with your instructor to make sure you're on the right track.

b) Now try to make a similar diagram of what would happen if you switched the wires on one of the solenoids. After predicting what the compass will do, try it and see if you were right.

c) Now suppose you were to go back to the arrangement you had in part a, but you changed one of the currents to half its former value. Make a vector addition diagram, and use trig to predict the angle of the compass needle. Try it. To cut the current to one of the solenoids in half, an easy and accurate method is simply to put the third solenoid in series with it, and put that third solenoid so far away that its magnetic field doesn't have any significant effect on the compass.
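The trig prediction in part c can be sketched numerically. This is only a check of the geometry, under the assumptions that the field each solenoid makes at the compass is proportional to its current and that the two axes are at 90 degrees; the needle settles along the vector sum of the two fields:

```python
import math

def needle_angle_deg(b1, b2):
    """Direction of the vector sum of two perpendicular fields,
    measured from the direction of field 1, in degrees."""
    return math.degrees(math.atan2(b2, b1))

# Equal currents (part a): fields of equal strength at the compass.
print(needle_angle_deg(1.0, 1.0))   # 45.0

# One current halved (part c): field 2 drops to half of field 1.
print(needle_angle_deg(1.0, 0.5))   # about 26.6 degrees
```

Halving one current should therefore swing the needle from 45 degrees to roughly 26.6 degrees away from the stronger solenoid's field direction, which is what you can compare against the compass reading.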
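Problem 53's claim, that the divergence is unchanged by a translation of the origin, can also be spot-checked numerically before attempting the symbolic proof. A minimal sketch using a central-difference estimate of the divergence; the coefficients \(a\), \(b\), \(c\) and the offsets \(p\), \(q\), \(r\) below are arbitrary made-up values, not anything from the text:

```python
def divergence(F, x, y, z, h=1e-6):
    """Central-difference estimate of div F at the point (x, y, z)."""
    dFx = (F(x + h, y, z)[0] - F(x - h, y, z)[0]) / (2 * h)
    dFy = (F(x, y + h, z)[1] - F(x, y - h, z)[1]) / (2 * h)
    dFz = (F(x, y, z + h)[2] - F(x, y, z - h)[2]) / (2 * h)
    return dFx + dFy + dFz

a, b, c = 2.0, -1.0, 0.5   # field coefficients (arbitrary)
p, q, r = 3.0, -4.0, 7.0   # translation of the origin (arbitrary)

# E = a x xhat + b y yhat + c z zhat
E = lambda x, y, z: (a * x, b * y, c * z)

# The same field expressed in the translated (u, v, w) coordinates;
# the unit vectors are unchanged, so only the arguments shift.
E_translated = lambda u, v, w: E(u + p, v + q, w + r)

print(divergence(E, 1.0, 2.0, 3.0))             # approximately a + b + c = 1.5
print(divergence(E_translated, 1.0, 2.0, 3.0))  # also approximately 1.5
```

Both evaluations give \(a+b+c\) at every point, consistent with what the symbolic calculation in problem 53 should show.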