The Vector Is 3.50 Long And Is Directed Into This Page ... | Chegg.com
Vectors Problem
The vector is 3.50 long and is directed into this page. The second vector points from the lower right corner of the page to the upper left corner of the page.
Part A
In the right-handed coordinate system, where the +x-axis is directed to the right, the +y-axis toward the top of the page, and the +z-axis out of the page, find the x-component of the vector product.
Part B
Find the y-component of the vector product.
Part C
Find the z-component of the vector product.
Homework Help
Posted by TJR on Wednesday, August 15, 2012 at 9:58am.
A car whose speed is 90.0 km/h (25 m/s) rounds a curve 180 m in radius that is properly banked for a speed of 45 km/h (12.5 m/s). Find the minimum coefficient of friction between tires and road that will permit the car to make the turn. What will happen to the car in this case?
My teacher said we'll use 'tan' but i'm confused. :(
• Physics - Elena, Wednesday, August 15, 2012 at 12:35pm
The projections of the forces acting on the car moving at the velocity v=12.5 m/s are:
x: ma=N•sin α,
y: 0=N•cos α – mg.
The normal force has a horizontal component, pointing toward the center of the curve. Because the ramp is to be designed so that the force of static friction is zero, only this component causes
the centripetal acceleration:
m•v²/R=N•sin α= m•g•sin α/cos α =m•g•tan α,
tan α = v²/(g•R)
α = arctan(v²/(g•R)) = arctan(12.5²/(9.8•180)) = 5.06º
The equations of the motion of the car moving with velocity V=25 m/s are:
x: ma= N•sin α +F(fr) •cos α,
y: 0=N•cos α – mg- F(fr) •sin α.
Since F(fr) =μ•N, we obtain
ma= N•sin α + μ•N •cos α,
mg =N•cos α – μ•N •sin α.
ma/mg =N(sin α + μ •cos α)/N (cos α – μ •sin α)
a=g(sin α + μ •cos α)/ (cos α – μ •sin α)
V²/R =g(sin α + μ •cos α)/ (cos α – μ •sin α),
V²•cos α -R•g•sin α =μ• (V²• sin α +R•g• cos α),
μ=( V²•cos α -R•g•sin α)/ (V²• sin α +R•g• cos α)=
= 0.258
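A quick numerical check of Elena's result (a C++ sketch; the code and variable names are mine, not from the thread):

#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);
    const double g = 9.8;     // m/s^2
    const double R = 180.0;   // radius of the curve, m
    const double v = 12.5;    // design (banking) speed, m/s
    const double V = 25.0;    // actual speed, m/s

    double alpha = std::atan(v * v / (g * R));  // banking angle, radians
    double mu = (V * V * std::cos(alpha) - R * g * std::sin(alpha)) /
                (V * V * std::sin(alpha) + R * g * std::cos(alpha));

    std::printf("alpha = %.2f deg, mu = %.3f\n", alpha * 180.0 / pi, mu);
    // prints: alpha = 5.06 deg, mu = 0.258
    return 0;
}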
• Physics - bobpursley, Wednesday, August 15, 2012 at 12:43pm
Thanks, Elena, good work. My analysis was too quick.
Trigonometry Perpendicular line help!?
8. In a triangle ABC, AB = 4.8 cm, BC = 3.8 cm and angle ABC = 42 degrees.
P is the foot of the perpendicular from A to BC.
i. Calculate the area of triangle ABC
ii. Deduce the length of AP.
Okay, I don't understand what they mean by "P is the foot of the perpendicular from A to BC"
And how would i go about deducing the answer? Help
Re: Trigonometry Perpendicular line help!?
did you make a sketch ... ?
P is the foot of the perpendicular from vertex A to side BC; AP is the altitude from A to BC.
also ...
... and angle ABC = 42 degrees
what angle is this?
Re: Trigonometry Perpendicular line help!?
Okay thanks.
So I worked out the area to be 61.02 using 1/2*a*b*sin C.
now for part II i'm not sure what to do.. I know the angle is 90. can u give me a hint or guide me a bit? :S
Re: Trigonometry Perpendicular line help!?
Your area is 10 times greater than it should be. To find the altitude, simply use: $\text{Area}=\frac{1}{2}\cdot\bar{BC}\cdot\bar{AP}$
Last edited by MarkFL; October 22nd 2012 at 06:14 PM.
Re: Trigonometry Perpendicular line help!?
But i don't know what AP is!
Re: Trigonometry Perpendicular line help!?
Okay, I used Area = 1/2*base*height
and then I rearranged the formula to get the height as 3.7
Re: Trigonometry Perpendicular line help!?
You have:
$\bar{AP}=\bar{AB}\sin(42^{\circ})$ (which you would get from my previous post)
$\bar{AP}\approx3.2\text{ cm}$
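For anyone checking the arithmetic, a small C++ sketch of both parts (the code is mine, not from the thread):

#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);
    double AB = 4.8, BC = 3.8, B = 42.0 * pi / 180.0;  // angle ABC in radians

    double area = 0.5 * AB * BC * std::sin(B);  // (1/2)ab sin(included angle)
    double AP = AB * std::sin(B);               // altitude from A onto BC
    double check = 0.5 * BC * AP;               // same area via (1/2)*base*height

    std::printf("area = %.2f cm^2, AP = %.2f cm, check = %.2f\n", area, AP, check);
    // prints: area = 6.10 cm^2, AP = 3.21 cm, check = 6.10
    return 0;
}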
The Riemann Hypothesis is currently the most famous unsolved problem in mathematics. Like the Goldbach Conjecture (all positive even integers greater than two can be expressed as the sum of two
primes), it seems true, but is very hard to prove. I did some playing around with the Riemann Hypothesis, and I'm convinced it is true. My observations follow.
The Zeta Function
Euler showed that z(2) = π²/6, and solved all the even integers up to z(26). See the Riemann Zeta Function entry in the CRC Concise Encyclopedia of Mathematics for more information on this. It is possible for the exponent s to be a complex number (a + b I). A root of a function is a value x such that f(x) = 0.
The Riemann Hypothesis: all nontrivial roots of the Zeta function are of the form (1/2 + b I).
Mathematica can plot the Zeta function for complex values, so I plotted the absolute value of z(1/2 + b I) and z(1/3 + b I).
|z(1/2 + b I)| for b = 0 to 85. Note how often the function dips to zero.
|z(1/3 + b I)| for b = 0 to 85. Note how the function never dips to zero.
The first few zeroes of |z(1/2 + b I)| are at b = 14.1344725, 21.022040, 25.010858, 30.424876, 32.935062, and 37.586178. Next, I tried some 3D plots, looking dead on at zero. The plot of the function
looked like this:
Plot3D[Abs[Zeta[y+ x I]],{x,5,200},{y,.4,.6},PlotRange ->{0,.1}, PlotPoints ->200, ViewPoint->{200,.5,0}]
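The dips can be double-checked without Mathematica. Below is a rough C++ sketch (my own code, not from the original page; it uses the slowly converging alternating series for zeta, so treat the digits as approximate):

#include <cmath>
#include <complex>
#include <cstdio>

// Approximate zeta(s) for Re(s) > 0 via the Dirichlet eta series:
// zeta(s) = eta(s) / (1 - 2^(1-s)),  eta(s) = sum_{n>=1} (-1)^(n-1) n^(-s).
// The last two partial sums are averaged to damp the alternating tail.
std::complex<double> zeta_approx(std::complex<double> s, long N) {
    std::complex<double> sum = 0.0;
    double sign = 1.0;
    for (long n = 1; n <= N; ++n, sign = -sign)
        sum += sign * std::exp(-s * std::log((double)n));
    sum += 0.5 * sign * std::exp(-s * std::log((double)(N + 1)));
    std::complex<double> one(1.0, 0.0);
    return sum / (one - std::exp((one - s) * std::log(2.0)));
}

int main() {
    double zeros[] = {14.1344725, 21.022040, 25.010858};
    for (double b : zeros)                        // points on the critical line
        std::printf("|zeta(1/2 + %gi)| ~ %.6f\n", b,
                    std::abs(zeta_approx(std::complex<double>(0.5, b), 200000)));
    // off the critical line the modulus stays clearly away from zero
    std::printf("|zeta(1/3 + %gi)| ~ %.6f\n", 14.1344725,
                std::abs(zeta_approx(std::complex<double>(1.0 / 3.0, 14.1344725), 200000)));
    return 0;
}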
It seems like |z(a + b I)| is bounded away from zero when a doesn't equal 1/2. Based on these plots and a few others, I'm fairly certain the Riemann Hypothesis is true. The person who actually proves the hypothesis will be as famous as Andrew Wiles (he proved Fermat's Last Theorem). By the analytic convergence theorem, we can get the slope: is Re(z'(x + q I)) > 0 for x > 1/2 and Re(z'(x + q I)) < 0 for x < 1/2?
May 4 2002 -- The first ten billion zeroes are on the critical line, 1/2.
Empirical Bayes analysis of sequencing-based transcriptional profiling without replicates
BMC Bioinformatics. 2010; 11: 564.
Recent technological advancements have made high throughput sequencing an increasingly popular approach for transcriptome analysis. Advantages of sequencing-based transcriptional profiling over
microarrays have been reported, including lower technical variability. However, advances in technology do not remove biological variation between replicates and this variation is often neglected in
many analyses.
We propose an empirical Bayes method, titled Analysis of Sequence Counts (ASC), to detect differential expression based on sequencing technology. ASC borrows information across sequences to establish
prior distribution of sample variation, so that biological variation can be accounted for even when replicates are not available. Compared to current approaches that simply test for equality of
proportions in two samples, ASC is less biased towards highly expressed sequences and can identify more genes with a greater log fold change at lower overall abundance.
ASC unifies the biological and statistical significance of differential expression by estimating the posterior mean of log fold change and estimating false discovery rates based on the posterior
mean. The implementation in R is available at http://www.stat.brown.edu/Zwu/research.aspx.
Background
Recent technological advancements have made high throughput sequencing an increasingly popular approach for transcriptome analysis. Unlike microarrays, enumeration of transcript abundance with
sequencing technology is based on direct counts of transcripts rather than relying on hybridization to probes. This has reduced the noise caused by cross-hybridization and the bias caused by the
variation in probe binding efficiency. Sequencing-based transcription profiling does have other challenges. For example, whole transcript analysis produces data with transcript length bias [1]. Other
biases, including GC content, have also been reported [2]. Nonetheless, sequencing based expression analysis has been shown to be more robust and have higher resolution compared to microarray
platforms [3]. Some researchers have predicted that it will eventually replace microarrays as the major platform for monitoring gene expression [4]. The importance of replicates is well recognized in
microarray analysis [5] and it is now standard practice to include biological replicates under each experimental condition. However, as of now, sequencing-based gene expression studies often do not
include replicates [6-8], posing the question of whether the biological variation is, or can be, adequately addressed.
For illustration, we use data from Illumina Digital Gene Expression (DGE) tag profiling in this paper. However, our statistical methodology, and its implementation in R, are general for all
sequencing-based technologies that quantify gene expression as counts instead of continuous measurements such as probe intensity in microarrays. In DGE, the 3' end of transcripts with a poly-A tail
are captured by beads coated with oligo dT. Two restriction enzymes, NlaIII and MmeI, are used to digest the captured transcripts, generating a 21-base fragment starting at the most 3' NlaIII site.
The 21-base fragments are sequenced to quantify the transcriptome. Consider two samples in a comparison and let X[1 ]and X[2 ]be the counts of a particular sequence tag in the two samples. The most
common approach is to consider the counts as realizations of binomial distributions B(N[i], π[i]), i = 1, 2, where N[i] is the total number of sequences in a sample, representing sequencing depth. A
statistical test for π[1 ]= π[2 ]can be conducted. The classical Z-test using the Gaussian approximation to the binomial distribution is proposed for the Serial Analysis of Gene Expression (SAGE)
data [9,10] and recently applied to DGE and other sequencing data [11-13], and Fisher's exact test has also been proposed [14]. In other technologies, sequence counts may have to be combined at
either the exon or full transcript level to form the counts X[1 ]and X[2].
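For reference, one standard form of that test statistic, sketched in C++ (a pooled-variance two-proportion Z-test; the cited papers differ in details such as continuity corrections, so this is an illustration rather than the exact statistic used in any one of them):

#include <cmath>

// Two-sample Z statistic for H0: pi1 == pi2 with pooled variance (a sketch;
// x1, x2 are tag counts for one gene and N1, N2 the library sizes).
double z_stat(double x1, double N1, double x2, double N2) {
    double p1 = x1 / N1, p2 = x2 / N2;
    double p = (x1 + x2) / (N1 + N2);   // pooled proportion under H0
    return (p1 - p2) / std::sqrt(p * (1.0 - p) * (1.0 / N1 + 1.0 / N2));
}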
The test for H[0 ]: π[1 ]= π[2 ]can be performed without replicates. However, rejection of the H[0 ]hypothesis simply implies difference between the two samples. Unless the proportion of a gene in
the transcriptome is the same for all samples under the same condition (lack of within-class variation), we can not generalize the difference between two samples to the difference between two
classes. The within-class biological variation among replicates leads to overdispersion in binomial or Poisson models. Models accounting for overdispersion, such as the beta-binomial, have been
introduced for the analysis of SAGE data when several replicates within each class are available [15,16]. Robinson and Smyth [17] use a negative binomial model and squeeze tag-wise dispersion towards
a common dispersion, estimated using all tags, with a weighted likelihood approach that yields an EB-like solution. edgeR [18], a Bioconductor package implementing this method, has been applied to
both DGE and RNA-seq data with replicates. However, since replicates are still rare in high throughput sequencing, many researchers have been relying on simple tests of equal proportion with multiple
testing correction. Another drawback in some current analysis methods, especially those applied to the no-replicate situation, is the use of Gaussian approximation of binomial distribution [11,19],
which does not work well with data that include low count numbers. In transcriptome analysis, due to the depth of sequencing, the majority of genes have low counts. Relying on Gaussian distribution
often gives highly expressed genes favorable statistical power, such that genes that have a lower expression but exhibit greater extent of differential expression between samples are less likely to
be discovered.
In this paper we present an empirical Bayes method, titled Analysis of Sequence Counts (ASC), to estimate the log fold change of transcription between two samples. We borrow information across
sequences to estimate the hyper parameters representing the normal biological variation among replicates and the distribution of a transcriptome. The statistical model does not rely on Gaussian
approximation of the binomial distribution for all tags and requires no special treatment of 0 counts. Differential expression is computed in the form of a shrinkage estimate of log fold change. This
estimate is the basis for ranking genes. We also compute the posterior probability that the log fold change is greater than a biologically relevant threshold chosen by the user. In contrast to
sorting genes simply by p-values, we focus on the biological significance (represented by the posterior expectation of log fold change) and provide an uncertainty measure in the form of posterior probabilities.
Modeling biological variation
It has been reported that the noise in gene expression by sequencing depends on expression level, as observed in microarray data [19]. It has been widely observed that the scatter plot of the log reads-per-million (rpm) between samples has a wider spread for lower average counts, as shown in Figure 1A. This is shown by the relationship between the empirical variance of log rpm across replicates and the average of log rpm. For example, Stolovitzky et al [19] binned genes whose average rpm are closest and computed the sample standard deviation (SD) of genes within each bin, and reported higher SD in bins with smaller rpm. However, the sample SD is only expected to be a good estimate of the biological variation of log expression when the Gaussian approximation works well. We conducted a simulation in which the biological variation of log expected rpm is constant, and the observed rpm is generated from a Poisson distribution. The sample SD of log rpm, as shown in Figure 1B, also appears to be inflated for low expression genes. This demonstrates that the inflated variance in observed log rpm does not necessarily imply higher biological variation, but often is a result of poor approximation of a binomial random variable by a Gaussian distribution. Figure 2 shows quantile-quantile plots of the differences of observed log rpm (i.e., the observed log fold change) in T. pseudonana data (see Methods) at various average expression levels. The straight lines in all plots have the same slope, suggesting stable biological variation. The fact that most of the points stay on the straight line also confirms that a Gaussian approximation is a reasonable choice for the biological variation.
Scatter plot of the log[10 ]rpm in two samples. A. Scatter plot of the log[10 ]rpm in two samples. The greater spread at lower counts suggests higher variance. B. Sample variance of log[10 ]rpm from
a simulation with constant biological variation shows that ...
Quantile quantile (QQ) plots of the differences of log rpm (log(p[1]) - log(p[2])) confirming the Gaussian distribution as a reasonable approximation of the biological variation. A.QQ plots of log(p
[1]) - log(p[2]) for genes with average log rpm within 2 ± ...
Distribution of expression levels in a transcriptome
As observed in both microarray data and sequencing-based transcriptome profiling, genes can differ by orders of magnitude in their expression levels, ranging from less than 1 per million to thousands
per million and the majority of genes have relatively low counts. Tags with 0 counts cause problems in statistical analyses that take a direct log transformation and some investigators have had to
develop special treatments for those genes [19]. In Figure 3 we show the empirical distribution of the average log rpm, defined as $[\log_{10}\{\max(x_1, 0.5)/N_1\} + \log_{10}\{\max(x_2, 0.5)/N_2\}]/2$.
This is a highly skewed distribution even in the log scale. The skewness motivates us to use a shifted exponential distribution as the prior distribution for the average expression level.
Histogram of the average log[10 ]rpm between the A and B samples. Histogram of the average log[10 ]rpm between the A and B samples. The smooth curve shows the probability density function of the
fitted shifted exponential distribution.
Results and Discussion
We applied ASC to transcription profiles of the diatom Thalassiosira pseudonana under two culturing conditions, measured by DGE, and computed the posterior expectation of log fold change for all
genes. Figure 4 compares the shrinkage estimate with the "apparent log fold change" based on sample proportions. In order to display all tags, including those with count 0, in log scale, we define the apparent log ratio as $\log_{10}\left[\frac{\max(x_1, 0.5)/N_1}{\max(x_2, 0.5)/N_2}\right]$. We also define the average proportion for each gene as the geometric mean of the two proportions, with a similar adjustment at 0. This adjustment at 0 is for the completeness of the visual presentation, and is not done in the ASC analysis. The estimated fold changes for genes with high counts are almost the same as the apparent fold changes, but genes with sparse counts are shrunk more aggressively (Figure 4). This is a desirable property since the coefficient of variation of the binomial distribution decreases with the expectation. This implies that when the expected count is small, it is much easier to produce counts with apparently large fold changes.
Shrinkage estimate of log fold change. Shrinkage estimate of log fold change from ASC plotted against apparent log fold change for all genes. The apparent log fold change is defined as $\log_{10}\{(x_1^*/N_1)/(x_2^*/N_2)\}$ where $x^* = \max(x, 0.5)$ to avoid logarithm ...
We estimated the posterior mean of log fold change and the posterior probability that there is greater than two fold change for a given tag. There are 1050 genes with posterior probability greater
than 0.9 that the fold change is greater than 2. The average log rpm of those tags spreads from less than 0.23 (1.7 rpm) to 3.6 (10,000 rpm) and most have approximately 1 (10 rpm). Figure 5A highlights these genes in a scatter plot of observed log rpm. Figure 5B shows the distribution of average rpm of these genes, suggesting that the majority of genes displaying differential expression by ASC analysis have moderate counts, even though we have aggressively shrunk the estimate of log fold change for tags with low counts.
Differentially expressed genes identified by ASC. A. Scatter plot of the log[10] rpm in two samples. Differentially expressed genes with posterior probability of fold change greater than 2 are highlighted in red. B. Distribution of average rpm for the highlighted ...
Comparison with other methods
All of the genes identified as differentially expressed by ASC have very small p-values if a simple test of equal proportions is performed. In fact, a simple Z-test identifies 3479 differentially expressed genes at significance level 0.05 with Bonferroni correction, as highlighted in red in Figure 6A. Fisher's exact test gives almost the same results except for adding a few more genes with lower total tag counts. We highlight the top 1000 genes with the smallest p-values in blue, and it is clear that genes with lower average expression are not identified. Figure 6B shows that the majority of genes identified to have differential expression have much higher average counts compared to those identified by ASC. We have also applied a software package, DGEseq [11], recently developed specifically for the analysis of digital gene expression data as in this example. DGEseq identified more than 7000 genes with estimated FDR less than 0.01, and also favors transcripts with higher average rpm. The results are included in Additional file 1, Figure S1.
Differentially expressed genes identified by Z-test. A. Scatter plot of the log[10] rpm in the A and B samples. Differentially expressed genes with Bonferroni-adjusted p-value less than 0.05 highlighted in red and the smallest 1000 of which highlighted in ...
ASC clearly prioritizes genes differently from the Z-test or DGEseq and finds more genes with modest expression but greater fold change as differentially expressed. In order to show that the top-ranked genes in ASC are associated with higher biological significance, we obtained DGE data from an experiment comparing expression from two genotypes with 4 replicates each [3] (GSE10782). Hoen et al [3] compute Bayes Error using SAGE BetaBin [16], a method that takes into account biological variation between replicates. The Bayes Error in SAGE BetaBin represents the "superposition" between the estimated posterior distributions of the classes in comparison and is used to rank the genes for differential expression. To evaluate how various methods work when replicates are not available, we chose the first sample of each genotype, obtained lists of the top 1000 tags, and compared the Bayes Error for those tags based on the full data. Among the top 1000 tags found by ASC, 320 have an estimated Bayes Error of approximately 0, significantly more than those found by the other statistics (Table 1).
Overlap between the top 1000 genes identified by different methods and the SAGE BetaBin ranking.
We have also used edgeR [18], a moderated statistical test for sequencing data with replicates [17,20], to analyze the full dataset. For the no-replicate case, we first used the two samples as replicates to estimate the dispersion parameter in edgeR and then estimated differential expression given that dispersion. We compared the overlap between the top-ranked differentially expressed genes identified by edgeR on the full data and the top-ranked genes from the data without replicates (using ASC, edgeR, DGEseq, Z and Fisher's exact test). The agreement between ASC and full-data edgeR is almost identical to the agreement between no-replicate and full-data edgeR, while the latter is expected since it is based on the same methodology. About a third of the top 100 genes identified in the full data are recovered in the top 100 genes ranked by ASC. In contrast, fewer than 2 of the top 100 genes from the full-data analysis made it to the top 100 list by the other methods. The comparison is summarized in Table 2.
Overlap between the top 100 or top 1000 differentially expressed genes identified by edgeR on full data and by other statistics on data without replicate
Why is there so little overlap between the top genes by Z-test on the two-sample comparison and the top genes from the edgeR analysis on the full data set? Strikingly, many genes with extreme p-values in a Z-test have small fold changes. This is because there is greater statistical power to detect even subtle changes in gene expression when the counts are higher. From the Gaussian approximation to the sample proportion, $p \mid \pi \sim N(\pi, \pi(1-\pi)/N)$ approximately, we have for large $N\pi$ that the log sample proportion is also approximately Gaussian, $\log(p) \mid \pi \sim N\{\log(\pi) - (1-\pi)/(2N\pi),\ (1-\pi)/(N\pi)\}$. Since the expected count $N\pi$ varies greatly from a few to over a hundred thousand, and the variance of the log sample proportion decreases sharply with the increase of the expected count, it is clear that statistical power is biased towards genes with higher counts. This also causes the bias of higher power towards longer transcripts in full-transcript analysis. An extreme p-value in such a test only suggests that the proportions of a transcript are significantly different between the two samples in comparison, not whether the difference is beyond what is reasonable between biological replicates. Figure 7 overlays the distributions of apparent log fold changes for the top 1000 genes identified by either a simple Z-test or ASC. Since ASC identifies a gene as differentially expressed only when a shrinkage estimate of log fold change is above a certain level, the apparent log fold changes from sample proportions are always away from 0. On the other hand, a simple Z-test can identify many genes with very small changes. It appears that DGEseq fails to adequately account for biological variation and the results from DGEseq are very similar to those from a Z-test (Additional file 2, Figure S2).
Histogram of the apparent fold change of the top 1000 genes found by Z-test or ASC.
Conclusions
We present a simple hierarchical model for sequencing-based gene expression data (e.g. DGE, RNA-seq, etc.) that provides a shrinkage estimate of differential expression in the form of the posterior mean of log fold change. Even in experiments lacking replicates, we take advantage of the large number of sequences quantified in the same experiment and establish a prior distribution of the difference between conditions. The differential expression of a gene is evaluated based on the posterior expectation of log fold change. This estimate takes into account the increased uncertainty for genes with smaller counts (demonstrated by more aggressive shrinking in Figure 4) yet still allows the identification of differential expression among genes with lower expression. Our measure of statistical uncertainty is the posterior probability that the differential expression is beyond a given threshold, thus the inference on differential expression avoids the problem of conflicting "statistical significance" versus "biological significance" seen in Z-tests.
It is not uncommon to use hierarchical models for gene expression data. Several models used in microarray data analysis [21,22] add another level of hierarchy by assuming that only a fraction of genes may have been affected by any treatment, and the rest have absolutely no change. Therefore, δ|Z = 1 ~ N(0, τ²) and δ = 0|Z = 0 with P(Z = 1) = p[0]. This essentially assumes that the prior distribution of δ is a zero-inflated Gaussian distribution. We show that using a simple Gaussian prior provides good shrinkage without the extra layer of hierarchy, which greatly simplifies the computation.
In biological terms, our model means that the mean gene expression levels between two populations are never absolutely equal for any gene. However, the difference for most genes are small. We use
posterior expectation as the estimate of the magnitude of difference. McCarthy and Smyth [23] showed that testing the differential expression relative to a biologically meaningful threshold
identifies more biologically meaningful genes. We take a similar approach and estimate the posterior probability that the differential expression is greater than a threshold. Therefore, we avoid
genes with very subtle differential expression even if that difference is statistically significant between the two samples in comparison. The genes identified by ASC include those that are modestly
expressed as well as highly expressed.
Methods
DGE data generation
The diatom Thalassiosira pseudonana (Strain 1335 from the Center for the Culture of Marine Phytoplankton) was grown axenically in 24 hr light at 14°C in f/2 media [24,25] made with Sargasso Sea
water. Treatments consisting of phosphorus-limited medium (0.4 μM PO[4]) and phosphorus-replete medium (36 μM PO[4]) were grown in triplicate and are herein referred to as treatments A and B,
respectively. Equal volumes of cell biomass from each replicate were pooled for the A or B treatments 96 hours after inoculation and harvested by gentle filtration. Filters were immediately frozen in
liquid nitrogen and stored at -80°C.
Total RNA was extracted using the RNeasy Midi Kit (Qiagen), following the manufacturer's instructions with the following changes: RNA samples were processed with Qiashredder columns (Qiagen) to
remove large cellular material, and DNA was removed with an on-column DNase digestion using RNase-free DNase (Qiagen). A second DNA removal step was conducted using the Turbo DNA-free kit (Ambion, Austin, TX, USA). The RNA was quantified in triplicate using the Mx3005 Quantitative PCR System (Stratagene) and the Quant-iT RiboGreen RNA Assay Kit (Invitrogen) and was analyzed for integrity
by gel electrophoresis. Total RNA was sent to Illumina (Hayward, CA) and they constructed digital gene expression (DGE) libraries with NlaIII tags following their protocol. Sequencing libraries for
NlaIII digested tags were constructed by Illumina and sequenced on their Genome Analyzer. 12,525,833 tags were sequenced from the A library and 13,431,745 tags were sequenced from the B library.
Hierarchical model for gene counts
For each transcript, we assume the observed sequence counts follow a binomial distribution given its expected expression under a biological condition. For a sequencing run that yields total count N for all sequence fragments, the expected count for gene i is expressed as Nπ, where π is the expected proportion of this gene in the transcriptome. For the two samples in the comparison, we observe the counts x[1] and x[2] while

$x_i \mid \pi_i \sim \text{Binomial}(N_i, \pi_i), \quad i = 1, 2.$

Many researchers simply test π[1] = π[2] and perform a Bonferroni correction to account for multiple testing. We reparametrize π[1] and π[2] as follows:

$\log(\pi_1) = \lambda + \delta/2, \qquad \log(\pi_2) = \lambda - \delta/2.$

Here δ has the interpretation of log fold change in gene expression, and λ is a nuisance parameter representing the average (log) expression.

We assume prior distributions

$\delta \sim N(0, \tau^2), \qquad \lambda \sim \text{Exp}(\alpha, \lambda_0),$

where Exp represents a shifted exponential distribution with rate α and shift λ[0].

The posterior distribution of the differential expression is therefore

$p(\delta \mid x_1, x_2) \propto \phi(\delta; 0, \tau^2) \int B(x_1; N_1, e^{\lambda+\delta/2})\, B(x_2; N_2, e^{\lambda-\delta/2})\, f(\lambda; \alpha, \lambda_0)\, d\lambda,$

where B denotes the binomial probability mass function, φ the Gaussian density, and f the shifted exponential density.

We obtain the posterior mean $\tilde{\delta} = E[\delta \mid x]$ given the gene counts as an estimate of differential expression. We refer to $\tilde{\delta}$ as the shrinkage estimate of log fold change, which is sufficient to rank the genes. To evaluate the statistical significance, we compute the posterior probability P(|δ| > Δ[0] | x), where Δ[0] is a user-defined effect size of biological significance. There is no closed-form expression for the posterior distribution and we use numerical integration for the evaluation of the posterior mean and probability.
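A minimal sketch of that numerical integration in C++ (my own illustration, not the ASC implementation, which is in R; the model follows the equations above, while the grid limits, step sizes, and the values in main are arbitrary choices; delta here is on the natural-log scale, so divide by ln 10 for a log10 fold change):

#include <cmath>
#include <cstdio>

// log Binomial(x; N, pi) up to an additive constant in pi; log_pi = log(pi).
double log_binom(double x, double N, double log_pi) {
    return x * log_pi + (N - x) * std::log1p(-std::exp(log_pi));
}

// Posterior mean of delta on a (delta, lambda) grid, with
// log(pi1) = lambda + delta/2, log(pi2) = lambda - delta/2,
// delta ~ N(0, tau^2) and lambda ~ Exp(alpha, lam0).
double posterior_mean_delta(double x1, double N1, double x2, double N2,
                            double tau, double alpha, double lam0) {
    // rescale the likelihood by its value near the MLE to avoid underflow
    double lp0 = log_binom(x1, N1, std::log((x1 + 0.5) / N1)) +
                 log_binom(x2, N2, std::log((x2 + 0.5) / N2));
    double num = 0.0, den = 0.0;
    for (double d = -5.0; d <= 5.0; d += 0.02) {
        double marg = 0.0;                         // integral over lambda
        for (double l = lam0; l <= lam0 + 15.0; l += 0.02) {
            if (l + d / 2 >= 0.0 || l - d / 2 >= 0.0) continue; // need pi < 1
            double lp = log_binom(x1, N1, l + d / 2) +
                        log_binom(x2, N2, l - d / 2) -
                        alpha * (l - lam0) - lp0;  // exponential prior kernel
            marg += std::exp(lp);
        }
        double w = marg * std::exp(-d * d / (2.0 * tau * tau)); // Gaussian prior
        num += d * w;
        den += w;
    }
    return num / den;
}

int main() {
    // hypothetical tag: 50 reads of 12.5M vs 180 of 13.4M; tau and alpha
    // below are made-up values, not the estimates from the paper
    double lam0 = std::log(0.5 / 13.4e6);
    double d = posterior_mean_delta(50, 12.5e6, 180, 13.4e6, 0.4, 1.0, lam0);
    std::printf("posterior mean of delta: %.3f (natural log scale)\n", d);
    return 0;
}

The result is shrunk: it lies between the apparent log fold change and zero, more aggressively so for tags with small counts.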
Estimation of hyper parameters
The observed log rpm has a very skewed distribution, motivating us to use a distribution with exponential decay. But the location of this distribution is shifted compared to an exponential distribution, with an unknown lower bound. One advantage of the exponential distribution is the closed-form expression of its cumulative distribution function. For λ ~ Exp(α, λ[0]), $F(\lambda) = 1 - e^{-\alpha(\lambda - \lambda_0)}$. From the sequence counts, we first compute the average log rpm between the two conditions and use these to obtain the empirical CDF $\hat{F}$. Thus for two quantiles q[1], q[2], we can obtain the empirical quantiles $\lambda_1 = \hat{F}^{-1}(q_1)$ and $\lambda_2 = \hat{F}^{-1}(q_2)$. Solving the equations

$1 - e^{-\alpha(\lambda_1 - \lambda_0)} = q_1, \qquad 1 - e^{-\alpha(\lambda_2 - \lambda_0)} = q_2$

gives the estimates

$\hat{\alpha} = -[\log(1-q_1) - \log(1-q_2)]/(\lambda_1 - \lambda_2), \qquad \hat{\lambda}_0 = \lambda_1 + \log(1-q_1)/\hat{\alpha}.$

We can also use the method of moments to estimate the rate without knowing the shift parameter, since the conditional expectation also has a closed form due to the lack-of-memory property. For a given 0 < q < 1,

$E[X \mid X > F^{-1}(q)] = F^{-1}(q) + 1/\alpha.$

Thus we can estimate α as $1/\{\bar{X}_{X>\hat{F}^{-1}(q)} - \hat{F}^{-1}(q)\}$, where $\bar{X}_{X>c}$ denotes the sample mean of the observations exceeding c. Our default setting is q[1] = 0.8 and q[2] = 0.9. The posterior mean E[δ|x] is not sensitive to the choice of q[1], q[2] (Additional file 3, Figure S3). Again, due to the lack-of-memory property, the probability densities of shifted exponential distributions with the same rate are proportional, thus the value of λ[0] does not affect the posterior distribution and does not need to be estimated.
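Transcribed into C++, the quantile-matching step might look like the following sketch (the crude order-statistic quantile rule is my simplification):

#include <algorithm>
#include <cmath>
#include <vector>

// Quantile-matching estimates for the shifted exponential prior
// Exp(alpha, lambda0), following the closed forms above.  avg_log_rpm
// holds the per-gene average log rpm; q1 < q2.
void fit_shifted_exp(std::vector<double> avg_log_rpm, double q1, double q2,
                     double& alpha_hat, double& lambda0_hat) {
    std::sort(avg_log_rpm.begin(), avg_log_rpm.end());
    auto quantile = [&](double q) {
        return avg_log_rpm[(size_t)(q * (avg_log_rpm.size() - 1))];
    };
    double lam1 = quantile(q1), lam2 = quantile(q2);
    alpha_hat = -(std::log(1.0 - q1) - std::log(1.0 - q2)) / (lam1 - lam2);
    lambda0_hat = lam1 + std::log(1.0 - q1) / alpha_hat;
}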
To estimate τ, the parameter representing the biological variation among replicates, we borrow information across genes. Although for any given gene we only observe one total count under each condition, and thus the true differential expression and the biological variation cannot be identified, we assume that the majority of genes are not affected by the treatment, an assumption found to be reasonable in microarray data in many experiments. We can model τ as a function of λ, but Figure 2 suggests that the biological variation is rather constant across expression levels. Thus we estimate one global parameter τ. We start with the Gaussian approximation of the binomial model, $p_1 = x_1/N_1 \mid \pi_1 \sim N(\pi_1, \pi_1(1-\pi_1)/N_1)$ approximately. Since the total count N is usually a very large integer, we also have, approximately, $\log(p_1) \mid \pi_1 \sim N\{\log(\pi_1) - (1-\pi_1)/(2N_1\pi_1),\ (1-\pi_1)/(N_1\pi_1)\}$. The variance of log(p[1]) decreases at rate 1/N as π increases and becomes negligible compared to the biological variation. Thus we simply estimate τ from the differences of log rpm for the genes with the highest average log rpm. We use the inter-quartile range instead of the sample standard deviation to avoid the influence of genes with extreme differential expression: $\hat{\tau} = \mathrm{IQR}[\log(p_1) - \log(p_2)]/\mathrm{IQR}[N(0,1)]$. In practice we use total counts above 1000 and this allows us to have several thousand genes (over 4000 in our example) for the estimation.
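And the corresponding step for τ, again only a sketch with a crude quantile rule:

#include <algorithm>
#include <vector>

// IQR-based estimate of tau.  diffs holds log(p1) - log(p2) for the
// high-count genes (total count above 1000, as in the paper); 1.349 is
// the inter-quartile range of the standard Gaussian.
double estimate_tau(std::vector<double> diffs) {
    std::sort(diffs.begin(), diffs.end());
    size_t n = diffs.size();
    double iqr = diffs[(size_t)(0.75 * (n - 1))] - diffs[(size_t)(0.25 * (n - 1))];
    return iqr / 1.349;
}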
Authors' contributions
ZW developed the statistical methodology, in consultation with BDJ and TAR, and drafted the manuscript. BDJ, TAR, STD and MAS designed the study that generated the DGE data, and contributed to
writing the manuscript. MM and LPW performed experiments that generated the DGE data. All authors have read and approved the final manuscript.
Supplementary Material
Additional file 1:
Figure S1. A. Scatter plot of the log[10] rpm in the A and B samples. Differentially expressed genes identified by DGEseq with estimated q-value (Storey FDR) less than 0.01 highlighted in red and the 1000 genes with the smallest q-values highlighted in blue. B. Distribution of average rpm for the highlighted red or blue genes.
Additional file 2:
Figure S2. Histogram of the apparent fold change of the top 1000 genes found by DGEseq or ASC.
Additional file 3:
Figure S3. Sensitivity of $\hat{\delta}$ to hyper-parameter estimation. Scatter plot of $\hat{\delta}_1$ and $\hat{\delta}_2$, based on q[1] = 0.8, q[2] = 0.9 and q[1] = 0.9, q[2] = 0.95, respectively. The maximum difference in estimated fold change is less than 0.04, indicating that $\hat{\delta}$ is not sensitive to the choice of q.
Acknowledgements
We thank the reviewers for their insightful comments and suggestions that greatly strengthened the manuscript. We thank A. Drzewianowski for her assistance with laboratory experiments. Funding was
provided by NSF OCE-0723677.
References
• Oshlack A, Wakefield M. Transcript length bias in RNA-seq data confounds systems biology. Biology Direct. 2009;4:14. doi: 10.1186/1745-6150-4-14.
• Dohm J, Lottaz C, Borodina T, Himmelbauer H. Substantial biases in ultra-short read data sets from high-throughput DNA sequencing. Nucleic Acids Research. 2008;36(16):e105. doi: 10.1093/nar/gkn425.
• Hoen P, Ariyurek Y, Thygesen H, Vreugdenhil E, Vossen R, de Menezes R, Boer J, van Ommen G, den Dunnen J. Deep sequencing-based expression analysis shows major advances in robustness, resolution and inter-lab portability over five microarray platforms. Nucleic Acids Research. 2008;36(21):e141. doi: 10.1093/nar/gkn705.
• Li B, Ruotti V, Stewart R, Thomson J, Dewey C. RNA-Seq gene expression estimation with read mapping uncertainty. Bioinformatics. 2010;26(4):493. doi: 10.1093/bioinformatics/btp692.
• Lee M, Kuo F, Whitmore G, Sklar J. Importance of replication in microarray gene expression studies: statistical methods and evidence from repetitive cDNA hybridizations. Proceedings of the National Academy of Sciences of the United States of America. 2000;97(18):9834. doi: 10.1073/pnas.97.18.9834.
• Cheng L, Lu W, Kulkarni B, Pejovic T, Yan X, Chiang J, Hood L, Odunsi K, Lin B. Analysis of chemotherapy response programs in ovarian cancers by the next-generation sequencing technologies. Gynecologic Oncology. 2010;117:159–169. doi: 10.1016/j.ygyno.2010.01.041.
• Marti E, Pantano L, Banez-Coronel M, Llorens F, Minones-Moyano E, Porta S, Sumoy L, Ferrer I, Estivill X. A myriad of miRNA variants in control and Huntington's disease brain regions detected by massively parallel sequencing. Nucleic Acids Research. 2010.
• Cui L, Guo X, Qi Y, Qi X, Ge Y, Shi Z, Wu T, Shan J, Shan Y, Zhu Z, Wang H. Identification of microRNAs involved in the host response to enterovirus 71 infection by a deep sequencing approach. Journal of Biomedicine and Biotechnology. 2010;2010:425939. doi: 10.1155/2010/425939.
• Kal A, Van Zonneveld A, Benes V, Van Den Berg M, Koerkamp M, Albermann K, Strack N, Ruijter J, Richter A, Dujon B, et al. Dynamics of gene expression revealed by comparison of serial analysis of gene expression transcript profiles from yeast grown on two different carbon sources. Molecular Biology of the Cell. 1999;10(6):1859.
• Schaaf G, van Ruissen F, van Kampen A, Kool M, Ruijter J. Statistical comparison of two or more SAGE libraries. Methods in Molecular Biology. 2008;387:151–168.
• Wang L, Feng Z, Wang X, Wang X, Zhang X. DEGseq: an R package for identifying differentially expressed genes from RNA-seq data. Bioinformatics. 2010;26:136. doi: 10.1093/bioinformatics/btp612.
• Nygaard S, Jacobsen A, Lindow M, Eriksen J, Balslev E, Flyger H, Tolstrup N, Møller S, Krogh A, Litman T. Identification and analysis of miRNAs in human breast cancer and teratoma samples using deep sequencing. BMC Medical Genomics. 2009;2:35.
• Hashimoto S, Qu W, Ahsan B, Ogoshi K, Sasaki A, Nakatani Y, Lee Y, Ogawa M, Ametani A, Suzuki Y, et al. High-resolution analysis of the 5'-end transcriptome using a next generation DNA sequencer. PLoS One. 2009;4:e4108. doi: 10.1371/journal.pone.0004108.
• Bloom J, Khan Z, Kruglyak L, Singh M, Caudy A. Measuring differential gene expression by short read sequencing: quantitative comparison to 2-channel gene expression microarrays. BMC Genomics. 2009;10:221. doi: 10.1186/1471-2164-10-221.
• Baggerly K, Deng L, Morris J, Aldaz C. Differential expression in SAGE: accounting for normal between-library variation. Bioinformatics. 2003;19(12):1477. doi: 10.1093/bioinformatics/btg173.
• Vêncio R, Brentani H, Patrão D, Pereira C. Bayesian model accounting for within-class biological variability in Serial Analysis of Gene Expression (SAGE). BMC Bioinformatics. 2004;5:119. doi: 10.1186/1471-2105-5-119.
• Robinson M, Smyth G. Moderated statistical tests for assessing differences in tag abundance. Bioinformatics. 2007;23(21):2881. doi: 10.1093/bioinformatics/btm453.
• Robinson M, McCarthy D, Smyth G. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010;26:139. doi: 10.1093/bioinformatics/btp616.
• Stolovitzky G, Kundaje A, Held G, Duggar K, Haudenschild C, Zhou D, Vasicek T, Smith K, Aderem A, Roach J. Statistical analysis of MPSS measurements: application to the study of LPS-activated macrophage gene expression. Proceedings of the National Academy of Sciences of the United States of America. 2005;102(5):1402. doi: 10.1073/pnas.0406555102.
• Robinson M, Smyth G. Small-sample estimation of negative binomial dispersion, with applications to SAGE data. Biostatistics. 2008;9(2):321. doi: 10.1093/biostatistics/kxm030.
• Lonnstedt I, Speed T. Replicated microarray data. Statistica Sinica. 2002;12:31–46.
• Smyth GK. Linear models and empirical Bayes methods for assessing differential expression in microarray experiments. Statistical Applications in Genetics and Molecular Biology. 2004;3:Article 3. doi: 10.2202/1544-6115.1027.
• McCarthy D, Smyth G. Testing significance relative to a fold-change threshold is a TREAT. Bioinformatics. 2009;25(6):765. doi: 10.1093/bioinformatics/btp053.
• Guillard R, Ryther J. Studies of marine planktonic diatoms. I. Cyclotella nana Hustedt, and Detonula confervacea (Cleve) Gran. Canadian Journal of Microbiology. 1962;8:229. doi: 10.1139/m62-029.
• Guillard R. Culture of phytoplankton for feeding marine invertebrates. Culture of Marine Invertebrate Animals. 1975. pp. 29–60.
Optimal paths in cardiology
Seminar Room 1, Newton Institute
Mathematical concepts will be discussed that are shared by certain approaches for modelling cardiac waves of electrical activity potential with those that are used in the modelling of other types of
waves and also with the methods of imaging science using template matching. The main concept to be discussed is the derivation of optimal path dynamics obtained by applying reduction by symmetry to
Hamilton's principle for evolution on the tangent space of smooth invertible maps possessing smooth inverses (diffeomorphisms). This concept is shared by cardiac waves, fluid dynamics, shape
dynamics, shallow water waves, imaging science and solitons. Several applications will also be discussed.
Orthogonal Polynomials and Continued Fractions: From Euler’s Point of View (Encyclopedia of Mathematics and its Applications)
By Sergey Khrushchev

This new and exciting historical book tells how Euler introduced the idea of orthogonal polynomials and how he combined them with continued fractions, as well as how Brouncker's formula of 1655 can be derived from Euler's efforts in Special Functions and Orthogonal Polynomials. The most interesting applications of this work are discussed, including the great Markoff's Theorem on the Lagrange spectrum, Abel's Theorem on integration in finite terms, Chebyshev's Theory of Orthogonal Polynomials, and very recent advances in Orthogonal Polynomials on the unit circle. As continued fractions become more important again, in part due to their use in finding algorithms in approximation theory, this timely book revives the approach of Wallis, Brouncker and Euler and illustrates the continuing significance of their work. A translation of Euler's famous paper 'Continued Fractions, Observation' is included as an appendix.
Proceedings Abstracts of the Twenty-Third International Joint Conference on Artificial Intelligence
Iterated Boolean Games / 932
Julian Gutierrez, Paul Harrenstein, Michael Wooldridge
Iterated games are well-known in the game theory literature. We study iterated Boolean games. These are games in which players repeatedly choose truth values for Boolean variables they have control
over. Our model of iterated Boolean games assumes that players have goals given by formulae of Linear Temporal Logic (LTL), a formalism for expressing properties of state sequences. In order to model
the strategies that players use in such games, we use a finite state machine model. After introducing and formally defining iterated Boolean games, we investigate the computational complexity of
their associated game-theoretic decision problems as well as semantic conditions characterising classes of LTL properties that are preserved by pure strategy Nash equilibria whenever they exist.
X101: Learning Strategies for Math - Student Academic Center
X101: Learning Strategies for Math
Course Overview
2 credits, semester long, graded course. Open to all students as long as they are also enrolled in any section of Math M118: Finite Mathematics, during the same semester they are enrolled in X101.
Course Description
The long-term goal of this course is to support students in their Finite Mathematics M118 course. However, the course should not be viewed as re-teaching the “content” of M118. Rather, the course is
designed to accomplish the above stated goal by helping students to become more active, independent problem solvers interested in truly understanding the mathematical concepts in contrast to a
passive approach that relies on memorization, learning step-by-step procedures, and outside authority. In addition to the regularly scheduled X101 class meeting, students engage in several personal
conferences with the X101 instructor and the Undergraduate Teaching Intern (UGTI), and are encouraged to attend M118 evening learning sessions with the UGTI at least once a week. The focus of these
M118 evening learning sessions is on problem solving and thinking about how to do the problem solving necessary to solve M118 problems.
Course Structure
Course activities guide students to focus more on the thought processes being used rather than focusing entirely on finding the “right” answer to the problem. Students will be encouraged to become
aware of, reflect upon, and consciously direct their thinking and problem-solving efforts. Another way to think about what is covered by this course is to think about success in M118 coming from not
only “doing” math, but also “thinking about doing” math and “questioning what you are doing” in math. This is done primarily through expressing mathematical ideas orally and in writing as well as
describing how they reached an answer or the difficulties they encountered while trying to solve a problem. In addition, students’ beliefs about the nature of mathematics and themselves as learners
are addressed, as well as math study skills and strategies for coping with math/test anxiety and various lecture styles.
Course Objectives
Students have the opportunity to develop skills in the following areas:
1. Understanding of Math Material
• Systematically and actively reading the math textbook so that one can recognize, understand, and remember the explicit and implicit main ideas and supporting details.
• Critically understanding and synthesizing the relationships between math ideas and the organization of the math text.
• Integrating one’s lecture and textbook notes so that one can prepare for tests.
• Employing “questioning” and organized problem-solving techniques, which encourage one to become aware of, reflect upon, and consciously direct one’s thinking and problem-solving efforts.
• Diagnosing and monitoring one’s mental processes while reading and learning math material and then writing personal reflections based on one’s learning.
• Engaging in the self-discovery of knowledge — e.g. refining effective problem-solving techniques and rejecting ineffective ones.
• Working with others to develop collaborative learning skills
2. Communication of Math Material
• Taking lecture notes that effectively communicate to oneself and to others the material presented by a lecturer.
• Writing about mathematical concepts and relational ideas
• Orally presenting mathematical concepts and relational ideas in conversational language, i.e. to “speak” mathematics
• Explicitly explaining one’s thought processes as one tries to solve math problems.
3. Math Behavior, Attitudes, and Beliefs
• Addressing one's own beliefs about the nature of mathematics and learning and oneself as a math learner, and if necessary, changing and developing behaviors and beliefs that facilitate mathematics learning.
• Persevering when having trouble understanding and figuring out something in one way and trying different problem-solving methods, i.e. remaining actively involved with the problem.
• Developing patience to employ a step-by-step procedure to learn and solve math problems.
• Developing a faith in persistent systematic analysis.
• Developing a concern for accuracy and an avoidance of wild guessing.
• Managing time effectively to keep pace with academic demands.
• Feeling more comfortable with the format of objective and subjective exams.
• Coping more effectively with stress, test anxiety, and math anxiety.
Who Benefits from Taking Education X101
• Students that are likely to need to take more math classes beyond M118
• Students who never seem to do as well in math classes as they feel they are capable of doing
• Students who find themselves working harder in math classes than their other classes, but do not receive grades as high as they would like
• Students who likely will be enrolling in other quantitative classes like accounting, economics, statistics, chemistry, physics, computer programming
• Students who would not be satisfied with a “C” in M118
• Students who tend to rely on memorizing steps to solve problems rather than on understanding concepts
• Students who tend to rely on calculators to solve math problems
• Students who would like to optimize their study skills
• Students who would like additional help working in a small classroom with opportunities for more structured one-on-one math homework problem-solving
• Students who want to know what it takes to do well at the college level
• Students who tend to procrastinate and have poor self discipline
Typically, over 50% of the students who complete X101 earn “A’s” or “B’s” in Finite Math, M118, and fewer than 10% receive “D’s” or “F’s”.
Universal Transverse Mercator System Projection
For the Universal Transverse Mercator System, the globe is divided into 60 zones, each spanning six degrees of longitude. Each zone has its own central meridian from which it spans 3 degrees west and
3 degrees east. X and Y coordinates are recorded in meters. The origin of each zone is the equator and its central meridian. The value given to the central meridian is a false easting of 500,000. In
the continental United States, the North American Datum of 1927 (NAD27) and the Clarke spheroid are most commonly used.
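As an illustration of the zone arithmetic described above, a small C++ sketch (the zone-numbering formulas are the standard ones; the sample longitude is my own choice):

#include <cmath>
#include <cstdio>

// UTM zone bookkeeping only (not the full map projection): which of the
// 60 six-degree zones a longitude falls in, and that zone's central meridian.
int utm_zone(double lon_deg) {            // longitude in [-180, 180)
    return (int)std::floor((lon_deg + 180.0) / 6.0) + 1;
}
double central_meridian(int zone) {       // degrees east
    return -183.0 + 6.0 * zone;
}

int main() {
    double lon = -71.06;                   // e.g. Boston
    int z = utm_zone(lon);
    std::printf("zone %d, central meridian %.0f deg, false easting 500000 m\n",
                z, central_meridian(z));   // prints: zone 19, central meridian -69 deg
    return 0;
}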
A datum is a set of parameters defining a coordinate system and a set of control points whose geometric relationships are known either through measurements or calculation (Dewhurst, 1990). All datums
are based on a spheroid, which approximates the shape of the earth.
The North American Datum of 1927 uses the Clarke spheroid of 1866 to represent the shape of the earth. The origin of this datum is a point on the earth referred to as Meades Ranch in Kansas. Many
NAD27 control points were calculated from observations taken in the 1800s. These calculations were done manually and in sections over many years, therefore errors varied from station to station.
Many technological advances in surveying and geodesy since the establishment of NAD27--electronic theodolites, GPS satellites, Very Long Baseline Interferometry, and Doppler systems--revealed
weaknesses in the existing network of control points. The North American Datum of 1983 is based on earth and satellite observations, using the GRS80 spheroid. The origin of the datum is the
earth's center of mass. This affects the surface location of all latitude-longitude values enough to cause locations of previous North American control points to shift, sometimes as much as 500
Robert Kantrowitz, Ph.D., Professor of Mathematics
Areas of Expertise: analysis and commutative Banach algebras.
Robert Kantrowitz, a 1982 graduate of Hamilton College, earned a master's and doctorate from Syracuse University. He returned to join the Hamilton faculty in 1990 and has served as Mathematics
Department chair since 2010.
His research is in analysis, with particular focus on Banach algebras, automatic continuity, and operator theory, and his teaching interests include analysis, linear algebra, and calculus.
Kantrowitz's latest article, "Series that converge absolutely but don't converge," appeared in The College Mathematics Journal. His other recent work has focused on modeling projectile motion and on
stochastic matrices. The article "Optimization of projectile motion in three dimensions" appeared in Canadian Applied Mathematics Quarterly, and "A fixed point approach to the steady state for
stochastic matrices" is slated to appear in a forthcoming issue of Rocky Mountain Journal of Mathematics.
Summary: Introduction to Numerical Analysis - Math 104B
Summer 2011
Monday to Thursday, 12:30-13:35 pm, Girvetz 1116
Instructor: Juan M. Molera
Office: South Hall, 6522.
E-mail: molera@math.uc3m.es
URL: http://gauss.uc3m.es/web/personal web/molera/math104B.html
Office Hours: Tuesday & Thursday, 10:00-11:30 am.
Textbook: Numerical Analysis, by Richard L. Burden and J. Douglas
Faires, 8th edition.
Course description: This is the second part of a three-quarters introductory course on Numerical Analysis. The focus this quarter will be the numerical solution of linear systems of equations and eigenvalue problems. We will also study families of orthogonal polynomials and their approximation properties. Although the emphasis will be on applications, the course will have a strong theoretical component.
Prerequisites: Math 5 A, B, and C or equivalent. Knowledge of a computer
language suitable for numerical computing: FORTRAN, C, C++, or Matlab.
Assignments and grading: Homework will be assigned on Thursday, and
will be collected at the beginning of the class on the following Thursday. | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/070/4184927.html","timestamp":"2014-04-21T06:02:12Z","content_type":null,"content_length":"8172","record_id":"<urn:uuid:48ff21f0-52ba-4340-ae4a-ce22f8170207>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
multiplying matrices
I've written a program that will multiply two matrices, and as of right now it outputs the answers all in a column. I was attempting to be able to output the answers in another matrix, matrix C, but I have no idea how to even get around to doing that. I did all I knew, which was to declare the matrix; other than that I'm stuck.
my code thus far:
#include <iostream> // library for basic input/output
using namespace std;
//declare functions
void matrixA();
void matrixB();
void multiply();
//declare variables
int arrayA[100][100]; // matrix A
int arrayB[100][100]; // matrix B
int answer[100][100]; // matrix C, the product (not filled in yet)
int row = 2, col = 2; // 2 x 2 matrices for now
int sum;
int main()
{
    matrixA();
    matrixB();
    multiply();
    return 0;
}
/*------------------------------My Functions----------------------------*/
void matrixA()
{
    //creates (reads in) the first matrix
    //note: the loop bodies in this listing were garbled in the original post
    //and have been restored; indices start at 1, as discussed in the replies
    cout << "Matrix A" << endl;
    for (int i = 1; i <= row; i++)
        for (int j = 1; j <= col; j++)
            cin >> arrayA[i][j];
    cout << endl;
}
void matrixB()
{
    //creates (reads in) the second matrix
    cout << "Matrix B" << endl;
    for (int i = 1; i <= row; i++)
        for (int j = 1; j <= col; j++)
            cin >> arrayB[i][j];
    cout << endl;
}
void multiply()
{
    //multiplies A by B and prints each entry of the product -- all in a column
    for (int i = 1; i <= row; i++)
        for (int j = 1; j <= col; j++)
        {
            sum = 0;
            for (int k = 1; k <= col; k++)
                sum += arrayA[i][k] * arrayB[k][j];
            cout << sum << " sum " << endl;
        }
}
1) Are you aware that arrays have zero based indexes, i.e. they start at index 0? The rows start at index 0, and the columns in every row start at index 0. You are skipping the entire first row
of a matrix as well as the first spot in every row, which is a waste of memory--although only a tiny bit is wasted.
2) You don't need three for-loops to multiply a couple of 2 x 2 matrices together--you should only need two for-loops. I assume you want to multiply each value in one matrix by the value in the
same spot in the other matrix. Let's take a look at what your three loops do:
arrayA[1,1] * arrayB[1,1] //ok
arrayA[1,2] * arrayB[2,1] //??
arrayA[1,1] * arrayB[1,2] //??
arrayA[1,2] * arrayB[2,2] //??
etc., etc.
Is that what you want?
3) To output an array to look like a matrix, in your case, use two for-loops. The outer loop will specify the row, and the inner loop will specify the column in the row. You can use cout in the
inner loop to output the value at that position, and follow it with a space: " ". Just after the closing bracket of the inner for loop, which is before the outer loop has a chance to increment
the row, use cout to start a newline: cout<<endl;
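For example, a minimal sketch of what point 3 describes, assuming the product has already been stored in an answer array of size row x col (and using zero-based indexes, as point 1 recommends):

for (int i = 0; i < row; i++)
{
    for (int j = 0; j < col; j++)
        cout << answer[i][j] << " ";   // print one row's values, separated by spaces
    cout << endl;                      // newline before the next row starts
}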
int c[100][100];
for(int i = 0; i < 100; i++)
for(int j = 0; j < 100; j++)
c[i][j] = a[i][j] * b[i][j];
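(A side note, in case it matters for the original question: the loop above multiplies the two arrays entry by entry -- the element-wise product -- not matrix multiplication. True matrix multiplication needs an inner sum over a shared index, roughly:

for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
    {
        c[i][j] = 0;
        for (int k = 0; k < n; k++)
            c[i][j] += a[i][k] * b[k][j];   // row i of a dotted with column j of b
    }

where a, b, c are n x n arrays.)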
I seem to have GCC 3.3.4.
But how do I start it?
I don't have a menu for it or anything.
| {"url":"http://cboard.cprogramming.com/cplusplus-programming/63361-multiplying-matrices.html","timestamp":"2014-04-16T18:37:53Z","content_type":null,"content_length":"49721","record_id":"<urn:uuid:462e5a23-5ca2-4bdc-94db-1946133e941f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Thermodynamics: Entropy of the Maxwell-Boltzmann distribution?
I know that the entropy of Bose-Einstein and Fermi-Dirac statistics as a function of the mean number of particles per state r ($n_r$) is:
$S = -k\sum_r \left[ n_r \ln n_r \pm (1 \mp n_r) \ln(1 \mp n_r) \right]$
where the upper sign is for FD and the lower sign for BE
What is this expression for Maxwell-Boltzmann statistics? | {"url":"http://www.physicsforums.com/showpost.php?p=4246946&postcount=1","timestamp":"2014-04-19T12:45:47Z","content_type":null,"content_length":"9134","record_id":"<urn:uuid:fae21a9f-3b9a-4280-929f-001089a87ec3>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
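For reference: in the classical limit $n_r \ll 1$, $(1 \mp n_r)\ln(1 \mp n_r) \approx \mp n_r$ to first order, so both expressions reduce to the same Maxwell-Boltzmann result, $S = k\sum_r (n_r - n_r\ln n_r)$.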
Sterling, VA Trigonometry Tutor
Find a Sterling, VA Trigonometry Tutor
...Algebra 2 is my favorite subject. This course allows you to truly explore your understanding of functions. Whether they are linear, quadratic, rational, polynomial or exponential.
24 Subjects: including trigonometry, reading, geometry, algebra 1
...My 14 years of professional experience span both education and research. As an educator, I currently teach calculus at Howard Community College and have been a tutor, grader, laboratory
assistant, and laboratory instructor. As a private tutor, I have accumulated over 750 hours assisting high school, undergraduate, and returning adult students.
9 Subjects: including trigonometry, calculus, physics, geometry
...This education included virtually enough for a Bachelor's in Mathematics. In addition I have approximately 40 years work experience in the aerospace field, most of it designing hardware, which
always involves significant math. I am currently retired but have always enjoyed working with younger engineers and have often acted as their mentors.
12 Subjects: including trigonometry, calculus, physics, geometry
...My interest in tutoring begins with a deep love for the subject matter, which means that for me there's no substitute for actually understanding it: getting the right answer isn't nearly as
important as being able to explain why it's right. As a tutor, my main job isn't to talk, but to listen: I...
18 Subjects: including trigonometry, writing, calculus, geometry
...I have earned a bachelor's degree in Biology and Chemistry from the University of Virginia. I had one year of general chemistry, one year of organic chemistry, and one year of physical
chemistry. I also took organic chemistry II from GMU when I tried to apply to professional school.
15 Subjects: including trigonometry, chemistry, calculus, physics | {"url":"http://www.purplemath.com/Sterling_VA_trigonometry_tutors.php","timestamp":"2014-04-18T21:58:52Z","content_type":null,"content_length":"24289","record_id":"<urn:uuid:ff02a897-e365-4d54-9e4a-a469ca1f67b4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00422-ip-10-147-4-33.ec2.internal.warc.gz"} |
Point A(–4, 2) is reflected over the line x = 3 to create the point A'. What are the coordinates of A'? A. (–4, –1) B. (–4, 8) C. (–1, 2) D. (10, 2)
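Reflecting over the vertical line x = 3 leaves the y-coordinate unchanged and sends x to 2(3) - x, so A(–4, 2) maps to A'(2(3) - (-4), 2) = (10, 2): choice D.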
| {"url":"http://openstudy.com/updates/512e2458e4b02acc415e8566","timestamp":"2014-04-19T19:45:13Z","content_type":null,"content_length":"46853","record_id":"<urn:uuid:da569cef-9944-4ea0-b334-7fa41599b49e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Easy Question...I can't do it
Icefire63111 says...
Ok, My homework is "How many different isosceles triangles are possible if the sides must have whole number lengths and the perimeter must be 93 inches?
I need All the different ways. please. I would do it myself, but for some reason, I can't.
Karsten says...
This took a little lateral thinking. I'm not an expert, so anyone out there is free to double-check my working.
Q. How many different isosceles triangles are possible if the sides must have whole number lengths and the perimeter must be 93 inches? (Note that this question doesn't ask you to state all the
possible triangles, as you said in your post, but how many triangles are possible.)
Perimeter of 93 inches = 2 equal sides + 1 unequal side.
Let's call the equal sides x and the unequal side y. So:
93 = 2x + y
Let’s imagine that y = 1.
93 = 2x + 1
92 = 2x
46 = x
Let’s imagine that x = 1.
93 = 2(1) + y
93 = 2 + y
91 = y
So the range of possible y values is from 1 to 91 - that's 91 values. And the range of possible x values is from 1 to 46 - that's 46 possible values. Uh-oh! Those two counts don't match, so not every y can give a whole-number x. More trial and error required.
Let’s imagine that y = 2.
93 = 2x + 2
91 = 2x
x = not a whole number, so this is not a possible answer.
As you can see, y can only be an odd number. This slashes half of the candidates: the possible y values are now the odd numbers from 1 to 91, that's 46 values, each giving a whole-number x. But one more check is needed before counting these as triangles: the two equal sides must satisfy the triangle inequality, x + x > y. Since 2x = 93 - y, that means 93 - y > y, so y < 46.5. The possible y values are therefore the odd numbers from 1 to 45, which is 23 values.
One final step is to exclude the values of y = 31, x = 31, because these would give us an equilateral, not isosceles, triangle.
So the final answer is 22 possible triangles.
(Edited to cover up my inability to count.) | {"url":"http://www.youngwriterssociety.com/viewtopic.php?f=63&t=52690","timestamp":"2014-04-20T13:21:46Z","content_type":null,"content_length":"17517","record_id":"<urn:uuid:f677b874-7d25-4f28-bb79-a5840b1821ab>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Let C* be the set of all nonzero complex numbers a+bi
A) Prove that C* is a group under multiplication
B) let H={a+bi ∈ C* | a^2 + b^2 = 1}. Prove that H is a subgroup of C*.
C) Prove that the set of nth roots of unity Un is a subgroup of H
D) let G be the group of all real 2x2 matrices of the form (a, b; -b, a), where not both a and b are 0, under matrix multiplication. Show that C* and G are isomorphic.
A) This is pretty obvious. What trouble are you having?
B) Also obvious. Note though that $a^2+b^2=|a+bi|^2$ and so $|(a+bi)(a'+b'i)|^2=|a+bi|^2|a'+b'i|^2=1^2\cdot 1^2=1$
C) It's easier to note that $\phi:U_n\to\mathbb{Z}_n$ given by $e^{\frac{2\pi i k}{n}}\mapsto k$ is an isomorphism.
D) What's the canonical homomorphism?
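For D, a natural candidate map is $\phi(a+bi)=\begin{pmatrix}a & b\\ -b & a\end{pmatrix}$. It is multiplicative, since $(a+bi)(c+di)=(ac-bd)+(ad+bc)i$ while $\begin{pmatrix}a & b\\ -b & a\end{pmatrix}\begin{pmatrix}c & d\\ -d & c\end{pmatrix}=\begin{pmatrix}ac-bd & ad+bc\\ -(ad+bc) & ac-bd\end{pmatrix}$, and it is clearly a bijection between C* and G, hence an isomorphism.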
| {"url":"http://mathhelpforum.com/advanced-algebra/134347-let-c-set-all-nonzero-complex-numbers-bi.html","timestamp":"2014-04-16T12:07:57Z","content_type":null,"content_length":"35653","record_id":"<urn:uuid:0d650cb0-821e-4225-aa5b-69bd9def266c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics PLEASE
Posted by Physics PLEASE on Friday, June 19, 2009 at 10:39am.
I'm trying to derive the formula
v^2 = v0^2 + 2a(x-x0)
where zeros are subscripts
my book tells me to derive it this way
use the definition of average velocity to derive a formula for x
use the formula for average velocity when constant acceleration is assumed to derive a formula for time
rearange the defintion of aceleration for a formula for t
then combine equations to get the derived formula for v^2
so here's my work; please show me where I went wrong
def of average velocity = t^-1 (x - x0)
(average velocity = t^-1(x-x0))t=(avearge velocity)t + x0= x - x0 + x0 = x = (average velocity)t + x0
x = (average velocity)t + x0
def of average velocity were costant acceleration is assumed = 2^-1(v0 + v)
plug into
x = (average velocity)t + x0
x = 2^-1(v0 + v)t + x0
def of acceleration = t^-1(v-v0)
(a=t^-1(v-v0))t=(at=(v-v0))a^-1 = t = a^-1(v-v0)
t = a^-1(v-v0)
plug into x = 2^-1(v0 + v)t + x0
x = 2^-1(v0 + v)a^-1(v-v0) + x0
solve for v^2
x = 2^-1(v0 + v)a^-1(v-v0) + x0
x = (a2)^-1(v^2 -v0^2)+ x0
(x = (a2)^-1(v^2 -v0^2)+ x0)2a
(2a)x = (v^2-v0^2) + x0
(2a)x - x0 = (v^2-v0^2) + x0 - x0
(2a)x - x0 + v0^2= (v^2 - v0^2) + v0^2
(2a)x - x0 + v0^2 = v^2
so here's what I got for my equation
v^2 = v0^2 +(2a)x - x0
here's what I was suppose to get
v^2 = v0^2 + 2a(x-x0)
please show me where I went wrong
thank you!
Show me step by step and say why, because I think you're supposed to subtract x0 from both sides. Why do you multiply???
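Here is the step where it goes wrong. From
x = (2a)^-1(v^2 - v0^2) + x0,
subtract x0 from both sides first:
x - x0 = (2a)^-1(v^2 - v0^2),
and only then multiply both sides by 2a:
2a(x - x0) = v^2 - v0^2, so v^2 = v0^2 + 2a(x - x0).
In your version, multiplying the whole equation by 2a must multiply every term, giving 2ax = (v^2 - v0^2) + 2ax0 rather than (2a)x = (v^2 - v0^2) + x0; the factor of 2a on x0 was dropped. Moving the 2ax0 term across and factoring gives the same 2a(x - x0) = v^2 - v0^2.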
| {"url":"http://www.jiskha.com/display.cgi?id=1245422387","timestamp":"2014-04-21T00:26:50Z","content_type":null,"content_length":"10206","record_id":"<urn:uuid:b23c5944-fc78-4964-b5d8-bd7222c81938>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum - Ask Dr. Math Archives: Middle School Word Problems
Browse Middle School Word Problems
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to common questions:
Mixture problems.
Every hour, on the hour, a train leaves Tallahassee for Jacksonville, while another train leaves Jacksonville for Tallahassee. The trip between the two cities takes exactly two hours. How many
trains going in the opposite direction will a Tallahassee train to Jacksonville meet?
If everyone in your class gave a Valentine to everyone else in your class, how many valentines would be exchanged?
A white pine was 16 m tall in 1962. It was 20 m tall in 1965. It is now 34 m tall. How much has it grown since 1962?
Five brothers, each born in a different year, share a gift of $100...
Timothy spent all of his money at five stores. At each store, he spent $1 more than half of the amount he had when entering the store. How much money did he have when he entered the first store?
Paul made $44.14 selling 27 items (beer and popcorn). If he made $1.22 selling popcorn and $2.62 selling beer, how many boxes of popcorn were sold?
Tonya must add to a mixture enough of a substance to make the whole weigh 2.5 grams - how much should she add?
A substance is 99% water. Some water evaporates, leaving a substance that is 98% water. How much of the water evaporated?
Julia is as old as John will be when Julia is twice as old as John was when Julia's age was half the sum of their present ages. John is as old as Julia was when John was half the age he will be
10 years from now. How old are John and Julia?
Aunt Isabella gives each niece and nephew $10 on his or her first birthday, and on each birthday thereafter the children get $20.00 more than on the birthday before. How old will a child be when
he or she receives a total of $4,000.00?
A word problem involving fractions.
In 1930, a correspondent proposed the following question: A man's age at death was one twenty-ninth of the year of his birth. How old was the man in 1900?
Korinth is twice as old as Marin was when Korinth was as old as Marin is now. Marin is 18.
How do I increase 3.7 by 1/10?
Two ferryboats ply back and forth across a river with constant but different speeds...
A sample of dimes and quarters totals $18.00. If there are 111 coins in all, how many are there of each coin?
When Alexis, Chelsea, and Kammi had lunch together, Alexis spent $1.60 for two small hamburgers, a drink and one order of fries. Chelsea's two orders of fries, two drinks and one small burger
cost $1.40 altogether. How much does Kammi owe for a small burger, one order of fries and one drink?
In 1984 the average American ate 55.7 lbs. of chicken. This was 2.8 lbs. more than the average in 1982. What was the percent of increase in chicken consumption?
Five members of a basketball team are weighed and an average weight is recalculated after each weighing. If the average increases 2 pounds each time, how much heavier is the last player than the
Sometimes, when doing inequalities problems, I have to add or subtract one from the answer I have calculated. I don't understand when to add, subtract, or do nothing at all.
Find the largest possible 2 integers such that the larger integer is more than 3 less than 3 times the smaller one.
Find all numbers such that 9 less than the product of the number and -4 is less than 7.
A population of beetles increases from 5 to 15 after one month. How many beetles will there be after 4 months if the population is increasing linearly? What if it's increasing exponentially?
In Hughmoar County, residents shall be allowed to build a straight road between two homes as long as the new road is not perpendicular to any existing county road...
Given functions in a word problem, an adult student doesn't know how to begin subtracting one from the other. Before clarifying her use of variables, Doctor Peterson suggests reading through the
problem strategically, to distill it down to just algebraic statements.
Jack climbed up the beanstalk at a uniform rate. At 2 P.M. he was one- sixth the way up and at 4 P.M. he was three fourths the way up...
A jeep can carry 200 liters of gasoline and can drive 2.5 km/l. You want to travel 1000 km...
Jim and Joe leave their homes at the same time and drive toward each other... how far apart were they when they started?
How many marbles did John have before he lost 42 green ones?
Jupiter revolves around the Sun about once every 12 earth years; Saturn about once every 30 earth years. In 1982, Jupiter and Saturn appeared very close to each other. When will they appear
together again?
Are there words or phrases like 'in all' that help suggest what operation to use to solve a word problem?
If x knights are sitting at a round table, and every other one is removed, who is the last one left sitting at the table?
If a hen and a half lays an egg and a half in a day and a half, how many and a half that lay better by half will lay half a score and a half in a week and a half?
How many men are needed to pump a ship dry in 2 hours?
If lemonade is made with a ratio of 3 cups water to 2 cups lemon juice, how many cups of lemon juice are in 10 gallons of lemonade?
One-fifth of some bees fly to the rosebush, one-third fly to the apple tree, and three times the difference fly to the acorn tree. One bee is left flying around. How many bees are there
A machine for processing pizzas puts salami on every 18th pizza...
After three tests, Amanda's average score is 88. What grade does she need on her next test to score a four-test average of 90?
Write a problem that could be solved by using the division sentence 1489/28=n; then write a pair of compatible numbers and estimate the quotient.
A man is jogging across a bridge at 8 mph. When he is 3/8 of the way across...
| {"url":"http://mathforum.org/library/drmath/sets/mid_word_problems.html?start_at=241&s_keyid=38891292&f_keyid=38891293&num_to_see=40","timestamp":"2014-04-17T15:59:49Z","content_type":null,"content_length":"24537","record_id":"<urn:uuid:e11462af-26e0-4dc4-b549-65cfd5750b10>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
proof of infinite primes
Could someone help me understand this proof?
It says, by assumption, $p=p_i$ for some $i=1,2,3,...,n$.
It also says $p_i | a$ and, by assumption, $p_i | a+1$. It makes perfect sense if the $p_i$ in the first statement and the $p_i$ in the second are different values of $i$. But the proof rules this out by saying that $p_i | (a+1)-a=1$ (the $p_i$ is the same in both statements). I don't see how this proves anything, however, because you could simply choose a different prime for a+1, instead of sticking with the same prime and forcing a contradiction. Also, I'm not comfortable with all the assumptions being made.
N.B: Exercise 3.2 states that $n|a \wedge n|b \Rightarrow n|a-b$
and Axiom 3 states that $n|1 \Rightarrow n=\pm 1$.
By construction every p_i divides a, as it is the product of all the primes.
Presumably Lemma 10.2 is that every number (greater than 1, say) is either a prime or divisible by a prime. Hence N has a prime divisor, which is among p_1..p_n, and N = a+1. So if p_i is the prime divisor of N, it divides both a and a+1.
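In symbols: with $a=p_1p_2\cdots p_n$ and $N=a+1$, if $p_i\mid a$ and $p_i\mid N$ then by Exercise 3.2 $p_i\mid N-a=1$, which contradicts Axiom 3 because $p_i>1$.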
Ah, I think it's coming together now...
So $N$ must be prime, right? Hmmm, then this is a bit like induction. Whenever you find an $n_k$, you have proof that $n_{k+1}$ exists.
3 is a prime
7 is a prime
3*7 +1 =22 is not a prime
isn't this counter to the proof?
No the assertion is not that $P=\left[1+\prod_{i=1}^n p_i\right]$ is prime but that it has a prime divisor not equal to any of the $p_i$'s. Now that may be $P$ itself but does not have to be.
You will note that $11|22$ and $11>3$ and $11>7$.
I see, thanks for the explanation
| {"url":"http://mathhelpforum.com/number-theory/19190-proof-inifinite-primes.html","timestamp":"2014-04-16T04:41:16Z","content_type":null,"content_length":"55995","record_id":"<urn:uuid:a88122dd-c301-402f-9476-3e383e4c8b5c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from December 2009 on The Lumber Room
Archive for December 2009
[If you haven't seen Groundhog Day, you may want to stop reading.]
A hilarious comment from the brilliant Everyone Is Jesus In Purgatory page (well, brilliant idea, anyway) on TVTropes (statutory warning: TVTropes will ruin your life):
Bill Murray’s Groundhog Day stands as Hollywood’s sole Buddhist message movie. As Phil (short for “philosopher”, obviously, a common name for the Buddha), Murray eventually realizes what takes
many lifetimes to understand; namely, that every cycle of birth-death-rebirth (every “day”) is always the same, over and over, depressing, painful, and bound by karma (i.e.- how you’ve treated
others in the past), until you awaken and make a conscious choice to change that destiny. It’s interesting that Phil takes the Tantric path, initially using the opportunity of being “reborn”
every morning to simply fulfill all desires, and therefore, to ultimately purge himself of them. Still, over who knows how many “days” — how many lifetimes of days — he eventually comes to see
the connectedness of all things, the sacredness of all life, and the joy to be found in knowledge, wisdom, and simply making a difference in the lives of others. By his own effort, and even
against his own initial nature, over many lifetimes he achieves Enlightenment, and is able “move on.” Plus, that scene where he lets the groundhog drive the truck is freakin’ hilarious…
But searching the internet further, the connection seems to have been made more than merely at TVTropes:
Paul E. Schindler’s notes from a screening of the film, sponsored by San Francisco Zen Center
A Buddhist Interpretation of Groundhog Day By Sanja Blackburn
The Groundhog Day Buddhism Sutra by Perry Garfinkel at the Huffington Post
Longer one in the Shambala Sun
And other religions too:
Groundhog Almighty by Alex Kuczynski in the NYT. (Also available here.)
[I was going to summarize them, but as of 2010-05-12, decided to just dump the draft I had.]
(Of course, the idea of the endless cycle/samsara/whatever is not a Buddhist idea but a pre-Buddhist Hindu one, but it occupies a more central role in Buddhism, and because of the greater popularity
of the ideas of Buddhism in the west, it comes to be associated with Buddhism.)
The Oxford University Press has been publishing a book series known as “Very Short Introductions”. These slim volumes are an excellent idea, and cover over 200 topics already. The volume Mathematics:
A Very Short Introduction is written by Timothy Gowers.
Gowers is one of the leading mathematicians today, and a winner of the Fields Medal (in 1998). In addition to his research work, he has also done an amazing amount of service to mathematics in other
ways. He edited the 1000-page Princeton Companion to Mathematics, getting the best experts to write, and writing many articles himself. He also started the Polymath project and the Tricki, the
“tricks wiki”. You can watch his talk on The Importance of Mathematics (with slides) (transcript), and read his illuminating mathematical discusssions, and his blog. His great article The Two
Cultures of Mathematics is on the “theory builders and problem solvers” theme, and is a paper every mathematician should read.
Needless to say, “Mathematics: A Very Short Introduction” is a very good read. Unlike many books aimed at non-mathematicians, Gowers is quite clear that he does “presuppose some interest on the part
of the reader rather than trying to drum it up myself. For this reason I have done without anecdotes, cartoons, exclamation marks, jokey chapter titles, or pictures of the Mandelbrot set. I have also
avoided topics such as chaos theory and Godel’s theorem, which have a hold on the public imagination out of proportion to their impact on current mathematical research”. What follows is a great book
that particularly excels at describing what it is that mathematicians do. Some parts of the book, being Gowers’s personal views on the philosophy of mathematics, might not work very well when
directed at laypersons, not because they require advanced knowledge, but assume a culture of mathematics. Doron Zeilberger thinks that this book “should be recommended reading to everyone and
required reading to mathematicians”.
Its last chapter, “Some frequently asked questions”, carries Gowers’s thoughts on some interesting questions. With whole-hearted apologies for inserting my own misleading “summaries” of the answers
in brackets, they are the following: “1.Is it true that mathematicians are past it by the time they are 30?” (no), “2. Why are there so few women mathematicians?” (puzzling and regrettable), “3. Do
mathematics and music go together?” (not really), “4. Why do so many people positively dislike mathematics?” (more on this below), “5. Do mathematicians use computers in their work?” (not yet), “6.
How is research in mathematics possible?” (if you have read this book you won’t ask), “7. Are famous mathematical problems ever solved by amateurs?” (not really), “8. Why do mathematicians refer to
some theorems and proofs as beautiful?” (already discussed. Also, “One difference is that [...] a mathematician is more anonymous than an artist. [...] it is, in the end, the mathematics itself that
delights us”.) As I said, you should read the book itself, not my summaries.
The interesting one is (4).
4. Why do so many people positively dislike mathematics?
One does not often hear people saying that they have never liked biology, or English literature. To be sure, not everybody is excited by these subjects, but those who are not tend to understand
perfectly well that others are. By contrast, mathematics, and subjects with a high mathematical content such as physics, seem to provoke not just indifference but actual antipathy. What is it
that causes many people to give mathematical subjects up as soon as they possibly can and remember them with dread for the rest of their lives?
Probably it is not so much mathematics itself that people find unappealing as the experience of mathematics lessons, and this is easier to understand. Because mathematics continually builds on
itself, it is important to keep up when learning it. For example, if you are not reasonably adept at multiplying two-digit numbers together,then you probably won’t have a good intuitive feel for
the distributive law (discussed in Chapter 2). Without this, you are unlikely to be comfortable with multiplying out the brackets in an expression such as $(x+2)(x+3)$, and then you will not be
able to understand quadratic equations properly. And if you do not understand quadratic equations, then you will not understand why the golden ratio is $\frac{1+\sqrt{5}}{2}$.
There are many chains of this kind, but there is more to keeping up with mathematics than just maintaining technical fluency. Every so often, a new idea is introduced which is very important and
markedly more sophisticated than those that have come before, and each one provides an opportunity to fall behind. An obvious example is the use of letters to stand for numbers, which many find
confusing but which is fundamental to all mathematics above a certain level. Other examples are negative numbers, complex numbers, trigonometry, raising to powers, logarithms, and the beginnings
of calculus. Those who are not ready to make the necessary conceptual leap when they meet one of these ideas will feel insecure about all the mathematics that builds on it. Gradually they will
get used to only half understanding what their mathematics teachers say, and after a few more missed leaps they will find that even half is an overestimate. Meanwhile, they will see others in
their class who are keeping up with no difficulty at all. It is no wonder that mathematics lessons become, for many people, something of an ordeal.
This seems to be exactly the right reason. No one would enjoy being put through drudgery that they were not competent at, and without the beauty at the end of the pursuit being apparent. (I hated my
drawing classes in school, too.) See also Lockhart’s Lament, another article that everyone — even, or especially, non-mathematicians — should read.
As noted earlier, Gowers has some things to say about the philosophy of mathematics. As is evident from his talk “Does mathematics need a philosophy?” (also typeset as essay 10 of 18 Unconventional
Essays on the Nature of Mathematics), he has rejected the Platonic philosophy (≈ mathematical truths exist, and we’re discovering them) in favour of a formalist one (≈ it’s all just manipulating
expressions and symbols, just stuff we do). The argument is interesting and convincing, but I find myself unwilling to change my attitude. Yuri Manin says in a recent interview that “I am an
emotional Platonist (not a rational one: there are no rational arguments in favor of Platonism)”, so it’s perhaps just as well.
Anyway, the anti-Platonist / formalist idea of Gowers is evident throughout the book, and of course it has its great side: “a mathematical object is what it does” is his slogan, and most of us can
agree that “one should learn to think abstractly, because by doing so many philosophical difficulties disappear” , etc. The only controversial suggestion, perhaps, follows the excerpt quoted above
(of “Why do so many people positively dislike mathematics?”):
Is this a necessary state of affairs? Are some people just doomed to dislike mathematics at school? Or might it be possible to teach the subject differently in such a way that far fewer people
are excluded from it? I am convinced that any child who is given one-to-one tuition in mathematics from an early age by a good and enthusiastic teacher will grow up liking it. This, of course,
does not immediately suggest a feasible educational policy, but it does at least indicate that there might be room for improvement in how mathematics is taught.
One recommendation follows from the ideas I have emphasized in this book. Above, I implicitly drew a contrast between being technically fluent and understanding difficult concepts, but it seems
that almost everybody who is good at one is good at the other. And indeed, if understanding a mathematical object is largely a question of learning the rules it obeys rather than grasping its
essence, then this is exactly what one would expect — the distinction between technical fluency and mathematical understanding is less clear-cut than one might imagine.
How should this observation influence classroom practice? I do not advocate any revolutionary change — mathematics has suffered from too many of them already — but a small change in emphasis
could pay dividends. For example, suppose that a pupil makes the common mistake of thinking that $x^{a+b} = x^a + x^b$. A teacher who has emphasized the intrinsic meaning of expressions such as $x^a$ will point out that $x^{a+b}$ means $a+b$ xs all multiplied together, which is clearly the same as a of them multiplied together multiplied by b of them multiplied together. Unfortunately, many children find this argument too complicated to take in, and anyhow it ceases to be valid if a and b are not positive integers.

Such children might benefit from a more abstract approach. As I pointed out in Chapter 2, everything one needs to know about powers can be deduced from a few very simple rules, of which the most important is $x^{a+b} = x^a x^b$. If this rule has been emphasized, then not only is the above mistake less likely in the first place, but it is also easier to correct: those who make the mistake can simply be told that they have forgotten to apply the right rule. Of course, it is important to be familiar with basic facts such as that $x^3$ means x times x times x, but these can be presented as consequences of the rules rather than as justifications for them.
I do not wish to suggest that one should try to explain to children what the abstract approach is, but merely that teachers should be aware of its implications. The main one is that it is quite
possible to learn to use mathematical concepts correctly without being able to say exactly what they mean. This might sound a bad idea, but the use is often easier to teach, and a deeper
understanding of the meaning, if there is any meaning over and above the use, often follows of its own accord.
Of course, there is an instinctive reason to immediately reject such a proposal — as the MAA review by Fernando Q. Gouvêa observes, ‘I suspect, however, that there is far too much “that’s the rule”
teaching, and far too little explaining of reasons in elementary mathematics teaching. Such a focus on rules can easily lead to students having to remember a huge list of unrelated rules. I fear
Gowers’ suggestion here may in fact be counterproductive.’ Nevertheless, the idea that technical fluency may precede and lead to mathematical understanding is worth pondering.
(Unfortunately, even though true, it may not actually help with teaching: in practice, drilling-in “mere” technical fluency can be as unsuccessful as imparting understanding.) | {"url":"http://shreevatsa.wordpress.com/2009/12/","timestamp":"2014-04-19T01:48:55Z","content_type":null,"content_length":"49993","record_id":"<urn:uuid:a87cb269-b3e7-457f-b44c-f152f6c0519f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00500-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex Numbers #2
The question says:
Find the Argument of z for each of the following in the interval [0, 2π]
A) Z=1-√3i
B) Z=-√7
C) Z=55
D) -2-2√3i
Please help. I don't understand the method for solving this; I got a worked example but I don't understand it. Can anyone please explain the process of solving?
The argument of a complex number $z=x+iy\,,\,\,x,y\in\mathbb{R}\,,\,x \neq 0$ is given by $\arg(z):=\arctan(y/x)$, choosing the angle depending on the signs of $x,y$.
Thus, for example, $\arg(1+i\sqrt{3})=\arctan\sqrt{3}=\frac{\pi}{3}$, whereas $\arg(-1-i\sqrt{3})=\arctan\sqrt{3}+\pi=\frac{4\pi}{3}$, since $-1-i\sqrt{3}$ lies in the third quadrant.
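Applied to the four parts, checking quadrants and reporting arguments in $[0,2\pi)$: for A, $z=1-\sqrt{3}i$ lies in the fourth quadrant, so $\arg z=2\pi-\frac{\pi}{3}=\frac{5\pi}{3}$; for B, $z=-\sqrt{7}$ lies on the negative real axis, so $\arg z=\pi$; for C, $z=55$ lies on the positive real axis, so $\arg z=0$; for D, $z=-2-2\sqrt{3}i$ lies in the third quadrant with $y/x=\sqrt{3}$, so $\arg z=\pi+\frac{\pi}{3}=\frac{4\pi}{3}$.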
| {"url":"http://mathhelpforum.com/algebra/164929-complex-numbers-2-a.html","timestamp":"2014-04-18T14:57:14Z","content_type":null,"content_length":"34142","record_id":"<urn:uuid:a35fbbf4-6d34-4d10-8d30-604dce9156fb>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
T-Space Collection (http://hdl.handle.net/1807/25335, updated 2014-04-19T22:09:56Z): High Frequency Trading in a Regime-switching Model; Convergence Results for Rearrangements: Old and New; On the Plane Fixed Point Problem; On Moments of Class Numbers of Real Quadratic Fields

http://hdl.handle.net/1807/25636 (2011-01-01T15:28:35Z)
Title: High Frequency Trading in a Regime-switching Model
Authors: Jeon, Yoontae
Abstract: One of the most famous problems of finding an optimal weight to maximize an agent's expected terminal utility in the finance literature is Merton's optimal portfolio problem. The classic solution to this problem is given by the stochastic Hamilton-Jacobi-Bellman equation, which we briefly review in chapter 1. A similar idea has found many applications elsewhere in finance, and we will focus on its application to high-frequency trading using limit orders in this thesis. In [1], a major analysis using the constant volatility arithmetic Brownian motion stock price model with an exponential utility function is described. We re-analyze the solution of the HJB equation in this case using a different asymptotic expansion. We then extend the model to a regime-switching volatility model to capture the status of the market more accurately.

http://hdl.handle.net/1807/25582 (2010-12-31T20:34:53Z)
Title: Convergence Results for Rearrangements: Old and New
Authors: Fortier, Marc
Abstract: The purpose of this thesis is twofold. On the one hand, it aims to give a thorough review and exposition of current best results regarding approximating the symmetric decreasing rearrangement by polarizations and Steiner symmetrizations. These results include those of Van Schaftingen on explicit universal approximation to the symmetric decreasing rearrangement by sequences of polarizations, as well as his results on almost sure convergence of rearrangements to the symmetric decreasing rearrangement. They also include those of Klartag and Milman, which yield rates of convergence for Steiner symmetrizations of convex bodies. On the other hand, new results are proven. We extend Van Schaftingen's results on almost sure convergence of polarizations and Steiner symmetrizations by showing that the conditions on the random variables can be weakened without affecting almost sure convergence to the symmetric decreasing rearrangement. Lastly, we derive rates of convergence for polarizations and Steiner symmetrizations of Hölder continuous functions.

http://hdl.handle.net/1807/25445 (2010-12-15T16:34:54Z)
Title: On the Plane Fixed Point Problem
Authors: Chambers, Gregory
Abstract: Several conjectured and proven generalizations of the Brouwer Fixed Point Theorem are examined, the plane fixed point problem in particular. The difficulties in proving this important conjecture are discussed. It is shown that it is true when strong additional assumptions are made. Canonical examples are produced which demonstrate the differences between this result and other generalized fixed point theorems.

http://hdl.handle.net/1807/24553 (2010-07-22T19:09:26Z)
Title: On Moments of Class Numbers of Real Quadratic Fields
Authors: Dahl, Alexander Oswald
Abstract: Class numbers of algebraic number fields are central invariants. Once the underlying field has an infinite unit group they behave very irregularly due to a non-trivial regulator. This phenomenon occurs already in the simplest case of real quadratic number fields, of which very little is known. Hooley derived a conjectural formula for the average of class numbers of real quadratic fields. In this thesis we extend his methods to obtain conjectural formulae and bounds for any moment, i.e., the average of an arbitrary real power of class numbers. Our formulae and bounds are based on similar (quite reasonable) assumptions to those of Hooley's work. In the final chapter we consider the case of the -1 power from a numerical point of view and develop an efficient algorithm to compute the average for the -1 class number power without computing class numbers. | {"url":"https://tspace.library.utoronto.ca/feed/rss_1.0/1807/25335","timestamp":"2014-04-19T22:09:56Z","content_type":null,"content_length":"5697","record_id":"<urn:uuid:d772004a-31d6-4601-9fe8-cdf54a211bb1>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Math.pow() Method
This function returns the value of its first parameter raised to the power of its second parameter.
Math.pow(x, y)
│ Parameter │ Description │
│ x │ The number to be raised to a power │
│ y │ The power to which to raise x │
The value of x to the power of y.
This function returns the value of x raised to the power of y.
This example uses the Math.pow() function to determine which number is larger: 99^100 (99 to the 100th power) or 100^99 (100 to the 99th power). Note that if you attempt to use the Math.pow() method
with numbers as large as those used in the example in Math.log() Method, the result returned is Infinity.
function Test_Click ()
{
   var a = Math.pow(99, 100);
   var b = Math.pow(100, 99);
   if ( a > b )
      TheApplication().RaiseErrorText("99^100 is greater than 100^99.");
   else
      TheApplication().RaiseErrorText("100^99 is greater than 99^100.");
}
See Also
Math.exp() Method
Math.log() Method
Math.sqrt() Method | {"url":"http://docs.oracle.com/cd/E05553_01/books/eScript/eScript_JSReference225.html","timestamp":"2014-04-17T08:44:58Z","content_type":null,"content_length":"6361","record_id":"<urn:uuid:49c19e0c-f617-43ea-a101-2ae2fdf887c0>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00337-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Patterns in African American Hairstyles
MATHEMATICAL PATTERNS
IN AFRICAN AMERICAN HAIRSTYLES
by GLORIA GILMER
MATH-TECH, MILWAUKEE
The discipline of mathematics includes the study of patterns. Patterns can be found everywhere in nature. See Figure 1 with two bees in a beehive. Often these patterns are copied and adapted by
humans to enhance their world. See the pineapple in Figure 2a and the adapted hairstyle in Figure 2b. Ethnomathematics is the study of such mathematical ideas involved in the cultural practices of a
people. Its richness is in exploring both the mathematical and educational potential of these same practices. The idea is to provide quicker and better access to the scientific knowledge of humanity
as a whole by using related knowledge inherent in the culture of pupils and teachers.
[Figure 1: Two bees in a beehive. Figure 2a: Pineapple tessellating hexagons. Figure 2b: Girl with adapted hairstyle.]
Going into a community, examining its languages and values, as well as its experience with mathematical ideas is a first and necessary step in understanding ethnomathematics. In some cases, these
ideas are embedded in products developed in the community. Examples of this phenomenon are geometrical designs and patterns commonly used in hair braiding and weaving in African-American communities.
For me, the excitement is in the endless range of scalp designs formed by parting the hair lengthwise, crosswise, or into curves.
The main objective of my work with black hair is to uncover the ethnomathematics of some hair braiders and at the same time answer the complex research question: "What can the hair braiding
enterprise contribute to mathematics education and conversely what can mathematics education contribute to the hair braiding enterprise?" It is clear to me that this single practical activity, can by
its nature, generate more mathematics than the application of a theory to a particular case.
My collaborators include Stephanie Desgrottes, a fourteen year old student of Haitian descent, at Half Hollow Hills East School in Dix Hills, New York and Mary Porter, a teacher in the Milwaukee
Public Schools. We have each observed and interviewed hair stylists at work in their salons along with their customers. Today's workshop for middle school teachers will focus on the mathematical
concept of tessellations, which is widely used and understood by hair braiders and weavers but not thought of by them as being related to mathematics.
A tessellation is a filling up of a two-dimensional space by congruent copies of a figure that do not overlap. The figure is called the fundamental shape for the tessellation. In Figure 1, the fundamental shape is a regular hexagon. Recall that a regular polygon is a convex polygon whose sides all have the same length and whose angles all have the same measure. A regular hexagon is a regular polygon with six sides. Only two other regular polygons tessellate. They are the square and the equilateral triangle. See Figures 3a and 3b for parts of tessellations using squares and triangles. In each figure, the fundamental shape is shaded. To our surprise, two types of braids found to be very common in the salons we visited were triangular braids and box braids, which describe these tessellations on the scalp!
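A quick way to see why only these three regular polygons tessellate: the interior angle of a regular polygon with n sides measures (n - 2)180/n degrees, and the copies meeting at each point must account for exactly 360 degrees. The equilateral triangle (60 degrees), square (90 degrees), and regular hexagon (120 degrees) each divide 360 evenly, while, for example, the regular pentagon's 108 degrees does not.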
Box Braids.
In the tessellations we saw, the boxes were shaped like rectangles and the pattern resembled a brick wall, starting with two boxes at the nape of the neck and increasing by one box at each successive level away from the neck. The hair inside the box was drawn to the point of intersection of the diagonals of the box. Braids were then placed at this point. You may notice in Figure 3a that braids so placed will hide the scalp at the previous level in the tessellation. In this style, the scalp is completely hidden. In addition, we were told that braids so placed are unlikely to move much when the
head is tossed.
Triangular Braids.
In the tessellations we saw, the triangles were shaped like equilateral triangles and the pattern resembled the one shown in Figure 3b. The hair inside the triangle was drawn to the point of
intersection of the bisectors of the angles of the triangle. Again, this style allowed hair to move less liberally than hair drawn to a vertex and then braided.
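Incidentally, the point where the angle bisectors of a triangle meet is its incenter, the unique point equidistant from all three sides, which may help explain why it serves as a balanced gathering point for the hair in each triangular patch.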
Tessellations can be formed by combining translation, rotation, and reflection images of the fundamental shape. Variations of these regular polygons can also tessellate. This can be done by modifying one side of a regular fundamental shape and then modifying the opposite side in the same way. See Figure 4.
1. Draw tessellations using different fundamental shapes of squares and rectangles.
2. Draw a tessellation using an octagon and square connected along a side as the fundamental shape.
3. Draw tessellations with modified squares or triangles.
4. Have a hairstyle show featuring different tessellations.
Dr. Gloria Gilmer is a mathematician, mathematics educator, consultant and the founding president of the International Study Group on Ethnomathematics. She also chairs the Commission on Education of the National Council of Negro Women, who for the last decade have established standards and awarded, with Shell Oil support, more than seventy teachers who have distinguished themselves by developing outstanding African American students. This paper was prepared for presentation at the 77th annual meeting of the National Council of Teachers of Mathematics in San Francisco, California, USA.
Dr. Gloria Gilmer Math-Tech Milwaukee
9155 North 70th Street
Milwaukee, WI 53223-2115
Phone: 414-355-5191
Fax: 414- 355- 9175
E-mail: ggilme@aol.com
| {"url":"http://www.math.buffalo.edu/mad/special/gilmer-gloria_HAIRSTYLES.html","timestamp":"2014-04-18T23:15:39Z","content_type":null,"content_length":"12706","record_id":"<urn:uuid:ba0a4ab0-a0fa-4347-9162-63be0b8e674c>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Indeterminate Forms and L'Hospital's Rule
1. lim x->1 ( x/(x-1) - 1/ln x )
This is what I got so far...
=> lim x->1 (x-1)/(x-1 ln x)
=> lim x->1 d/dx (x-1)/d/dx (x-1 ln x)
=> lim x->1 1/x ln x .....I got stuck here
I check the book and the answer is 1/2 but how?
2. lim x->infinity ( sqrt(x^2+x) - x )
This is what I got so far...
=> lim x->infinity d/dx (x^2+x^1/2 -x)
=> lim x->infinity d/dx (-x)/d/dx (x^2+x^-1/2) ...I got stuck here
The answer from the book is 1/2 but how?
I'm assuming the top line is $\frac{x}{x-1}-\frac{1}{\ln(x)}$
Then you made a mistake combining these two fractions. The limit at x=1 has the form $\infty - \infty$, which is indeterminate, so you do need to combine the fractions into a single quotient (giving a 0/0 form) before L'Hospital's rule can be applied.
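For reference, one standard route for each: for 1, combine into a single quotient first, $\frac{x}{x-1}-\frac{1}{\ln x}=\frac{x\ln x-(x-1)}{(x-1)\ln x}$, which is 0/0 at x = 1; one application of L'Hospital's rule gives $\frac{\ln x}{\ln x+(x-1)/x}$, still 0/0, and a second application gives $\frac{1/x}{1/x+1/x^2}\to\frac{1}{2}$. For 2, L'Hospital's rule is not needed at all: multiplying by the conjugate gives $\sqrt{x^2+x}-x=\frac{x}{\sqrt{x^2+x}+x}=\frac{1}{\sqrt{1+1/x}+1}\to\frac{1}{2}$.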
| {"url":"http://mathhelpforum.com/calculus/57408-indeterminate-forms-l-hospital-s-rule.html","timestamp":"2014-04-19T18:07:29Z","content_type":null,"content_length":"33831","record_id":"<urn:uuid:9a298333-44ea-4167-9929-d3a5e801a656>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
North Easton Calculus Tutor
Find a North Easton Calculus Tutor
...I tremendously enjoy sharing my love of math with others and consistently seek to impart an integrated understanding of the material rather than simply helping students memorize formulas and
procedures. As a tutor I am patient and supportive and delight in seeing my students succeed. If it sounds to you like I can be helpful I hope you will consider giving me a chance.
14 Subjects: including calculus, geometry, GRE, algebra 1
...I have formal education in Differential Equations at both undergraduate and graduate levels. The courses I've taught and tutored required differential equations, so I have experience working
with them in a teaching context. In addition to undergraduate level linear algebra, I studied linear algebra extensively in the context of quantum mechanics in graduate school.
16 Subjects: including calculus, physics, geometry, biology
I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor
elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math. I am a chemistry major at Boston College.
13 Subjects: including calculus, chemistry, geometry, biology
...Doing so helps them develop the skills they need to solve problems and be successful long after our tutoring times are over. I've taught students in a wide variety of situations, everything
from helping middle school students prepare for competition in Science Olympiad to teaching interns the sa...
7 Subjects: including calculus, chemistry, algebra 1, algebra 2
...I have tutored the SAT many times, and the ACT is very similar.
29 Subjects: including calculus, reading, geometry, GED | {"url":"http://www.purplemath.com/north_easton_ma_calculus_tutors.php","timestamp":"2014-04-18T11:24:33Z","content_type":null,"content_length":"24199","record_id":"<urn:uuid:80e722a3-09ff-4245-8d42-267ad0f1435e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
US mortgage
A key issue for the $700 billion bail out plan now being finalised is the pricing of the ‘toxic assets’ the US Treasury should buy. The main target of the Paulson plan is the market for securities
based on low quality mortgages (sub prime and ‘Alt A’ mortgages). This subclass of the general universe of RMBS (residential mortgage-based securities) has become illiquid. How should these
securities be priced? In the few market transactions still taking place their value has often been less than 50 cents to the dollar of face value. But it is difficult to establish a reliable market
price. Are there any other ways to assess their value?
This column discusses a simple way of thinking about the valuation of mortgages and the establishment of fair prices for these securities. Preliminary calculations suggest that the value of securities based on lower quality mortgages might indeed be very low.
How to estimate the value of residential mortgage-based securities
The starting point is a key feature of the US mortgage market, namely that most loans are de facto or de jure ‘no recourse’. This means that the debtor cannot be held personally liable for the
mortgage even if, after a foreclosure, the bank receives only a fraction of the total mortgage outstanding from the sale of the house.
With a ‘no recourse’ mortgage, the debtor effectively receives a virtual put option to ‘sell’ to the mortgage-issuer the house at the amount of the loan still outstanding. Mortgage lenders are
‘short’ this option, but this is not recognised in the balance sheets. In most cases, the balance sheets of the banks report mortgages at face value – at least for all those mortgages on which
payments are still ongoing.
All RMBS, especially all securities based on low quality mortgages, should also take this put option into account in their pricing. It appears that this had not been done when these securities were issued. In particular, it appears that the ratings agencies neglected this point completely when evaluating the complex products built on bundles of mortgages. A key input in banks' balance sheets and in the pricing of RMBS should thus have been a valuation of the put option given to US households.
Given certain basic data, it is actually fairly straightforward to calculate the value of the put option in a standard ‘no recourse’ mortgage.
The following calculations are for a mortgage of $100, which carries an implicit put whose strike price is the $100 of loan still outstanding (the amount for which the owner can 'sell' the house to the bank), while the initial value of the underlying house is fixed by the loan-to-value ratio (LTV): the lower the LTV, the more the house is worth relative to the loan. Most of the key inputs needed for pricing this option are in fact relatively straightforward. In the following it is assumed that mortgages run for 10 years, that the riskless interest rate is 2%, and that the interest rate on mortgages is 6%.
It is more difficult, however, to put a number on a key input in the value of any option, namely the (expected) volatility in the price of the underlying asset. Recent data might be misleading, since
prices had been steadily increasing until 2006, but then started to decline precipitously. Over a longer horizon the standard deviation of the Case Shiller index has been around 5% per year, but over
the last few years the volatility has greatly increased. The figure is about 10%, if one looks only at the years since the start of the bubble (2002/3). The following will concentrate on the low
volatility case (5% standard deviation). It turns out, however, that this parameter is not as significant as one might first think. Under the high volatility case (10 % standard deviation) the losses
would be under most circumstances only moderately higher.
Applying the usual Black-Scholes formula to a typical subprime loan with an LTV ratio of 100% yields the result that the value of the put option embedded in the ‘no recourse’ feature is 26.8% of the
loan, even in the low volatility case. For a conforming loan (a loan that could be insured by Fannie or Freddie) with a loan to value ratio of 80%, the value of the put option would still be close to
14% (still in the low volatility case). This implies that all sub prime loans (and other mortgages with a high LTV) were worth much less than their face value from the beginning. It is evident that
the risk of a mortgage going into negative equity territory diminishes sharply with the loan to value ratio. For example, with an LTV of 60% the put option is worth only 2.8%.
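The 'low volatility' figures above can be approximately reproduced with the textbook Black-Scholes put formula, reading the parameter table at the end of this column as follows: the underlying is the house (value normalised to 100), the strike is the loan outstanding (the LTV per 100 of house value), and the 6% mortgage rate enters as a continuous yield on the underlying. That reading is inferred from the numbers rather than stated by the author, so the following Python sketch is illustrative:

import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_put(S, K, T, r, q, sigma):
    # Black-Scholes European put with a continuous yield q on the underlying.
    d1 = (math.log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * math.exp(-q * T) * norm_cdf(-d1)

# Weights over LTV buckets are taken from the table at the end of this column.
weights = {100: 0.1, 90: 0.1, 80: 0.2, 70: 0.2, 60: 0.2, 50: 0.2}
average = 0.0
for ltv, w in weights.items():
    put = 100 * bs_put(S=100, K=ltv, T=10, r=0.02, q=0.06, sigma=0.05) / ltv  # cents per dollar of loan
    average += w * put
    print(f"LTV {ltv:3d}%: {put:5.1f} cents per dollar of loan")
print(f"Weighted average: {average:.1f} cents per dollar")

Running this gives roughly 27, 21, 14, 7, 2 and 0.2 cents per dollar of loan for LTVs from 100% down to 50%, and a weighted average near 9.4, all within rounding of the figures quoted in this column and tabulated at the end. Multiplying that average by the $3.6 thousand billion of mortgages still on bank balance sheets gives about $340 billion, the low end of the capital estimate below. The '+5% p.a.' and '-3% p.a.' columns of the table shift the expected house price path and are not reproduced by this sketch.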
In reality it is not the case that all mortgages with negative equity (where the present value of mortgage payments is higher than the value of the home) go immediately into default, since a default on a
mortgage (and a subsequent foreclosure) still has a cost to the household in terms of a poor credit record, some legal costs, etc. This fact could be taken into account by just adjusting the strike
price by the implicit cost of a worse credit history, etc, maybe by around 10%. However, a foreclosure usually leads to rather substantial costs for the bank, which can be a multiple of the amount of
negative equity that is calculated by using standard house prices indices. A sheriff sale often fetches a much lower price than a normal sale in which the time pressure is not that great. The loss to
the mortgage lender is often far in excess of 10% of the value of the home. These two effects thus tend to offset each other and the second might even be larger. It is thus likely that the value of
the option as calculated here does appropriately reflect the risk for banks, and might even constitute a slight underestimate.
Given the high value of the put option on mortgages with high LTV ratios (i.e. especially subprime), it is not surprising that the value of the securities built on these mortgages should be rather low. The first loss tranches (e.g. the first 10% of losses) are obviously worthless when the put option is already worth close to 27%. Taking this put option feature properly into account shows why all except the 'super senior' tranches of an RMBS based on sub prime mortgages can easily fall in value below 50 cents to the dollar.
How much are the assets still on the banks’ balance sheets worth?
Another implication of the approach proposed here concerns the ‘fair value’ accounting of the $3.6 thousand billion of mortgages still on the balance sheets of US banks. The value of the put option
granted to US mortgage debtors should reflect approximately the amount of capital the US banking system would need in order to cover itself against further fluctuations in house prices.
Little is known about the quality of the mortgages that are still on the balance sheets of US banks. It must be assumed that most of them are not conforming to the standards (limits on LTV,
documentation, size, etc.) set by the (now) state-owned mortgage financing institutions Fannie Mae and Freddie Mac, since banks could make substantial savings on regulatory capital by re-financing
conforming loans. It is thus likely that the mortgages still on the balance sheets of US banks are either jumbo loans (Fannie and Freddie refinance only mortgages of up to around $400 thousand) or
lower quality ones. Assuming a realistic distribution of loan to value ratios, the average value of the put option embedded in all mortgages would be around 9.5% in the low volatility case and 12.7%
in the high volatility case (10% standard deviation for house prices). Given that the overall stock of mortgages still outstanding on the balance sheets of commercial banks is around $3.6 thousand
billion, this implies that the US banking system would need between $340 and $460 billion just to cover itself against the variability in house prices. Under ‘fair value’ accounting, this is the
amount of losses US banks would have to book today if they recognised the put option as being implicit in the 'no recourse' mortgages on their books.
The total stock of mortgages outstanding in the US is about $10 thousand billion. However, the market value of these mortgages (whether still on banks’ balance sheets or securitised and embedded in
RMBS) is in reality lower by $1-1.2 thousand billion, if one takes into account the value of the put option granted to US households.
Why was the value of this option not recognised earlier? One simple reason might be that as long as the housing bubble lasted it was generally assumed that house prices could only go up, as they had
over the 1990s. The average annual increase in house prices had been about 5% in the 15 years to 2006. If that number is projected into the future the value of a put option even on a sub prime
mortgage with an LTV of 100% would have been below 5%, as compared to the 26.8% mentioned above, if one uses the standard assumption that the price of the underlying (house prices) follows a random
walk without drift. Viewed from the perspective of ever-increasing house prices, the risk of negative equity seemed minor. Expectations about house prices have now changed completely, however. If one
were to assume that house prices will decline by 3% annually over the next decade, the value of the put option would be even higher than calculated so far. For a sub prime mortgage with an LTV of
100% the value of the put option would be over 40% of the mortgage, and even for a conforming loan (80% LTV) the put option would still be worth 30 cents to the dollar. The value of the put options
on which the US banking system is short would then be above $900 billion, and the total losses on all US mortgages could amount to over $2 thousand billion.
If expectations of future house price declines are now appropriate, the value of all the securities built on sub prime mortgages might be close to zero. It remains to be seen what pricing, and thus
what underlying hypothesis is going to be used for the $700 billion rescue plan.
Editors’ note: this first appeared as CEPS Commentary, 23 September 2008 and CEPS retains the copyright.
Details for the calculation of the value of the put option embedded in the 'no recourse' feature of US mortgages:
│ Underlying Price        │ Loan to value ratio                          │
│ Exercise Price          │ 100                                          │
│ Time until expiration   │ 10 years (mortgages tend to be long term)    │
│ Risk-free interest rate │ 2%                                           │
│ Yield                   │ 6%                                           │
│ Volatility              │ 5% (low volatility case) and 10% (high case) │
Distribution of mortgages by loan-to-value ratio, and value of the put option (in cents on the dollar) under different expected future house price changes (low volatility case):
│ LTV     │ Weight │ +5% p.a. │ Zero │ -3% p.a. │
│ 100     │ 0.1    │ 4.7      │ 26.8 │ 43.4     │
│ 90      │ 0.1    │ 1.8      │ 20.8 │ 38.9     │
│ 80      │ 0.2    │ 0.4      │ 13.8 │ 33.4     │
│ 70      │ 0.2    │ 0.1      │ 6.7  │ 26.8     │
│ 60      │ 0.2    │ 0        │ 1.8  │ 17.7     │
│ 50      │ 0.2    │ 0        │ 0.2  │ 7.6      │
│ Average │        │ 0.8      │ 9.3  │ 25.3     │
Source: Own calculations based on options calculator from www.option-price.com. | {"url":"http://www.voxeu.org/article/valuing-us-mortgages-assets-help-logic-and-black-scholes","timestamp":"2014-04-20T13:24:53Z","content_type":null,"content_length":"34752","record_id":"<urn:uuid:e1d72f49-6c0b-4792-928c-6ac95a9ed850>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
Carolina Context :: State Publications Collection
Ferrel Guillory
Director
guillory@unc.edu
Thad Beyle
Associate Director
beyle@email.unc.edu
Hodding Carter, III
Leadership Fellow
hoddingcarter@unc.edu
Kendra Davenport Cotton
Assistant Director for Programs
kendradc@unc.edu
Andrew Holton
Assistant Director for Research
holton@unc.edu
D. Leroy Towns
Research Fellow
dltowns@email.unc.edu
The Program on Public Life is a non-partisan organization devoted to serving the people of North Carolina and the South by informing the public agenda and nurturing leadership.
To receive an electronic version or to subscribe to the printed version, send your name and email address to southnow@unc.edu.
The Program on Public Life is part of the Center for the Study of the American South at the University
of North Carolina at Chapel Hill.
Carolina Context was printed with the use of state funds. 1,000 copies of this public document were printed at a cost of $1,283, or $1.28 a copy.
www.southnow.org, March 2007, Number 3
The Program on Public Life
Director's Note
Over the past year, the Program on Public Life and the UNC- CH School of Education have hosted an ongoing seminar on school improvement.
Math and science preparedness was the topic of one session, and several participants raised concerns about North Carolina’s supply of math and science teachers. Even more worrisome
to some participants was the lack of accessible
information about the state’s capacity to train math and science teachers and about where trained teachers go after graduation.
In response to these concerns, the Program on Public Life compiled a data-based study of both the supply and demand sides of math and science teacher preparation and placement. We enlisted the help of Trip Stallings, who is pursuing his Ph.D. in Education at UNC-CH. In addition to teaching middle and high school for seven years, Trip spent four years serving as Duke University's teacher licensure coordinator and service-learning facilitator. He has a B.A. in English and an M.P.P. from Duke University.
Data for the full report, which can be found at www.southnow.org, were compiled from three primary sources: the North Carolina Department
of Public Instruction’s Licensure and Payroll Databases and survey responses from every university or college in the state with at least one secondary math or science licensure program. With the exception of some projections
about teacher turnover in math and science,
none of the results in this report are the product of complex statistical analyses. More often than not, they are simply head-counts, from which tentative conclusions have been drawn. Therefore, we report provisional findings
and not definitive assertions; in many cases, our findings raise more questions than answers and may be important starting points for further investigation.
— Ferrel Guillory
Director, Program on Public Life
• Almost half of all active licensed math and science teachers in North Carolina middle and high schools were either trained out-of-state or in an alternate license program (e.g., lateral entry).
• Most math and science teachers are hired to replace teachers who leave, not to meet the demand of increasing student enrollment.
• Many school districts employ a high proportion of math and science teachers who are in the early stages of their careers.
• School systems of similar size sometimes employ widely different numbers of licensed math and science teachers.
There are four main factors to consider when analyzing the math and science teacher pipeline:
teacher production, retention, quality, and presence.
The first component of the full study is an estimation of the production of math and science
teachers at all of the North Carolina- based institutions of higher education that house middle and high school math and/ or science licensure programs. Rather than simply examining
the number of candidates who pursued state licensure, the full study also estimates the number of candidates who met the requirements
for licensure, regardless of whether they chose to apply for a North Carolina license. By doing so, we can draw conclusions about the potential size of the math and science corps and estimate how many potential licensees either chose not to pursue licensure in- state or chose not to enter the profession at all.
The next part of this study attempts to estimate
patterns in math and science teacher retention
across the state. The section examines the relationship between years of experience and decisions to leave the math or science classroom,
as well as estimates of teacher positions created due to student population growth versus
positions created due to turnover.
The third component is an examination of the relative quality of the state’s math and science teachers, statewide and per school district. There is disconcerting evidence that without high- quality math and science instruction, particularly
at the secondary level, many young people will choose not to pursue careers in science- and math-related fields. In 2003-2004, around 8% of all North Carolina secondary math and science teachers taught without full credentials, well above the national average of 3.6%. The numbers
of such teachers may be higher than that
if we were to include credentialed teachers who are teaching out of field. For example, across the nation in 2002, 45% of biology and life science high school students and 30% of high school math students were taught by teachers who did not hold a degree in the field being taught.
The study’s final element considers differences
in the presence of licensed math and science
teachers across districts. In addition, this section models the degree to which the largest
The Status of North Carolina’s Math/ Science Teacher Pipeline
licensure programs serve each area of the state by estimating where their graduates currently hold teaching positions. Teachers, like many other professionals, tend to migrate toward certain geographic locations (predominantly suburban districts) and away from others (such as inner-city and rural districts and high-needs schools), and they do so for a variety of reasons. They also tend toward certain licensure areas, such as elementary education, and away from other high-need licensure areas, such as mathematics and science. The end result is an imbalance in teacher distribution across disciplines and across regions that is often masked by aggregated state licensure numbers.
Map 1. Percentage of North Carolina middle and high school math and science teachers licensed in 'traditional' in-state programs. Legend: less than 45% licensed in-state; 45-59%; 60-74%; 75% or greater; city school districts. Note: The statewide proportion of middle and high school math and science teachers trained in traditional in-state programs is 55.8%; all other licensed teachers either trained out-of-state or via in-state alternate licensure programs. Source: N.C. Department of Public Instruction.
Note: In some sections of this report, we consider two subsets of teachers: those trained at in-state, traditional licensure programs (programs that train teachers before they take teaching jobs), and those who either earned licensure out-of-state or through in-state alternate licensure programs (e.g., lateral entry).
Highlights from the full report:
• Only about one half of all active licensed math and science teachers were trained in 'traditional' in-state programs. Traditional licensure programs at North Carolina colleges and universities, public and private combined, have provided around 56% (about 7,400 out of 13,200) of the state's licensed math and science classroom teachers; at least 44% of the state's math and science teachers either received all of their training out-of-state or entered the teaching profession through an alternative licensure program (34%), or are currently enrolled in a lateral entry program (10%), indicating a far greater need for math and science teachers than is currently being met by traditional in-state teacher education programs.
Of those trained in traditional state programs, a large majority also earned degrees from public universities. About 8,600 school employees (teachers and administrators) statewide with traditional math or science licensure earned at least one degree in state. At least 7,500 of their degrees came from public universities; just under 2,000 of their degrees came from private colleges.
• In-state colleges produce enough teachers to meet demand due to student population growth, but not due to teacher turnover. Most math and science teachers are hired to replace teachers who leave the profession, the classroom, or the state, not to meet the demand of increasing student enrollment. In 2005, just over 500 math and science licensure candidates completed in-state licensure programs. Our estimate for the demand for N.C. math and science teachers is around 1,200 new teachers a year. Therefore, the state's colleges and universities are not producing as many teachers as are needed to meet the annual demand. It is inaccurate to conclude, however, that the only reason for the shortage is that the state is not producing 'enough' teachers. A high teacher turnover rate and decisions to teach in other states contribute to the shortage. For example, not all of the over 500 candidates who completed licensure programs in-state in 2005 chose to teach in North Carolina. Also, several colleges support many active teachers still enrolled in lateral entry programs.
Table 1
Total College and University 'Touches,' All Active Teachers Trained In-State in Traditional Licensure Programs
Private 'Touches'
Barton College 172
Bennett College 21
Campbell University 258
Catawba College 63
Duke University 57
Elon University 169
Gardner- Webb University 265
Greensboro College 40
High Point University 99
Johnson C. Smith University 9
Lenoir-Rhyne College 143
Livingstone College 17
Mars Hill College 130
Meredith College 150
Methodist College 46
North Carolina Wesleyan College 38
Pfeiffer University 71
Queens College 20
Salem College 13
Shaw University 18
Wake Forest University 110
Warren Wilson College 14
Wingate University 54
Total Private “ Touches” 1977
Public 'Touches'
Appalachian State University 1457
East Carolina University 1184
Elizabeth City State University 118
Fayetteville State University 295
N.C. A&T State University 168
N.C. Central University 159
N.C. State University 814
University of North Carolina - Asheville 90
University of North Carolina - Chapel Hill 597
University of North Carolina - Charlotte 605
University of North Carolina - Greensboro 531
University of North Carolina - Pembroke 389
University of North Carolina - Wilmington 424
Western Carolina University 620
Winston- Salem State University 60
Total Public “ Touches” 7511
North Carolina Colleges and Universities with active licensure programs
Note About Touches: The state currently tracks information about institutions from which license-holders have degrees, but it does not track information about institutions at which license-holders completed their licensure work. Thus, a teacher may have graduated from Fayetteville State University with a bachelor's degree in math and from Duke University with a master's degree in statistics but have earned his high school math licensure at the University of North Carolina-Greensboro. In the licensure database, only his degrees from FSU and Duke would be listed; there would be no indication of his work at UNC-G.
The data in this table and in some sections of the full report were generated by counting 'touches.' That is, every time a college with a math or science licensure program is mentioned in a teacher's record, that school is given credit for 'touching' or potentially influencing a teacher and her or his decision to teach math or science. The resulting number is therefore not a true count of college and university production but instead only an approximation (and an over-estimate in almost all cases). Thus, when we report that 1,457 of the math and science teachers teaching in 2005-2006 were 'touched' by Appalachian State University, we are saying that 1,457 math and science teachers have at least one degree from ASU. The actual proportion of those teachers who earned their licensure at ASU is smaller but unknown. While this solution to the problem is acceptable for the back-of-the-envelope estimates presented in this report, more accurate and meaningful assessments of college and university teacher production will require more accurate records.
Total “ Touches”
Total Public 7511
Total Private 1977
In- state- trained teachers whose college and university connection is unknown 393
Total 9881
Findings: 1. The majority of in-state trained math and science teachers who completed traditional programs (c. 75-80%) also earned degrees from public universities. 2. The impact of Regional Alternative Licensing Centers (RALC) licensure on math and science teacher totals is limited (21 RALC-only; another 41 in conjunction with an NC college or university).
• Math and science teachers leave the classroom at a very high rate during their first two years of teaching, and they continue to leave at a lower but steady rate in succeeding years. Based on 2004-2005 and 2005-2006 figures, math and science teachers leave the classroom rapidly after the first two years of teaching. A steady but lesser share of each cohort of teachers continues to leave the classroom over the next 12 years of their careers. In 2005-06, 7.4% of the secondary-school teaching workforce had no experience, and 6.4% had only one year of experience.
• For every 10 math and science teachers hired, more than 7 are hired to fill vacancies and slightly more than 2 are hired to meet growth in the student population. It is difficult to estimate the total annual number of math and science teachers newly hired by the state. We can, however, make approximations. Conservatively, we estimate that North Carolina districts made about 1,200 new hires in 2005-2006. The secondary student population has grown by about 2.5% per year over the past 10 years. At the current teacher-student ratio, this rate of enrollment growth suggests that fewer than 300 of the state's estimated 1,200 vacancies last year were new positions. As a result, the majority of the more than 900 zero-experience teachers hired for the 2005-2006 school year were hired to replace departing experienced teachers, not to meet demand created by growth in the student population. There always will, and should, be vacancies due to turnover; the question for the state is whether the current rate of turnover is acceptable, especially given the current production rate of in-state math and science teacher preparation programs.
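A back-of-the-envelope check of that decomposition, using only figures quoted in this report (so these are approximations layered on approximations, not DPI-published totals), can be written in a few lines of Python:

active_classroom = 13200 * 0.867   # licensed math/science teachers actually in middle/high classrooms (Table 3)
growth_rate = 0.025                # approximate annual growth in secondary enrollment
new_positions = active_classroom * growth_rate
print(round(new_positions))        # ~286, i.e. 'fewer than 300' growth-driven openings
print(1200 - round(new_positions)) # ~914 of the ~1,200 hires attributable to turnover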
Graph 1. Distribution of Teachers by Experience, 2005-2006. (Vertical axis: percent of N.C. math and science teachers in the middle/high school workforce, 0% to 8%; horizontal axis: years of experience, 0 to 40. Callouts: 13.8% of math and science teachers have less than two years of experience; 22% have less than four years of experience.)
• Some of the retention problem is a result of teachers moving among districts in the state, not just of the state losing teachers due to retirement and career changes. Estimates for this report suggest that the statewide annual leaving rate for math and science teachers is probably around 7%. However, in a 2005 study, the Department of Public Instruction reported an average annual teacher leaving rate for all disciplines of about 12% per district. One reason why our statewide estimate is lower than the average reported leaving rate is that our statewide rate does not include teachers who stay in-state and in-profession but who move between districts. In 2004-2005, nearly 20% of all teachers who left their positions did so to take jobs in another district in the state.
n Most of the state’s licensed math and science teachers are designated as highly qualified; however, middle school science license
holders lag behind other math and science
teachers in earning the highly qualified designation. More than 88% of the licenses held by licensed math and science classroom teachers also were held by teachers who were designated “ highly qualified” for those areas of licensure under the stipulations of the federal No Child Left Behind Act. The rate was highest
for high school math and science teachers ( about 94%). However, in 2005– 2006, more than 1,000 ( nearly 22%) of the 4,750 middle school science license holders were not designated
“ highly qualified.”
• In 40 school districts, one-fourth of the math and/or science teachers are teachers with 0 to 3 years of experience ('early career teachers'); in 11 of these districts, early career teachers make up one-third or more of the licensed math and science teacher corps. Though a definite pattern is not discernible, it is interesting to note that of the 16 largest districts (districts with more than 10,000 students), 10 employed a higher-than-average number of early career math and science teachers (led by Guilford County at above 33%). By contrast, of the 14 smallest districts (districts with fewer than 1,000 students), only 4 employed a higher-than-average number of early career math and science teachers (in fact, Camden and Gates Counties employed no math and science teachers with fewer than 4 years of experience). On the other hand, the teaching workforces of three small districts, Hoke, Jones, and Vance Counties, were composed of at least 40% early career teachers.
Map 2. Percentage of early career licensed math and science teachers working in each district. Legend: less than 15% are early career teachers; 15-19%; 20-24%; 25-29%; 30% or greater; city school districts. Note: The average statewide early career rate for math and science teachers is 23.2%. Source: N.C. Department of Public Instruction.
• School systems of similar size sometimes employ widely different numbers of licensed math and science teachers. On average, North Carolina school districts employed about 21 teachers per 1,000 secondary students. Actual district employment ratios ranged from as few as 11 licensed teachers per 1,000 secondary students (Yadkin County) to as high as 35 licensed teachers per 1,000 secondary students (Hyde County).
• Many of our active and licensed math and science teachers (more than 2,000, or roughly 13%) are not working in middle or high school classrooms. More than 1,600 (about 11% of all) active teachers with middle or high school math and/or science licenses were teaching in elementary schools during the 2005-2006 school year. Nearly 400 more (about 2.5% of all licensed and active) were working in central administration offices. Therefore, one possible reason for a math and science teacher shortage is that thousands of eligible teachers are working in some other capacity within school systems. To be sure, many of the elementary teachers likely hold elementary licenses as well as math and/or science licenses, but according to NC DPI research (2005), math and science positions have been consistently much harder to fill than have been elementary positions.
Limitations of the Data
This issue of Carolina Context is pulled from a larger report, Examining the Pipeline: An Analysis
of Math and Science Teacher Preparation in North Carolina, that can be found on the Program
web site, www.southnow.org. This analysis presents as many questions as it does answers.
The state currently tracks information about institutions from which license-holders have degrees, but it does not contain information about institutions at which license-holders completed their licensure work. This is an important distinction. A teacher may have graduated from Fayetteville State University with a bachelor's degree in math and from Duke University with a master's degree in statistics but earned his high school math licensure at the University of North Carolina-Greensboro. In the licensure database, only his degrees from FSU and Duke would be listed; there would be no indication of his work at UNC-G.
The effects of these missing data are twofold:
first, it is difficult to assess accurately how many North Carolina-based teachers an institution
has produced; and second, it reduces the state’s ability to draw connections between teacher quality and teacher training.
In addition, available data do not allow us to compare the demand for math and science teachers for the past several years across local education agencies (LEAs), nor do they allow us to compare accurately the number of math and/or science teachers who leave the profession to the number of new math and/or science hires. Finally, our data did not indicate how many unlicensed teachers are teaching math and/or science in North Carolina.
We realize that the UNC General Administration,
the Department of Public Instruction, and others are actively considering these data shortcomings. Our goal in this report is to offer
an initial analysis of what is available and to inform the policy discussions over improving
math and science teacher education. n
Map 3. Number of licensed math and science teachers per 1,000 students. Legend: fewer than 20 teachers per 1,000 students; 20-24 teachers per 1,000 students; 25 or more teachers per 1,000 students; city school districts. Note: Not every school district in North Carolina has 1,000 middle and high school students; in the case of those districts, the number is an estimate based on the existing ratio of math and science teachers to students. The average number of licensed math and science teachers per 1,000 students within the 115 school districts is 22. Source: N.C. Department of Public Instruction.
Work in Progress
The UNC Program on Public Life remains in the process of building a website (www.southnow.org) that serves to provide policy makers, faculty, students and citizens with data and analysis on electoral trends and issues in North Carolina and the South.
This issue of Carolina Context summarizes the research conducted by Trip Stallings, a PhD student at UNC-Chapel Hill. You can find a link to the full report, as well as a pdf version of this white paper, on the southnow.org home page. If you would like to receive a paper copy, please let us know by emailing our colleague Kendra Cotton at kendradc@unc.edu.
In addition to Carolina Context, the website contains archives of our other publications, NC DataNet and SouthNow.
We have in process the following projects:
NC DataNet - 1) a data profile of the 2007-08 General Assembly; 2) an analysis of the 2006 judicial elections; and 3) an analysis of trends in the 2006 Congressional elections.
SouthNow - A report on the state of entrepreneurship in the Southern states.
Carolina Context - 1) a white paper on workforce training in bio-manufacturing and 'sector-based' economic development strategies; and 2) the results of two working roundtables on issues affecting coastal communities.
Note: Totals reflect all math and science licenses held by people
currently employed by NC public schools; a teacher can hold more than one license.
Table 3
Where Active Math and Science Teachers Were Working in North Carolina, 2005-06
School employees holding a math or science license for middle or high school
Of these employees, those licensed through traditional in-state licensure programs
Of these employees, those enrolled in lateral entry programs
Of these employees, those licensed through other licensure programs
Licensees teaching in middle or high schools (86.7%)
Licensees teaching in elementary schools (10.8%)
Licensees working in central offices (2.5%)
Finding: There is a sizeable number of public school employees who are licensed to teach math or science but who are currently working in other capacities (e.g., elementary education or central office work).
Table 2
Total Math and Science Licenses Held, by License Level, 2005-2006
Total # of math and science licenses held by NC teachers: 20,875
- Teachers who received at least one license, of any level, from a traditional in-state program: 57.3% (11,958)
- Teachers who received their licenses from out-of-state institutions or through alternate programs: 42.7% (8,917)
Total # of bachelor's-level licenses in math and science: 17,837
- Traditional in-state: 55.8% (9,952)
- Out-of-state or alternate: 44.2% (7,885)
Total # of master's-level licenses in math and science: 2,944
- Traditional in-state: 66.4% (1,955)
- Out-of-state or alternate: 33.6% (989)
Total # of 6th-year-level licenses in math and science: 59
- Traditional in-state: 67.8% (40)
- Out-of-state or alternate: 32.2% (19)
Total # of doctoral-level licenses in math and science: 35
- Traditional in-state: 31.4% (11)
- Out-of-state or alternate: 68.6% (24) | {"url":"http://digital.ncdcr.gov/cdm/singleitem/collection/p249901coll22/id/6633/rec/3","timestamp":"2014-04-20T18:54:43Z","content_type":null,"content_length":"150806","record_id":"<urn:uuid:0aeabdcb-de32-426a-872a-959b8b2a604c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
Stafford, TX Calculus Tutor
Find a Stafford, TX Calculus Tutor
A mother of three, I graduated with honors in Chemical Engineering and taught at Kumon for over 5 years. I love helping students discover the unlimited ways of learning higher level math. I am
available on Saturday mornings and provide a free diagnostic to test for areas of strength and weakness.
9 Subjects: including calculus, algebra 1, algebra 2, SAT math
...I have tutored students in 7th, 8th, 9th, 10th and 11th grades in geometry, algebra I and II, English (grammar and reading comprehension), world geography, science and history. My aim when I am
tutoring is to give students background knowledge on the subject area and then let them solve problems ...
19 Subjects: including calculus, chemistry, algebra 2, algebra 1
I am a Texas Certified Math Teacher, with experience teaching Algebra 1 and 7th grade math classes. Additionally, I've tutored other students from almost all age levels, ranging from elementary
math to College courses. I've also successfully helped students with learning disabilities such as dyslexia and ADD.
12 Subjects: including calculus, English, physics, geometry
...While the technical subjects are my greatest strength and specialty, I can also offer tutoring in the social sciences. In high school, I took AP American History and earned a 5 on the AP exam.
I have studied calculus, tutored calculus, and applied it to real-world design problems.
37 Subjects: including calculus, chemistry, writing, physics
I have been a private math tutor for over ten (10) years and am a certified secondary math instructor in the state of Texas. I have taught middle and high-school math for over ten (10) years. I am
available to travel all over the greater Houston area, including as far south as Pearland, as far north as Spring, as far west as Katy and as far east as the Galena Park/Pasadena area.
9 Subjects: including calculus, geometry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/Stafford_TX_calculus_tutors.php","timestamp":"2014-04-16T13:46:25Z","content_type":null,"content_length":"24219","record_id":"<urn:uuid:22a6cc98-3262-43bb-927d-413bf63af61b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
please help i dont get it? For your birthday you are given $150. After three weeks you have $120. Two weeks after that, you have $100. If the amount that you spend each week remains constant, how
much will you have 10 weeks after your birthday? (1 point) $40 $50 $60 $70
• one year ago
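For reference, a worked check under the problem's constant-spending assumption: the weekly rate is (150 - 120)/3 = $10, consistent with the further $20 drop over the next two weeks, so ten weeks after your birthday you have 150 - 10(10) = $50.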
{"url":"http://openstudy.com/updates/4fbfe569e4b0964abc826d77","timestamp":"2014-04-18T10:46:07Z","content_type":null,"content_length":"35253","record_id":"<urn:uuid:1b63caf7-832c-4139-b1ef-af11c9f4529a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
How to use the concept of calorimetry in solving a numerical problem.
• one year ago
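For reference, the basic calorimetry relation is q = m·c·ΔT, and in a calorimeter the heat lost by the hotter body equals the heat gained by the water (plus the calorimeter itself). A simple numerical example with assumed values: warming 100 g of water (c = 4.18 J/g·°C) by 10 °C requires q = 100 × 4.18 × 10 = 4,180 J.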
{"url":"http://openstudy.com/updates/5055a033e4b0a91cdf449381","timestamp":"2014-04-20T16:16:44Z","content_type":null,"content_length":"51791","record_id":"<urn:uuid:f4809b01-1495-48a1-9999-000000000000>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
Recent Research on Buckling
This research has been supported by the National Science Foundation under grants DMS 9706594 and DMS 0074160 and by the Air Force Office of Scientific Research under grant F49620-98-0161. It has been
conducted jointly with Professor Monique Dauge of University of Rennes I, Rennes, France. DISCLAIMER: Any opinions, findings, and conclusions or recommendations expressed in this material are those
of the author(s) and do not necessarily reflect the views of the National Science Foundation or AFOSR.
In the above picture, a plate (or flange) with a thin strip attached to its bottom (which acts as a stiffener) is pressed inwards by a force acting on its two ends in the direction of its
longitudinal axis. If the force is not too large, the deformation of the flange will be very small, but once a certain threshold is crossed, the flange will buckle. Such buckling is a form of
engineering failure, and engineers are often interested in finding the minimum force or loading under which it will occur. Shown in the pictures above are the first three buckling modes, i.e.
the deformed configurations of the flange after buckling has taken place.
Classical buckling formulations for plates usually use an idealized model for the plate called a Kirchhoff plate model, where the plate is considered so thin that its thickness approaches the limit
of zero. However, it becomes difficult with such formulations to consider truly three-dimensional objects (for instance, the role played by the stiffener). For this reason, a recent formulation by
Professor Barna Szabo and his group at the Mechanical Engineering Department at Washington University, St. Louis, MO, utilized the full three-dimensional plate rather than some reduced model. The
resulting algorithm was implemented in the code Stress Check, and shown to accurately predict the minimum force that caused buckling in a variety of test cases.
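In formulations of this kind, the buckling loads emerge as eigenvalues of a generalized eigenvalue problem pairing an elastic stiffness operator with a geometric (stress) operator. As a rough one-dimensional analogue of that idea, and emphatically not the Stress Check algorithm itself, the classical Euler column can be discretized by finite differences and its critical load recovered as the smallest eigenvalue (a Python sketch):

import numpy as np
from scipy.linalg import eigh

# Euler buckling of a pinned-pinned column: EI*y'''' = -P*y'', with y = y'' = 0 at both ends.
# Discretization yields the generalized eigenproblem  A y = (P/EI) B y.
n, L = 200, 1.0                        # interior grid points, column length
h = L / (n + 1)
off = np.ones(n - 1)
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / h**2
A = D2 @ D2                            # fourth derivative (pinned ends make D2*D2 valid)
B = -D2                                # symmetric positive definite
loads = eigh(A, B, eigvals_only=True)  # ascending values of P/EI
print(loads[0], np.pi**2 / L**2)       # smallest eigenvalue vs exact P_cr/EI = pi^2/L^2

The spurious modes described below are, in this eigenvalue picture, extra low eigenvalues of the discrete problem whose eigenvectors do not correspond to physical deformations of the structure.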
However, engineers cannot test all possible situations to which an algorithm may be applied. As a mathematician, my goal was to find out if there might be some cases where the algorithm failed to
give accurate results. Also, to mathematically prove that in all other cases, the algorithm did, in fact, give accurate results (the proof being general enough that it covered the widest possible
class of cases that would be of engineering interest).
The first thing proved (jointly with Professor Monique Dauge of University of Rennes I, Rennes, France) was that if the plate (or other structure) was sufficiently thin, then the algorithm would give
accurate results. Also, it was found that when the structure was too thick, then the algorithm returned buckling load predictions that were too low. The above pictures illustrate this. A circular
disc is subjected to inward forces along its circumferential face. In the first picture, the first four buckling modes are shown when the thickness of the disc is 0.6 (the radius is 1, and only a
quarter of the disc is shown). It turns out that the first two are true (and accurate) buckling modes, while the lower two (which have a markedly different character) are non-physical (or spurious)
buckling modes. In the second picture, the thickness is increased to 1.4. Now, all four modes shown are non-physical, and the buckling force returned is much lower than the true one needed to buckle
the cylinder.
It turns out that if the structure has corners and edges (as is true of most real-life objects), another problem can arise, even if the structure is very thin. For instance, consider the cracked
plate in the above pictures (the crack being a five degree notch cut into the side of the plate). Suppose the plate is subjected to a force which acts throughout its body, and is directed downwards
in the top half and upwards in the bottom half. It is known that the resulting stress will be highly concentrated at the vertex of the notch, which is why the immediate area there is usually highly
refined in calculations, to accurately capture this stress. Unfortunately, this refinement can have a negative effect where buckling is concerned - it can give rise to spurious buckling modes. In the
first picture, the top two buckling modes are accurate, but the third one is not. The character of this third mode is again markedly different, with all the deformation taking place in the elements
very close to the vertex (as seen in the blown-up detail). When the refinement around the vertex is increased further (second picture), all the calculated buckling modes turn out to be spurious (with
the one of engineering interest being about 50% higher than the computed one). Again, all the `action' is concentrated near the center with these spurious modes.
The above results help engineers by defining the limits within which the algorithm is useful. Fortunately, the cases where there might be a problem are either not of interest (such as very 'thick'
structures) or can be easily avoided (such as over-refinement at vertices). Hence the reliability of this algorithm can be established. | {"url":"http://www.math.umbc.edu/~suri/buckling.html","timestamp":"2014-04-18T15:39:03Z","content_type":null,"content_length":"7248","record_id":"<urn:uuid:7b7f08dd-244d-43c9-bc0f-73b946f8928d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00112-ip-10-147-4-33.ec2.internal.warc.gz"} |
Manchester, CT Math Tutor
Find a Manchester, CT Math Tutor
...Last year I worked for the Boys and Girls Club, where I was a part of the after-school program staff. It was during this job that I discovered how much I liked working with students. Most of
my time there was spent working with younger kids to help them with their homework, but I also conducted one-on-one tutoring sessions with high-school and middle-school aged students.
15 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel
...On my Kaplan diagnostic exam I received an 11. Please see organic chemistry section for my qualifications as an organic chemistry tutor. Currently, I am a first year medicinal chemistry Ph.D.
student at the University of Connecticut.
14 Subjects: including SAT math, chemistry, biology, MCAT
...I would gladly provide proof if requested. I use MATLAB almost on a daily basis for my research purposes as a graduate assistant at UConn. My laboratory studies topics where derivations and
simulations are mainly carried out in MATLAB.
13 Subjects: including precalculus, algebra 1, algebra 2, calculus
...I use many of my own classroom materials and believe in a multi-sensory approach that allows students to explore a concept in depth using manipulatives, discussion, and practice. I use lots
of visuals when teaching and encourage students to connect their learning to real life experiences. In g...
33 Subjects: including calculus, discrete math, differential equations, Aspergers
...I work full time at MT Medical as an inventory manager. I hope to eventually transfer to a four year school where I plan on completing a degree in engineering and mathematics. I have many
hobbies I enjoy during my free time.
7 Subjects: including calculus, C++, algebra 1, algebra 2
Related Manchester, CT Tutors
Manchester, CT Accounting Tutors
Manchester, CT ACT Tutors
Manchester, CT Algebra Tutors
Manchester, CT Algebra 2 Tutors
Manchester, CT Calculus Tutors
Manchester, CT Geometry Tutors
Manchester, CT Math Tutors
Manchester, CT Prealgebra Tutors
Manchester, CT Precalculus Tutors
Manchester, CT SAT Tutors
Manchester, CT SAT Math Tutors
Manchester, CT Science Tutors
Manchester, CT Statistics Tutors
Manchester, CT Trigonometry Tutors
Nearby Cities With Math Tutor
Bolton, CT Math Tutors
East Hartford Math Tutors
Glastonbury Math Tutors
Hartford, CT Math Tutors
Meriden, CT Math Tutors
Middletown, CT Math Tutors
New Britain Math Tutors
Newington, CT Math Tutors
Rocky Hill, CT Math Tutors
South Windsor Math Tutors
Vernon, CT Math Tutors
Weathersfield, CT Math Tutors
West Hartfrd, CT Math Tutors
Wethersfield Math Tutors
Windsor, CT Math Tutors | {"url":"http://www.purplemath.com/manchester_ct_math_tutors.php","timestamp":"2014-04-18T08:41:28Z","content_type":null,"content_length":"23989","record_id":"<urn:uuid:c05bbc2c-2f38-45db-aabd-ca53a1ae8fd9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00491-ip-10-147-4-33.ec2.internal.warc.gz"} |
In this problem set, learners will analyze a table of global electricity consumption to answer a series of questions and consider the production of carbon dioxide associated with that consumption.
Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will analyze a graph of solar irradiance since 1610. They will consider average insolation, percent changes, and the link between irradiance and climate change. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will analyze two figures: a graph of Arctic sea ice extent in September between 1950 and 2006, and a graph showing poll results for 2006-2009 for percentage of adults
that believe there exists scientific evidence for global warming. They will develop linear models for both graphs. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will calculate the energy consumption of a home in kilowatt-hours (kWh) to answer a series of questions. They will also consider carbon dioxide production associated with that energy consumption. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, learners will create and use a differential equation of rate-of-change of atmospheric carbon dioxide. They will refer to the "Keeling Curve" graph and information on the sources
and sinks of carbon on Earth to create the equation and apply it to answer a series of questions. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.
In this problem set, students calculate precisely how much carbon dioxide is in a gallon of gasoline. A student worksheet provides step-by-step instructions as students calculate the production of
carbon dioxide. The investigation is supported by the textbook "Climate Change," part of "Global System Science," an interdisciplinary course for high school students that emphasizes how scientists from a wide variety of fields work together to understand significant problems of global impact.
Students are presented with a graph of atmospheric CO2 values from Mauna Loa Observatory, and are asked to explore the data by creating a trend line using the linear equation, and then use the equation to predict future CO2 levels. Students are asked to describe qualitatively what they have determined mathematically, and suggest reasons for the patterns they observe in the data. A clue to the reason for the data patterning can be deduced by students by following up this activity with the resource, Seasonal Vegetation Changes. The data graph and a student worksheet are included with this activity. This is an activity from Space Update, a collection of resources and activities provided to teach about Earth and space. Summary background information, data and images supporting the activity are available on the Earth Update data site.
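The core computation in that activity is a least-squares trend line. A minimal Python sketch with illustrative annual-mean values (not the activity's actual data file) might look like:

import numpy as np

years = np.array([2000, 2002, 2004, 2006, 2008])     # illustrative years
co2 = np.array([369.5, 373.2, 377.5, 381.9, 385.6])  # illustrative annual means, ppm
slope, intercept = np.polyfit(years, co2, 1)         # fit co2 ~ slope*year + intercept
print(f"trend: {slope:.2f} ppm per year")
print(f"predicted 2020 level: {slope * 2020 + intercept:.1f} ppm")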
In this online, interactive module, students learn about severe weather (thunderstorms, hurricanes, tornadoes, and blizzards) and the key features for each type of "wild weather" using satellite
images. The module is part of an online course for grades 7-12 in satellite meteorology, which includes 10 interactive modules. The site also includes lesson plans developed by teachers and links to related resources. Each module is designed to serve as a stand-alone lesson; however, a sequential approach is recommended. Designed to challenge students through the end of 12th grade, middle school teachers and students may choose to skim or skip a few sections.
In this interactive, online module, students learn about satellite orbits (geostationary and polar), remote-sensing satellite instruments (radiometers and sounders), satellite images, and the math
and physics behind satellite technology. The module is part of an online course for grades 7-12 in satellite meteorology, which includes 10 interactive modules. The site also includes lesson plans developed by teachers and links to related resources. Each module is designed to serve as a stand-alone lesson; however, a sequential approach is recommended. Designed to challenge students through the end of 12th grade, middle school teachers and students may choose to skim or skip a few sections.
The rate that the snow melts is crucial in determining how fast water will reach the streams and rivers and thus how damaging the flooding may be. This resource provides a step-by-step calculation of the volume of water stored in the snow in the Red River basin. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.
1. Create your own third degree polynomial that when divided by x + 2 has a remainder of –4. 2. Create your own division of polynomials problem. Demonstrate how this problem would be solved using
both long division and synthetic division.
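(No answer survives in this copy of the thread. A worked example, not from the original: by the remainder theorem, a polynomial p(x) leaves remainder p(-2) when divided by x + 2, so any cubic with p(-2) = -4 works; for instance p(x) = x^3 + 4, since p(-2) = -8 + 4 = -4. For the second part, synthetic division on the root -2 with coefficients 1, 0, 0, 4 gives 1, -2, 4 with remainder -4, so x^3 + 4 = (x + 2)(x^2 - 2x + 4) - 4, which matches the long-division result.)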
Find a Van Nuys Calculus Tutor
...As a result, I use different tactics with different students. I have worked with clients from many different mathematical backgrounds, from elementary all the way to advanced. As far as test prep goes, I make a sample test for the student from the homework problems covered by the test, and I give the student a certain amount of time to do it; then we'll go over their mistakes together.
12 Subjects: including calculus, physics, geometry, algebra 1
...I specialize in helping students construct essays for the written portion of the exam. I've often been called a "computer whiz," but more than anything I am humble, knowledgeable, and patient when it comes to technology and sharing computer knowledge with others (my first "student" was my mother...
60 Subjects: including calculus, chemistry, reading, Spanish
...I also recognize which advisors were able to help me and which ones were not. In turn, I've learned how to be patient while remaining resolute in jointly pursuing a scholastic goal. Lastly, I
am in a perfect position to sympathize with the challenges facing students, as I myself am about to become one again!
58 Subjects: including calculus, English, reading, writing
...Reading Spanish articles, newspapers, poetry, novels, etc. 4. Writing basic and complex Spanish sentences. I had many students in Los Angeles who passed their Spanish classes with an A plus.
I've been tutoring since 1993 and I taught high school for one year. I like to have a friendly relationship with my students so it's not such a drag for them to show up to sessions and so they stay inspired to learn. I've worked with students with different academic backgrounds and learning abilities, and I understand the potential problems students may run into while learning new material.
10 Subjects: including calculus, chemistry, algebra 2, algebra 1 | {"url":"http://www.purplemath.com/Van_Nuys_calculus_tutors.php","timestamp":"2014-04-18T06:22:12Z","content_type":null,"content_length":"24007","record_id":"<urn:uuid:16de25a3-9612-41d5-9693-6ed438cba003>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
AiS Challenge
AiS Challenge Team Interim
Team Number: 37
School Name: Las Cruces High School
Area of Science: Physics
Project Title: Mobile Asteroids
Objective: To find out approximately how much fuel and which
materials would be required to obtain an asteroid from a
point in space and to maneuver its course into Earth's orbit.
From there we will be able to extract different materials from the
asteroid, such as hard metals and possible gasses/fossil fuels.
Our resources are coming from various web sites across the world.
Some of the different equations we are using are:
3.986e14 m^3/s^2 (4e14) -- Gravitational constant times mass of Earth
7726 m/s (8000) -- Earth orbital velocity at 300 km altitude
at/c = Ve/c * ln(MR) -- Relativistic rocket with exhaust velocity Ve
and mass ratio
d = d0 + vt + .5at^2 \
v = v0 + at | -- For constant acceleration
v^2 = 2ad /
delta V = 2 sqrt(GM/r) sin (phi/2) -- For changing velocity
f=ma -- Force is mass times acceleration
w=fd -- Work (energy) is force times distance
These equations were gathered at NASA and will be used in our program with
other equations.
The program will include these equations, along with other equations,
in order to calculate results from the user's input.
The URL's are as follows:
Implementation: Using different mathematical formulas from different
resources we will compile a very complex C++ program. This program
will determine the amount of fuel, the distance, the time, the weight, and
all other factors that will affect moving the asteroid.
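To give a concrete flavor of the planned calculation, here is a minimal sketch in C (illustrative only; this is not the team's program, and the prompts and variable names are assumptions) that applies the rocket equation listed above to estimate the fuel needed for a given velocity change:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double dv;  /* required change in velocity, m/s           */
    double ve;  /* engine exhaust velocity, m/s               */
    double m0;  /* initial mass: asteroid + tug + fuel, kg    */
    double mf;  /* final (dry) mass after the burn, kg        */

    printf("Enter delta-v (m/s), exhaust velocity (m/s), initial mass (kg): ");
    if (scanf("%lf %lf %lf", &dv, &ve, &m0) != 3)
        return 1;

    /* Classical rocket equation: dv = Ve*ln(m0/mf)  =>  mf = m0*exp(-dv/Ve) */
    mf = m0 * exp(-dv / ve);

    printf("Final mass after burn: %g kg\n", mf);
    printf("Fuel required: %g kg\n", m0 - mf);
    return 0;
}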
62J02 General nonlinear regression
A uniform central limit theorem for neural network based autoregressive processes with applications to change-point analysis (2011)
We consider an autoregressive process with a nonlinear regression function that is modeled by a feedforward neural network. We derive a uniform central limit theorem which is useful in the
context of change-point analysis. We propose a test for a change in the autoregression function which - by the uniform central limit theorem - has asymptotic power one for a large class of
alternatives including local alternatives. | {"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/12225","timestamp":"2014-04-21T08:32:59Z","content_type":null,"content_length":"16321","record_id":"<urn:uuid:20725c3c-2abd-4df5-bdd4-b19e053cdce4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00646-ip-10-147-4-33.ec2.internal.warc.gz"} |
Learn About Wiring Solar Panels And Batteries
There are three types of wiring configurations that are relatively easy to learn. Once mastered, the job of wiring batteries or solar modules becomes easy as pie. The three configurations are:
Series wiring
Parallel wiring
And a combination of the two known simply as series/parallel wiring.
In any DC generating device such as a battery or solar module, you will always have a negative (-) terminal and a positive (+) terminal. Electrons (current) flow from the negative terminal through a load to the positive terminal.
For ease of explanation we shall refer to a solar module or battery as a "Device"
Series Wiring
To wire any device in series you must connect the positive terminal of one device to the negative terminal of the next device
Important: When you wire devices in series the individual voltages of each device are additive. In other words, if each device in the above example had the potential of producing 12 volts, then 12 + 12 + 12 + 12 = 48 volts. If these devices were batteries then the total voltage of the battery pack would be 48 volts. If they were solar modules that produced 17 Volts each then the total voltage of
the solar array would be 68 volts.
The second important rule to remember about series circuits is that the current or amperage in a series circuit stays the same. So if these devices were batteries and each battery had a rating of 12
Volts @ 220 Amp hours then the total value of this series circuit would be 48 Volts @ 220 Amp hours. If they were solar modules and each solar module had a rating of 17 volts and were rated at 5 amps
each then the total circuit value would be 68 volts @ 5 amps.
In the example below two 6 Volt 350 Amp hour batteries were wired in series which yields 6 Volts + 6 Volts = 12 Volts @ 350 Amp hours.
If the above devices were solar modules which were rated at 17 volts each @ 4.4 amps then this series circuit would yield 34 volts at 4.4 amps.
Remember the Voltage in a series circuit is additive and the Current stays the same.
Parallel Circuits
To wire any device in parallel you must connect the positive terminal of the first device to the positive terminal of the next device and negative terminal of the first device to the negative
terminal of the next device.
Important: When you wire devices in parallel the resulting Voltage and Current behave just the opposite of a series circuit. Instead, the Voltage in a parallel circuit stays the same and the Current is additive. If each device in the above example had the potential of producing 350 Amp hours, then 350 + 350 = 700 Amp hours, and the Voltage would stay the same.
If these devices were batteries then this parallel circuit would yield a total voltage of 12 volts @ 700 Amp hours. If these devices were solar modules that produced 17 Volts @ 4.4 amps each, then this parallel circuit would yield 17 Volts @ 8.8 amps.
In the example below four 17 Volt @ 4.4 Amp solar panels were wired in parallel which yields 4.4 Amps + 4.4 Amps + 4.4 Amps + 4.4 Amps = 17.6 amps total @ 17 volts
if the above devices were batteries which were rated at 12 volts each @ 220 Amps hours then this parallel circuit would yield 12 volts @ 880 Amp hours.
Remember the Voltage in a parallel circuit stays the same and the Current is additive.
Series/Parallel Circuits
Hold on to your hats because here's where it gets a little wild. Actually you've already learned all you need to know to understand series/parallel circuits.
A Series/parallel circuit is simply two or more series circuits that are wired together in parallel.
In the above example two separate pairs of 6 Volt batteries have been wired in series and each of these series pairs have been wired together in parallel.
You might be asking why in the world someone would want to put themselves through this. Well, let's say that you want to increase the Amp hour rating of a battery pack so that you can run your
appliances longer, but you need to wire the pack in such a way as to keep the battery pack at 12 volts; or you want to increase the charging capacity of your solar array, but you need to wire the
solar modules in such a way as to keep the solar array at 34 volts. Series/parallel is the only way to do that.
Remember, in parallel circuits the current is additive, so you increase your run time (Amp hour capacity) or, in the case of solar modules, your charging current by wiring the
batteries or solar modules in parallel. Since we need 12 volts and have 6 volt batteries, or in the case of solar modules we need 34 Volts and have 17 Volt modules on hand, wiring the
An easy way to visualize it would be to start by wiring the batteries in individual sets that will give you the voltage that you need. Let's say that you need 24 volts but have six volt batteries on
hand. First wire four of the batteries in series to get 24 volts (remember, wire in series to increase the voltage), and continue to wire additional sets of four batteries until the batteries are used up.
Next wire each series set of four batteries in parallel to each other (Positive to positive to positive and so on and then negative to negative to negative and so on) until each series set is wired
together in parallel. If each series set of batteries equals 24 Volts at 350 Amp hours then five series sets wired to each other in parallel would give you a 24 Volt @ 1750 Amp hour battery pack.
Remember: In a series circuit the current stays the same but the voltage is additive. In a parallel circuit the voltage stays the same but the current is additive.
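These two rules are easy to put into code. The short C sketch below (an illustration added here, not part of the original guide; the ratings are the example values from above) computes the totals for the 24 Volt @ 1750 Amp hour pack:

#include <stdio.h>

int main(void)
{
    double device_volts = 6.0;    /* voltage rating of one battery       */
    double device_ah    = 350.0;  /* amp-hour rating of one battery      */
    int in_series   = 4;          /* batteries wired in each series set  */
    int in_parallel = 5;          /* series sets wired in parallel       */

    /* Series: voltage is additive, current (amp-hours) stays the same.   */
    double set_volts = device_volts * in_series;   /* 24 V   */
    double set_ah    = device_ah;                  /* 350 Ah */

    /* Parallel: voltage stays the same, current (amp-hours) is additive. */
    double pack_volts = set_volts;                 /* 24 V    */
    double pack_ah    = set_ah * in_parallel;      /* 1750 Ah */

    printf("Battery pack: %g Volts @ %g Amp hours\n", pack_volts, pack_ah);
    return 0;
}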
If you need any assistance in wiring your system together, remember we're just a phone call away. We can provide you with diagrams or any sort of assistance that you might need to complete your
installation. Give us a call, we're here to help: 1-888-955-3471
Geronimo Cardano
Cardano, Geronimo (jārôˈnēmō kärdäˈnō), 1501–76, Italian physician and mathematician. His works on arithmetic and algebra established his reputation. Barred from official status as a physician
because of his illegitimate birth, he practiced as a medical astrologer. His major work, De subtilitate rerum (1550), on natural history, is perceptive and implies a grasp of evolutionary principles.
His book on games of chance represents the first organized theory of probability. Cardano described a tactile system similar to Braille for teaching the blind and thought it possible to teach the
deaf by signs.
See his The Book of my Life (1643, tr. 1930); studies by O. Ore (with a tr. of Cardano's Book of Games of Chance, 1965) and A. Wykes (1969).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Matrix Multiplication
Multiplication in the C language is very useful. Below is a simple example of matrix multiplication; later on I will post matrix arithmetic operations using C++.
#include <stdio.h>

int main(void)
{
    int a[10][10], b[10][10], c[10][10], i, j, k, m, n, p, q;

    printf("Enter The Rows And Columns Of The First Matrix: ");
    scanf("%d %d", &m, &n);
    printf("\nEnter The Rows And Columns Of The Second Matrix: ");
    scanf("%d %d", &p, &q);

    if (n != p)   /* columns of the first matrix must equal rows of the second */
    {
        printf("Aborting!\nMultiplication Of The Above Matrices Not Possible.\n");
        return 1;
    }

    printf("\nEnter Elements Of The First Matrix:\n");
    for (i = 0; i < m; i++)
        for (j = 0; j < n; j++)
            scanf("%d", &a[i][j]);

    printf("\nEnter Elements Of The Second Matrix:\n");
    for (i = 0; i < p; i++)
        for (j = 0; j < q; j++)
            scanf("%d", &b[i][j]);

    printf("The First Matrix Is:\n");
    for (i = 0; i < m; i++)
    {
        for (j = 0; j < n; j++)
            printf(" %d ", a[i][j]);   /* print the first matrix */
        printf("\n");
    }

    printf("The Second Matrix Is:\n");
    for (i = 0; i < p; i++)            /* print the second matrix */
    {
        for (j = 0; j < q; j++)
            printf(" %d ", b[i][j]);
        printf("\n");
    }

    /* c[i][j] is the dot product of row i of a and column j of b */
    for (i = 0; i < m; i++)
        for (j = 0; j < q; j++)
        {
            c[i][j] = 0;
            for (k = 0; k < n; k++)
                c[i][j] = c[i][j] + a[i][k] * b[k][j];
        }

    printf("\nMultiplication Of The Above Two Matrices Is:\n\n");
    for (i = 0; i < m; i++)
    {
        for (j = 0; j < q; j++)
            printf(" %d ", c[i][j]);
        printf("\n");
    }
    return 0;
}
23 comments:
I found some anomalies in your code:
printf("\nEnter The Rows And Cloumns And Of The Second Matrix:");
scanf("%d %d",&p,&q);
^ here, you put row and col into the variables p and q... but the next part...
printf("\nEnter Elements Of The Second Matrix:\n");
for(i=0;i< m;i++)
for(j=0;j< n;j++)
^ here, you used m and n as sentinels, instead of p and q...
printf("The Second Matrix Is:\n");
for(i=0;i< n;i++) // print the second matrix
^ same here...
Error Regreated !!!!!!!!!
Have corrected it .....
any future hlp warmly welcomed!
You are saying:
printf("\nMultiplication Of The Above Two Matrices Are:\n\n");
for(i=0;i< n;i++)
for(j=0;j< p;j++)
printf(" %d ",c[i][j]);
However, the resulting matrix should be R[r1][c2], which based on your syntax are: m and q. Is that right?
Because of the previous changes the program code was wrong;
now the code is corrected. Thanks for your support.
Sir, can we show row-by-column multiplication as
output? If yes, please send me the code for this program.
Could you explain your query?
An example would be sufficient.
there is an error in this program
the program was a little big
Hmm thanks! Actually I got stuck in the main loop in taking the limits; you got it right!
Thanks again!
Hi Lionel Noronha, nice code. Would you like to be my friend over the net? My email id is manish4mirth@gmail.com; I would like to exchange views on the C language. If you are interested then mail me,
or we can use Orkut also.
printf("\nEnter The Rows And Cloumns And Of The Second Matrix:");
scanf("%d %d",&p,&q);
this is wrong code for entering the values in array. here must be a loop needed.
printf("\nEnter The Rows And Cloumns And Of The Second Matrix:");
scanf("%d %d",&p,&q);
this is wrong code for entering the values in array. here must be a loop needed.
Thanxx.... I have just started coding in c++ and i got stuck in that... but you provided a better material... thanxxx alot...
thanks for the matrix example. It helped me a lot.
Thanks for this useful example.
why have you used a third variable "k" in the multiplication matrix?? why not use "j"??
kindly reply ASAP
Here I'm presenting sample code to multiply two matrices a and b; the result will be stored in matrix c:
void main()
{
int a[3][3] , b[3][3] , c[3][3];
int i , j , k;
cout<<"Enter Matrix A";
for( i = 0 ; i < 3 ; i++)
for( j = 0 ; j < 3 ; j++)
cin>>a[i][j];
cout<<"Enter Matrix B";
for( i = 0 ; i < 3 ; i++)
for( j = 0 ; j < 3 ; j++)
cin>>b[i][j];
for( i = 0 ; i < 3 ; i++)
for( j = 0 ; j < 3 ; j++)
{
c[i][j] = 0;
for( k = 0 ; k < 3 ; k++)
c[i][j] += a[i][k]*b[k][j];
}
cout<<"The resultant matrix is ";
for( i = 0 ; i < 3 ; i++)
for( j = 0 ; j < 3 ; j++)
cout<<a[i][j]<<" ";
}
There is a mistake in your code: in the final loop that prints the resultant matrix,
cout<<a[i][j]<<" ";
should be
cout<<c[i][j]<<" ";
Thank you!!! Was useful.
Good and easy
great job.... | {"url":"http://cppgm.blogspot.com/2007/07/matrix-multiplication.html","timestamp":"2014-04-16T13:02:29Z","content_type":null,"content_length":"77858","record_id":"<urn:uuid:3c39320f-8839-4c65-9853-0a2ef50faf39>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about Mathematica on The Gauge Connection
Dyson-Schwinger equations and Mathematica
As always I read the daily arXiv listing sent to me, and I have found a beautiful work due to Alkofer and collaborators. An important reason to mention it here is that it gives an important tool to work with that can be downloaded. This tool permits one to obtain Dyson-Schwinger equations for any field theory. Dyson-Schwinger equations are a tower of equations giving all the correlators of a quantum field theory, so if you know how to truncate this tower you will be able to get a solution to a quantum field theory in some limit.
The paper is here. The link to download the tool for Mathematica version 6.0 and higher is here.
I hope to have some time to study it and try a conversion for Maple. Currently I was not able to test it, as on my laptop I have an older version of Mathematica, but I am just a few hours away from testing it on my desktop.
a "fake" programming language often used in the design phase of writing a program. Consists of (usually) English words in steps that should translate easily to real code. There are no formal rules
for what pseudocode should look like.
Informal 'rules' of Pseudocode, as I have seen in practice:
Writing in pseudocode is a good idea for beginning programmers, because it allows them to express an algorithm or other method in a combination of natural language and computer language; however, as the programmer becomes more experienced, it is easier for him or her (and in fact it makes more sense to him or her) to express ideas in a programming language. This could lead to the problem of talking in code.
(thing) by The Narrator Wed Mar 21 2001 at 2:19:29
When a programmer designs their own algorithm, they may tend to remember it in pseudocode, especially if it was designed arbitrarily in their head, as opposed to on a computer. An example might be
the pseudocode for a prime number finder:
begin function isprime(number)
    make counter var as longint
    make factorcount var as shortint
    begin loop from 1 to number
        check if modulus of number by counter is 0
        if so, add one to factor count
    end loop
    check if factor count is greater than 2
    if so, return false
    else return true
end function
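For comparison, here is one possible C translation of that pseudocode (an illustration, not part of the original writeup; it keeps the same trial-division logic, counting 1 and the number itself among the factors):

int isprime(long number)
{
    int factorcount = 0;
    long counter;
    for (counter = 1; counter <= number; counter++)
        if (number % counter == 0)   /* modulus of number by counter is 0 */
            factorcount++;           /* if so, add one to factor count    */
    /* a prime has exactly two factors: 1 and itself */
    return factorcount <= 2;         /* note: like the pseudocode, this reports 1 as prime */
}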
how to make tax calculator in excel
In your OP, you added this:
If for Male total income exceed Rs.160000 to 300000 then tax will be 140000*10% if lower than 160000 then no tax and if more than 300000 to 500000 then tax will be 14000+200000*20% and in more than
500000 then 54000+morethan 5000000*30%
Let me take that apart and see if it makes sense.
If income is less than 160,000 then 0 tax.
If income is between 160,000 and 300,000 then tax is 140,000*10% = 14,000. Is that a flat tax?
If income is between 300,000 and 500,000 then tax is 14000+200000*20% = 54,000 Another flat tax?
If income is more than 500,000 then tax is 54,000 + more than 5,000,000*30%. I'm guessing you have an extra 0 in there.
Now, that's what you wrote, but I don't think that's what you meant.
I think you meant:
If income is less than 160,000 then 0 tax.
If income is between 160,000 and 300,000 then tax is 10% of the amount greater than 160,000.
If income is between 300,000 and 500,000 then tax is 14,000 plus 20% of the amount greater than 300,000.
If income is more than 500,000 then tax is 54,000 plus 30% of the amount greater than 500,000.
If that's right, then the formula to use is: | {"url":"http://www.computing.net/answers/office/how-to-make-tax-calculator-in-excel/11102.html","timestamp":"2014-04-17T01:08:22Z","content_type":null,"content_length":"39535","record_id":"<urn:uuid:50c10141-c62a-49d5-9150-cdcdb3b44f86>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00244-ip-10-147-4-33.ec2.internal.warc.gz"} |
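(The reply is cut off here in this copy of the thread. For reference, a nested-IF formula consistent with the brackets restated above, assuming the income is in cell A1, would be:

=IF(A1<160000,0,IF(A1<=300000,(A1-160000)*10%,IF(A1<=500000,14000+(A1-300000)*20%,54000+(A1-500000)*30%)))

The cell reference and the bracket reading are assumptions, not part of the original post.)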
3D Homing Missile
How can I make a homing missile? I've tried working with quaternions a bit, but without success.
Currently, my missile has a vector for its current location, a float for speed, and a quaternion for orientation. How can I gradually rotate the quaternion to lock onto a specific target, given the
aforementioned variables?
Also, how would I apply the quaternion to openGL?
Posts: 28
Joined: 2003.10
Two answers, in reverse order.
You'll need to convert the Quaternion of the missile's orientation to an OpenGL 4x4 rotation matrix that you can use when drawing the missile.
To turn the missile towards its target, you'll either need to calculate the minimum arc Quaternion that represents the angle between the direction vector of the missile's flight and the vector from
the missile's location to its target location, and apply a fraction of it to the missile's orientation each frame to turn the missile towards the target.
Or calculate the axis the missile has to turn about in order to face the target (the cross product of the missile's direction vector and the vector from the missile to the target), and turn the
missile a fraction of an arc about that axis each frame.
Code to calculate the Quaternions to perform each of these steps can be easily found. There's some in my craptacular source to Oolite (
), but I use a different algorithm for my homing missiles (it's more hokey - I wouldn't recommend it).
Posts: 832
Joined: 2002.09
One thing to remember - you only need to rotate on two axes, you can disregard the "spinning" axis. That should simplify the maths a bit.
Posts: 832
Joined: 2002.09
Actually, I think it does. I'm not very sure about it, though.
However, I stumbled upon another interesting way to go about it in the GameDev.net forums:
Take the missile's current heading vector, and the vector from the missile's position to the target's position. These two form a plane that intersects both ships. Take the cross product between the vectors. You now have the normal of that plane. If you rotate around that normal, you are rotating on a plane that contains the missile and target. That is, the problem of steering towards the target is reduced to one variable: the missile's angle on that plane! I haven't tried it, but it seems really cool and definitely worth investigating.
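A minimal C sketch of that cross-product approach (illustrative only; this is not code from the thread, and the vector helpers and turn-rate limit are assumptions):

#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 cross(vec3 a, vec3 b) {
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 scale(vec3 v, float s) { vec3 r = { v.x*s, v.y*s, v.z*s }; return r; }
static vec3 add(vec3 a, vec3 b) { vec3 r = { a.x+b.x, a.y+b.y, a.z+b.z }; return r; }
static vec3 normalize(vec3 v) { return scale(v, 1.0f / sqrtf(dot(v, v))); }

/* Rotate the missile's unit heading at most max_turn radians toward the target. */
vec3 steer(vec3 heading, vec3 missile_pos, vec3 target_pos, float max_turn)
{
    vec3 to_target = normalize(add(target_pos, scale(missile_pos, -1.0f)));
    vec3 axis = cross(heading, to_target);   /* normal of the plane containing both */
    float s = sqrtf(dot(axis, axis));        /* sin of the angle between them       */
    if (s < 1e-6f) return heading;           /* already (anti)aligned: nothing to do */
    float angle = atan2f(s, dot(heading, to_target));
    if (angle > max_turn) angle = max_turn;  /* limit the turn per frame            */
    axis = normalize(axis);
    /* Rodrigues' formula, simplified since axis is perpendicular to heading:
       h' = h*cos(angle) + (axis x h)*sin(angle) */
    return normalize(add(scale(heading, cosf(angle)),
                         scale(cross(axis, heading), sinf(angle))));
}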
computability and geometry
I am looking for a discussion on computability and algorithms in relation to geometric constructions.
Does anyone know if the subject has been treated from the viewpoint of elementary Euclidean geometry (e.g., ruler and compass constructibility)?
Thank you
computability-theory geometry
This paper of Pippenger's is often cited in this context, but I haven't read it myself: "Computational complexity in algebraic function fields," 1979. portal.acm.org/citation.cfm?id=
1382433.1382606 – Joseph O'Rourke Jul 5 '10 at 15:55
I once asked T. Y. Lam a related question, he was firm in saying there is no canonical form possible for numbers in the "constructible numbers," meaning the smallest field extension of the
rationals such that the square root of any positive element is also in the field. – Will Jagy Jul 5 '10 at 18:50
@Will Jagy: Do you know what Lam meant? Here's a weak version. Let $E$ be the set of expressions for constructible numbers (you can use rational numbers, arithmetic operations, and square roots; no division by zero or square roots of negative numbers), and let $e : E \to \mathbb{R}$ be the evaluation map. Then we want a computable function $f : E \to E$ such that for $\alpha,\beta \in E$, we have $e(\alpha) = e(\beta)$ if and only if $f(\alpha) = f(\beta)$, and $f(f(\alpha)) = f(\alpha)$. Such a function does exist, but I imagine Lam meant there's no canonical choice of canonical form? – Henry Cohn Sep 2 '11 at 13:00
I personally like this elegant but somewhat obscure (relatively) recent paper by Alekhnovich and Belov (MR1866477; Russian version is downloadable; it was written while Misha was still in Moscow). Enjoy!
Tarski's theorem on real-closed fields shows that there is a decision procedure to compute the truth or falsity of any first-order statement in the real closed field $\langle\mathbb{R},+,\cdot,\lt,0,1\rangle$, which includes via Cartesian geometry many of the usual concepts of Euclidean geometry and more. For example, this language is expressive enough to speak of circles,
lines, paraboloids, and so on, real algebraic equations in any finite dimension, the $n$-dimensional metric, concepts of bounded or unbounded solution sets and so on. Tarski's algorithm
provably determines the truth of any statement expressible in this language, even when these statements involve complex alternations of quantifiers (for every circle, there are three lines
such that for every parabola of a certain kind and so on...), which in other contexts often cause undecidability.

Unfortunately, the set of integers is not definable in this language (since this would immediately refute decidability by allowing the halting problem to be expressed), and it follows that
Tarski's theorem does not apply very well to questions about algorithms, which is the main focus of your question, since one seems to need the integers to express concepts of iterating a
procedure, a central consideration with algorithms.
Tarski's theorem doesn't address ruler and compass constructability, if that was the original question, since it allows for more general algebraic equations. – Peter Shor Jul 5 '10 at
Given this answer, I can recommend "Old and New Results in the Foundations of Elementary Plane Euclidean and Non-Euclidean Geometries" by Marvin Jay Greenberg, the M.A.A.'s American
Mathematical Monthly, March 2010, vol. 117, no. 3, pages 198-219; especially references to works of one Victor Pambuccian. If people cannot find it I have a pdf. I know from my own work
that one can prove something is constructible with compass and straightedge while having no clue of how to go about it in practice. – Will Jagy Jul 5 '10 at 22:33
Peter, it doesn't matter that Tarski's theorem allows for greater expressibility, since his decision algorithm works even for the weaker cases. For example, it seems to me that for any
fixed number $k$, the relation "$z$ is constructible by ruler-and-compass from $w$, $x$ and $y$ in $k$ steps or less" is expressible in the language of real-closed fields. So Tarski's
theorem provides a decision procedure for all such statements. One could easily also allow more complex construction methods, as long as they were algebraic. What does not seem to be
expressible are questions that quantify over $k$. – Joel David Hamkins Jul 6 '10 at 0:02
You might want to look at George Stiny's Shape: Talking about Seeing and Doing (MIT Press, 2006) and Dominic Widdows's Geometry and Meaning (CSLI Publications, 2004).
[FOM] Axiom of Choice and separation
K. P. Hart K.P.Hart at tudelft.nl
Thu Aug 16 07:06:13 EDT 2007
On Tue, Aug 14, 2007 at 08:56:30AM +0530, Saurav Bhaumik wrote:
> Dear Experts,
> 1. It is evident that if we assume Axiom of Choice, any linearly ordered
> space is Normal, not only that, it is monotonically normal. But in
> absence of Axiom of choice, however, a non-normal orderable space can be
> constructed.
> Consider the hierarchy: T_0<T_1<T_2<T_(2 1/2)<T_3<T_(3 1/2)<T_4 (or any
> denser separation hierarchy).
> Within ZF, how far can a general linear order might go?
> It is evidently T_2.
> Is it Completely Hausdorff? Is it regular?
> What if we assume that the order is dense? What if it is order complete?
> What if it is connected?
After some further searching:
author={L{\"a}uchli, H.},
title={Auswahlaxiom in der Algebra},
journal={Comment. Math. Helv.},
review={\MR{0143705 (26 \#1258)}},
(or http://www.ams.org/mathscinet-getitem?mr=143705 )
one finds a permutation model in which the rationals in in the interval [0,1]
form an ordered continuum that satisfies the T_4-axiom, yet every continuous
real-valued function defined on it is constant.
Therefore without sufficient choice Urysohn's Lemma need not hold and
linearly ordered topological spaces need not be completely regular.
Connectedness is the only stumbling block:
if no interval in the space is connected then there are sufficiently
many 0-1-valued functions to separate points as well as points and
closed sets so those linearly ordered spaces are completely Hausdorff
and completely regular.
KP Hart
E-MAIL: K.P.Hart at TUDelft.NL PAPER: Faculty EEMCS
PHONE: +31-15-2784572 TU Delft
FAX: +31-15-2787245 Postbus 5031
URL: http://fa.its.tudelft.nl/~hart 2600 GA Delft
the Netherlands
Characterizations of Inner Product Spaces
Dan Amir
Birkhäuser Verlag, Jan 1, 1986 - Mathematics - 200 pages
Introduction 1
Structure 3
Notation 4
26 other sections not shown
References to this book
The Volume of Convex Bodies and Banach Space Geometry
Gilles Pisier
Limited preview - 1999
Dirk Werner
No preview available - 2007
BlackBerry Forums Support Community
Originally Posted by
Are you talking specifically about WiFi or just being able to connect to the web? If you are just talking about WiFi then no, as mentioned you need a WiFi signal to receive. If there's no signal
there's no WiFi.
If on the other hand you mean connecting to the web regardless of how, then you can access the web through your service provider and use the mobile network to access it.
Oh, I mean connecting to the web then.
Because I just want to be able to check my e-mails (as I receive a lot of them) when I am away from home, etc.
Another thing... will I need to be on contract to do this? | {"url":"http://www.blackberryforums.com/general-8900-series-discussion-javelin/195633-couple-questions.html","timestamp":"2014-04-19T10:43:58Z","content_type":null,"content_length":"58892","record_id":"<urn:uuid:641b080b-ef98-4c3b-ae2c-68c94a9f4a6b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00476-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
Hello! How would you integrate 2^x / ln2? Thanks already!
note that: $\frac{1}{\ln(2)} = \frac{\ln(2)}{(\ln(2))^2}$; you can take the denominator (which is just a constant) outside the integral, so: $\int \frac{2^x}{\ln(2)}\ dx = \frac{1}{(\ln(2))^2}\int \ln(2)\,2^x\ dx = \frac{1}{(\ln(2))^2} \int e^{\ln(2)x} \ln(2)\ dx$. If you use the substitution $u = \ln(2)x$, what would $du$ be?
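(The thread ends here in this copy, so to complete the hint: with $u = \ln(2)x$ we get $du = \ln(2)\,dx$, and the integral becomes $\frac{1}{(\ln 2)^2}\int e^u\ du = \frac{e^{\ln(2)x}}{(\ln 2)^2} + C = \frac{2^x}{(\ln 2)^2} + C$.)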
Maxwell's Equations, Yet Again
Suppose we are given a system of classical charges that oscillate harmonically with time. Note that, as before, this can be viewed as the special case of the Fourier transform at a particular
frequency of a general time dependent distribution; however, this is a very involved issue that we will examine in detail later in the semester.
The form of the charge distribution we will study for the next few weeks is:

$\rho(\vec{x},t) = \rho(\vec{x})\,e^{-i\omega t}$
The spatial distribution is essentially ``arbitrary''. Actually, we want it to have compact support which just means that it doesn't extend to infinity in any direction. Later we will also want it to
be small with respect to a wavelength.
Robert G. Brown 2013-01-04 | {"url":"http://www.phy.duke.edu/~rgb/Class/Electrodynamics/Electrodynamics/node74.html","timestamp":"2014-04-16T11:45:21Z","content_type":null,"content_length":"4582","record_id":"<urn:uuid:474d5882-0145-49bf-be1f-1f417591ea4b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bounded operators and axiom of choice
In the article below, it is shown that the proposition "Every linear operator defined on a whole Hilbert space is bounded" is consistent with the axioms of ZF + a weakened version of the axiom of
choice (called DC).
So, if I want to prove that an operator A defined on a Hilbert space H is bounded, is it enough to just check that the axiom of choice was not used to define it?
And a related question: To show that a set is measurable, is it enough to check that the definition of this set didn't use the axiom of choice? (Similarly, the statement "all subsets of R are Lebesgue measurable" is consistent with ZF without the axiom of choice.)
Link: http://www.ams.org/journals/bull/1973-79-06/S0002-9904-1973-13399-3/S0002-9904-1973-13399-3.pdf
axiom-of-choice measure-theory unbounded-operators
Do note that it is often hard to escape the axiom of choice. E.g. almost every time you say "complete $A$ to a basis" you use the axiom of choice. – Asaf Karagila Jun 23 '12 at 11:51
In the case of measurable sets, something like your statement is true, but your particular criteria is not literally correct. It is an over-simplification, since once one rises to a certain
level of complexity, the issue isn't whether the definition of your set "uses the axiom of choice", but rather the issue is a matter of the properties of the ambient set theoretic universe
in which you are defining your set.
The basic fact is that yes, sets of reals defined by particularly simple definitions are indeed automatically measurable. (One should assume at least a small amount of choice, say the DC
principle, in order to have a satisfactory theory of Lebesgue measure.)
To illustrate the lowest level of this, you probably know that every Borel set is Lebesgue measurable. This is an instance of the definability claim, because the Borel sets are precisely
those sets that have complexity $\Delta^1_1$ in the descriptive set-theoretic hierarchy, which means that they can be defined by a property involving quantification over finite objects plus
a single universal quantifier over the reals, and equivalently by a definition with a single existential quantifier over the reals.
More generally, the $\Sigma^1_1$ sets are the projections of Borel sets, and these are also measurable. These are the sets that can be defined with a single existential real quantifier,
followed by any number of quantifiers over a fixed countable realm. If one has Martin's axiom plus $\neg$CH, then this rises to the $\Sigma^1_2$ sets.
Above this, one has the very interesting phenomenon that the assertions that classes of sets are Lebesgue measurable begin to have large cardinal consistency strength. For example, Solovay proved that the assertion that every $\Sigma^1_3$ set is measurable---these are the sets definable with the quantifier structure $\exists x\forall y\exists z\ \varphi(\cdot,x,y,z)$, where $\varphi$ uses only arithmetic quantifiers---is equiconsistent with the existence of an inaccessible cardinal.
The existence of much stronger large cardinals has outright consequences for the measurability of projective sets. For example, the principle known as Projective Determinacy, which is
equiconsistent with and implied by the existence of infinitely many Woodin cardinals, implies that every projective set is determined. Thus, under PD, any set of reals that can be defined by
quantifying only over the reals and over the natural numbers will automatically be Lebesgue measurable.
Some of these issues were considered in other questions here on mathoverflow, such as ZFC plus every analytical set is measurable?.
Finally, to show that the particular criterion you mention is not correct, observe that by forcing we can make any particular set of reals be the reals that are coded into a block of the GCH
pattern on the cardinals $\aleph_\alpha$. Thus, I can define a set of reals by saying "the set of reals whose binary expansion occurs as a part of the GCH pattern". This definition does not
use the axiom of choice, but it can be used to define any given set, which may or may not be measurable. There are numerous other ways to see a similar effect.
A similar observation applies in the bounded operator case---in principle any set can be defined by reference to the GCH pattern, and these definitions do not in principle refer to the axiom of choice.
This is the standard answer set theorists give to this type of question, and it is useful and interesting. Intuitively, it is a bit unsatisfying since one has the sense that one is never
at risk of defining up an unmeasurable set by accident, but it is not always obvious that particular definitions are $\Sigma^1_1$. For that matter, one might even define a set by
quantifying over sets of reals, but then there is no guarantee of measurability even given large cardinals, right? Also, just out of curiosity, is there any reason to think sets
constructed using DC will sometimes be measurable? – Marian Jun 23 '12 at 11:46
I agree that the general lesson of these results is that if you define a set projectively (quantifying only over reals and integers), then indeed we should expect it to be measurable (and
the details of this involve large cardinals). And with practice, I think one does get a good sense of the complexity of the definitions one encounters. Descriptive set theorists are
typically quite quick to say a particular definition is $\Sigma^1_2$ or $\Pi^1_4$ or whatever, and one gets the knack of it. As for your final question, to my way of thinking, definitions
do not use axioms at all, ... – Joel David Hamkins Jun 23 '12 at 13:54
..but rather we use axioms when proving a theorem. So perhaps you have in mind a situation where you prove that there is a certain kind of set, and your proof uses DC but not AC. In this
case, for the reasons I explain in the last paragraphs of my answer, we cannot deduce that it must or must not be measurable. – Joel David Hamkins Jun 23 '12 at 13:56
I don't agree with that. When you "use the axiom of choice" in a construction, what you are really doing is providing a construction relative to a particular choice function (or well-ordering or whatever). So you aren't defining a specific set, but rather defining a function from the choice-functions to the sets. The axiom of choice is needed in order to know that this is not a vacuous construction. It is consistent with ZFC that there are projectively definable well-orderings of $\mathbb{R}$, so in those universes, even when you use AC in this way, the resulting set is still simply definable. – Joel David Hamkins Jun 23 '12 at 14:19
Joel, you mentioned defining a set of reals in terms of the GCH pattern. If what you mean by GCH pattern is "for which $\alpha$ is $2^{\aleph_\alpha}=\aleph_{\alpha+1}$," then a good deal of choice is needed to ensure that this makes sense and does what you want. Even if this is not what you meant by GCH pattern, it would seem to be a reasonable definition, but only in the presence of a good deal of choice. The bottom line is that uses of AC in defining sets of reals don't necessarily look like what you described in your comment (though they very often do). – Andreas Blass Jun 23 '12 at 16:19
Joel has explained very well the situation in ZF+DC and the effect of large cardinals. Let me add that a consistency result due to Solovay comes very close to achieving what the question
hoped for, but (because it's a consistency result) only in certain models of set theory. Solovay's result says that the following theory is consistent (relative to ZFC plus an inaccessible
cardinal): ZFC plus Lebesgue measurability of all sets of reals that are definable (in the language of set theory) with a countable sequence of ordinals as a parameter. (Note that the
permitted parameters include reals and, via coding, countable sequences of reals.) I believe that this sort of definability is extensive enough to cover anything likely to arise in analysis
(when no set theorists are involved).

I also believe that the analogous result holds, in the same model, for the part of the question about boundedness of Hilbert space operators, but I have not thought enough about this to make
any guarantees.
Andreas, automatic continuity holds in Solovay's model for Banach spaces and therefore for Hilbert spaces too. – Asaf Karagila Jun 23 '12 at 17:18
if I want to prove that an operator A defined on a Hilbert space H is bounded, is it enough to just check that the axiom of choice was not used to define it?
In a certain sense, the answer is "yes", taking into account the peculiarities that Joel notes. But if you actually have written down the definition of your operator on an explicit Banach space: in every known case it will be easy to simply apply the closed graph theorem to conclude continuity. Far easier than doing the meticulous checking that the Axiom of Choice has been avoided.
If I recall correctly the common way of proving the closed graph theorem is through the Baire category theorem (in one way or another) which is equivalent to DC itself. Of course the
question assumes DC, but it is worth mentioning that methinks. – Asaf Karagila Jun 23 '12 at 14:19
OK, need more help. What is 3/8 + 5/7? Is it 8/15?
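(No reply survives in this copy of the thread. For the record, not from the original: you need a common denominator before adding, so 3/8 + 5/7 = 21/56 + 40/56 = 61/56. It is not 8/15; that comes from adding the numerators and denominators separately, which is not how fraction addition works.)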
The X-ray Spectrum For A Typical Metal Is Shown ... | Chegg.com
The X-ray spectrum for a typical metal is shown in Figure 31-22. Find the approximate wavelength of Kβ X-rays emitted by chromium. (Hint: An electron in the M shell is shielded from the nucleus by
the single electron in the K shell, plus all the electrons in the L shell.)
Z = atomic number of chromium (24)
For E_K, n = 1; for E_M, n = 3.
Used E = -13.6 eV (Z-1)^2 / n^2.
Found E_K and E_M, used these two values to find ΔE, converted ΔE to J, then plugged everything into the formula λ = hc/ΔE.
Came up with -1.94e-16 and that was wrong.
Where did I go wrong? | {"url":"http://www.chegg.com/homework-help/questions-and-answers/x-ray-spectrum-typical-metal-shown-figure-31-22-find-approximate-wavelength-k-x-rays-emitt-q1339164","timestamp":"2014-04-16T12:03:49Z","content_type":null,"content_length":"21639","record_id":"<urn:uuid:09dbb6ea-15fe-41a3-849e-8045ff63e5cb>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00029-ip-10-147-4-33.ec2.internal.warc.gz"} |
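(No answer survives in this copy of the page. A plausible resolution, not from the original, following the problem's own hint: the screening constant should differ between the two levels. The K level sees one screening electron, so E_K = -13.6 eV (Z-1)^2/1^2, but the M-shell electron is screened by 1 K electron plus 8 L electrons, so E_M = -13.6 eV (Z-9)^2/3^2. For chromium, Z = 24: E_K ≈ -13.6(23)^2 ≈ -7194 eV and E_M = -13.6(15)^2/9 = -340 eV, giving ΔE ≈ 6854 eV and λ = hc/ΔE ≈ (1240 eV·nm)/(6854 eV) ≈ 0.18 nm. Using (Z-1)^2 for both levels, and taking ΔE with the wrong sign, produces a negative result like the one above.)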
Repetitive Intravenous Dosing
In repetitive dosing, the plasma concentration at the end of the first dosing interval is given by:
(1) $C_{p1}^{T} = C_{p1}^{0}\,e^{-k_{el}T}$

Immediately after the second dose is injected, the plasma concentration will be

(2) $C_{p2}^{0} = C_{p1}^{T} + C_{p1}^{0} = C_{p1}^{0}\,e^{-k_{el}T} + C_{p1}^{0}$

and so on.

We then need to define a certain parameter R, which is the fraction of the initial plasma concentration that remains at the end of any dosing interval. R is given as

(3) $R = e^{-k_{el}T} = 10^{-k_{el}T/2.30}$

Equations (1) and (2) are simplified into

$C_{p1}^{T} = C_{p1}^{0}R$

for the plasma concentration at the end of the first dosing interval, and

$C_{p2}^{0} = C_{p1}^{0}R + C_{p1}^{0}$

for the plasma concentration at the beginning of the second dosing interval.
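The recurrence is easy to iterate numerically. A small C sketch (an illustration added here, not from the original page; the dose, rate constant, and interval are assumed example values):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double c0  = 10.0;            /* concentration added by each IV dose      */
    double kel = 0.1;             /* elimination rate constant k_el (1/h)     */
    double tau = 8.0;             /* dosing interval T (h)                    */
    double r   = exp(-kel * tau); /* R: fraction remaining after one interval */
    double peak = 0.0;
    int n;

    for (n = 1; n <= 10; n++) {
        peak = peak * r + c0;     /* previous trough plus the new dose */
        printf("dose %2d: peak %7.3f  trough %7.3f\n", n, peak, peak * r);
    }
    /* The peaks approach the steady-state value c0 / (1 - R). */
    printf("steady-state peak: %7.3f\n", c0 / (1.0 - r));
    return 0;
}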
Find the number of permutations in the word "fission". In how many ways can a student arrange 6 textbooks on a locker shelf that can hold 4 books at a time? Find the number of permutations of the first 8 letters of the alphabet taking 5 letters at a time.
For number 1, use the formula P = n!/(q1!·q2!·...), where n is the total number of letters and q1, q2, ... are the counts of the repeated letters. What do you get?
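(The rest of the thread is missing from this copy. For reference, worked values consistent with those formulas, not from the original replies: "fission" has 7 letters with i and s each appearing twice, so 7!/(2!·2!) = 5040/4 = 1260 arrangements; the bookshelf problem is P(6,4) = 6·5·4·3 = 360; and the alphabet problem is P(8,5) = 8·7·6·5·4 = 6720.)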
Example of an algebra finite over a commutative subalgebra with infinite dimensional simple modules
Let $A$ be an algebra over an algebraically closed field $k.$ Recall that if $A$ is a finitely generated module over its center, and if its center is a finitely generated algebra over $k,$ then by
Schur's lemma all simple $A$-modules are finite dimensional over $k.$
Motivated by the above, I would like an example of a $k$-algebra $A,$ such that:
1) $A$ has a simple module of infinite dimension over $k,$
2) $A$ contains a commutative finitely generated subalgebra over which $A$ is a finitely generated left and right module.
Thanks in advance.
Is this a homework problem? – S. Carnahan♦ Jun 19 '10 at 23:36
No, but you could use it as a homework problem if you wish. – Bedini Jun 20 '10 at 0:25
Is there a reference for the statement in the first paragraph? And are there familiar (noncommutative) infinite dimensional algebras meeting these conditions? The motivation here needs some
reinforcement. For me the interesting examples are universal enveloping algebras of finite dimensional Lie algebras in prime characteristic, where Schur's lemma isn't enough to prove finite
dimensionality of all simple modules. (Ditto for quantized enveloping algebras or function algebras at a root of unity.) – Jim Humphreys Jun 20 '10 at 12:51
@Jim: I don't know the exact reference, but one could prove it as follows. Let V be a simple A-module. Without loss of generality we may assume that the annihilator of V is 0. Let Z denote the
center of A. It follows that V is f.g. Z-module and any nonzero element of Z acts invertibly on V (Schur's lemma), so Nakayama's lemma implies that Z is a field. But by the assumption Z is a
finitely generated algebra over an algebraically closed field k. Therefore Z=k and V is finite over k. – Bedini Jun 20 '10 at 20:53
I can't follow the last steps in your sketch and would prefer a reference. A textbook version I recall (following Jacobson's original line of proof) treats only universal enveloping algebras over
a field of prime characteristic, combining Schur's Lemma with a generalized version of Nakayama's Lemma and the Hilbert Nullstellensatz. Is there a simpler version? (And other natural examples
besides enveloping algebras where the same hypotheses are satisfied?) – Jim Humphreys Jun 20 '10 at 22:46
1 Answer
Doc, this is a stinker. Your condition (2) forces your algebra to be finitely generated PI, and every little hare knows that simple modules over such algebras are finite-dimensional. See 13.4.9 and 13.10.3 of McConnell-Robson...
Very good! Thanks. – Bedini Jun 21 '10 at 17:37
5.19: Pictographs Extras
Created by: CK-12
Extras for Experts - Pictographs – Interpret Pictographs
Try these. Tell how you figured out the differences.
1. 9; There are 3 more symbols after Kyle's name than after Jan's name. Each symbol stands for 3 board games, so Kyle has $3 \times 3 = 9$ more board games.
2. 20; There are 4 more symbols after Olya's name than after Nat's name. Each symbol stands for 5 puzzles, so Olya has $4 \times 5 = 20$ more puzzles.
3. 40; There are 4 fewer symbols after Vinny's name than after Toni's name. Each symbol stands for 10 blocks, so Vinny has $4 \times 10 = 40$ fewer blocks.
4. 8; There are 2 more symbols after Bea's name than after Dawn's name. Each symbol stands for 4 toy cars, so Bea has $2 \times 4 = 8$ more toy cars.
5. 6; There are 3 fewer symbols after Sue's name than after Tara's name. Each symbol stands for 2 golf balls, so Sue has $3 \times 2 = 6$ fewer golf balls.
6. 60; There are 3 more symbols after Hap's name than after Gus's name. Each symbol stands for 20 marbles, so Hap has $3 \times 20 = 60$ more marbles.
Find an Algebra Tutor
...I am also a native speaker of French and Spanish, which I tutor on a regular basis. Finally, I have a Master of Science in Computer Sciences. I am a hands-on facilitator. In math I never solve the problem outright; instead, I give a tip and let the student think about the problem.
14 Subjects: including algebra 1, algebra 2, Spanish, physics
...I work comfortably with students in grades 4-12 and college students who are in maths up to calculus. I can provide substantial references for parents of the students I have tutored as well as
from faculty from the schools I have worked in. I am able to assist in basic Spanish courses as well as grade school subjects such as science.
24 Subjects: including algebra 2, algebra 1, calculus, geometry
...I scored in the 99th percentile on the GRE in Quantitative Reasoning (perfect 170,) and the 96th percentile in Verbal (166). I am a successful tutor because I have a strong proficiency in the
subject material I teach and a patient and creative approach that makes any subject simple to understand...
21 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have interned as a guidance counselor in a middle school and high school, utilizing and implementing techniques for students at all educational levels. These techniques include flashcards,
games, acronyms, mnemonics, time management and organization. With a BA in Psychology, and a Masters Degree in Education, I have learned extensively about ADD/ADHD.
14 Subjects: including algebra 1, algebra 2, reading, study skills
...I received a perfect score of 36 on the ACT Reading section. My work in AP English Language and Composition and AP English Literature and Composition have honed my writing abilities
tremendously. I received a 5 on the AP English Language and Composition exam, and I have yet to receive my score on the AP English Literature and Composition.
43 Subjects: including algebra 1, algebra 2, English, writing
Population growth
Quantitative concepts: big numbers, exponential growth and decay
Population growth and resource depletion
by Jennifer M. Wenner, Geology Department, University of Wisconsin-Oshkosh
Essential Concepts
There are 5 main concepts that our students struggle with when learning about population growth and the relationship of population to geological resource use:
1. overpopulation is a leading environmental problem,
2. exponential population growth and development leads to faster depletion of resources,
3. population grows exponentially,
4. why population prediction is difficult,
5. population is not evenly distributed throughout the world.
A leading environmental problem: Overpopulation
Students do not understand that overpopulation is the cause of many other environmental problems. To help students understand this, one of my colleagues asks her students to list three important
local and global environmental issues as part of a survey on the first day of class. During the following lecture, she presents overpopulation as the top environmental problem:
It may surprise many of you to find out that overpopulation is a leading global environmental problem. Remember on the first day of class, I asked you to list three important global environmental
problems. Here are the results of those surveys:
1. Pollution (unspecified):14.7%
2. Global warming:14.5%
3. Air pollution:13.5%
4. Habitat destruction:13.1%
5. Resource depletion/degradation:11.8%
6. Population growth/Overpopulation:7.9%
7. Natural disasters:6.2%
8. Water pollution:6.6%
9. fossil fuels (oil spills/ANWR):6.0%
10. Waste management:3.5%
11. Miscellaneous (famine, poverty, ignorance, etc.): 2.3%
compiled from Dr. Maureen Muldoon's Environmental Geology course, Spring 2005, UW-Oshkosh
How many of these problems are the direct or indirect result of overpopulation? Would we have such a problem with the top three -- pollution, global warming and habitat -- if world population was not
so large? Other than some of the natural disasters (and even those are arguable), most of these other environmental problems are due to overpopulation.
Lifestyle affects resource use
The characterization of overpopulation as the cause of many environmental problems may be an oversimplification. Consumption of natural resources also plays an important role in straining the
environment. On a global scale, it is probably pretty intuitive to students that the presence of more people in the world causes a bigger strain on natural resources. What may not be intuitive is the
concept of sustainability. What does sustainability mean?
Friends of the Earth define sustainability as "the simple principle of taking from the earth only what it can provide indefinitely, thus leaving future generations no less than we have access to
ourselves." Many other organizations define it differently; however, the crux of the definition is the same. Sustainability involves living within the limits of the resources of the Earth,
understanding connections among economy, society, and environment, and equitable distribution of resources and opportunities.
It is the last part of the definition that joins population growth, particularly in developed countries, and resource use. Developed countries, in general, have and use more of the Earth's resources.
Population growth in developed countries puts a greater strain on global resources and the environment than growth in less developed nations. For example, in 1997, the U.S. generated 27.5% of the
world's total CO₂ emissions; more than five times that of India (5% of the world's total), a country with 4-5 times the population of the U.S. (Texas A&M's LABB). In fact, the way of life in the United States, on average, requires approximately 5 times the resources available on Earth today (Earthday Network).
To emphasize the disparate effects of population and lifestyle in developed vs. undeveloped countries, have your students complete the "Ecological Footprint quiz" from the Earthday Network. This quiz shows the participants how many "Earths" would be needed if everyone lived the way that they do. It is likely that students in the United States will find that they need approximately 5
planets to sustain their lifestyles! It may surprise them to learn this. If you want to reinforce (or contrast) the impact of undeveloped nations on resources, have your students take the quiz for an
undeveloped nation. You may wish to tell them the choices to make or you may want them to make decisions about how they think people in that country live. The results may shock them.
The above makes developed nations out to be the bad guys but that is not entirely true. Undeveloped countries with large (and growing) populations also put a strain on the local environment and the
limited resources that they have. Countries that struggle to meet growing demands for food, fresh water, timber, fiber and fuel can alter the fragile ecosystems in their area, putting a great strain
on the limited resources that they have to draw from (ICTSD.org).
More people = More babies
Students may have a hard time understanding that population growth is controlled not only by birth and death rates but also by the present population. The mathematics of exponential growth govern the prediction of population growth. In some cases, you may want to point out that students may have heard of exponential growth in other contexts, such as compound interest or the spread of viral disease. The rate of population growth at any given time can be written:
dN/dt = rN
• r is the rate of natural increase and is usually expressed as a percentage (birth rate - death rate),
• t is a stated interval of time, and
• N is the number of individuals in the population at a given instant.
The equation above is a differential equation and may not be appropriate for some introductory courses -- but most students in entry-level courses can handle the algebraic solution presented below.
The algebraic solution to this differential equation is
N = N₀ e^(rt)
• N₀ is the starting population,
• N is the population after a certain time, t, has elapsed,
• r is the rate of natural increase expressed as a percentage (birth rate - death rate), and
• e is the constant 2.71828... (the base of natural logarithms).
A plot of this equation is a curve that rises ever more steeply with time: population grows exponentially, if the rate of natural increase (r) doesn't change. The variable r is controlled by human behavior, as described below.
Essential to understanding the mathematics of population growth is the concept of doubling time. Doubling time is the time it takes for population to double and it is related to the rate of growth.
When the population doubles, N = 2N₀. Substituting into N = N₀ e^(rt) gives 2 = e^(rt), so ln 2 = r·t and the equation becomes
t = ln 2/r ≈ 0.69/r
where r is the rate and t is the doubling time.
In many ways, it is similar to half-life. But instead of the time it takes for half the isotopes to decay, it is the time it takes for a known quantity to double.
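A short numeric illustration of both relations (the 2% growth rate and the 1000-person starting population below are assumed example values, not part of the original activity):

% Exponential growth N = N0*exp(r*t) and the doubling time t2 = ln(2)/r.
N0 = 1000;            % starting population (assumed example)
r  = 0.02;            % rate of natural increase, 2% per year (assumed)
t  = 0:100;           % years

N  = N0 * exp(r*t);   % population over time
t2 = log(2)/r;        % doubling time, about 0.69/r = 34.7 years

plot(t, N); xlabel('years'); ylabel('population');
fprintf('Doubling time at r = %g%%: %.1f years\n', 100*r, t2);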
Population prediction models: Subject to change
Students (especially those in introductory classes) may have a difficult time understanding why predictions of population growth are difficult to make and constantly debated. To help them understand
the difficulty of prediction have them think about the complex variables that must be considered when predicting population growth.
It may be fairly obvious to students that calculation of the rate of population growth can be expressed in the following equation:
Birth rate (b)- death rate (d) = rate of natural increase (r)
Thus, population growth is directly related to:
• current population - the number of people today has implications for future population
• birth rate - this number is usually reported in number of births per 1,000 people per year and combined with the death rate influences the growth of population
• death rate - this number is usually reported in number of deaths per 1,000 people per year and combined with birth rate influences the growth of population
And it may be quite intuitive to students that b - d = r. However, students may not have considered the factors that can influence both birth and death rates.
Let's think about some of the factors that may modify the birth and death rates in a region (or in the world). Do you think these things are constant throughout time? What other "variables" could
change them?
• age structure of the population - the number of women of child bearing age affects the rate of population growth.
• total fertility rate - Total fertility rate (TFR) is the average number of children that each woman will have in her lifetime and affects the birth rate.
• health care - the quality and availability of health care in an area can affect both death rate (by increasing average life expectancy) and birth rate (babies are more likely to survive past
childhood). Access to immunizations, family planning and birth control are also important to the overall picture of population growth.
• education - Birth rates tend to fall in countries where the population has access to education
• jobs - Birth rates also fall off when unemployment is low.
• standard of living - Birth rates are lower where standards of living and quality of life are high. Unfortunately, standards of living are difficult to raise in areas where population growth is high; this creates a self-reinforcing cycle that is difficult for some countries to get out of.
• immigration/emigration - the number of people entering or leaving a country (area) changes N₀ directly, and thus changes population in a more complex way than by altering the birth rate or death rate.
• development and industrialization - these two factors alter population growth in complex ways. They can affect an area's income and, thus, its access to many of the factors listed above. Higher-income, more developed countries have lower birth and death rates.
• disease - in a given year (or even decade) epidemics of infectious diseases can increase death rate dramatically, particularly for a specific area. For example, the bubonic plague decimated
Europe in the 14th century - the population of Europe was cut nearly in half by 1400.
• war/political upheaval - War and political upheaval can also increase death rates.
• climate - Natural disasters such as drought or flooding can affect food resources and the population will be affected accordingly.
modified from UNESCO - World Bank
There are many more variables that can affect change in the population and its growth - have your students brainstorm about other factors that affect the rate and prediction of population growth.
UNESCO and World Bank have a website with a number of learning modules on population related topics.
Wide open spaces can be hard to find
The concept of population density is sometimes difficult for students to grasp. Population density can be calculated by dividing the total population of a city (or country) by its area.
Total population / area = population density
My students mostly come from small towns and cities in Northeast Wisconsin and may not comprehend that other places in the world are far more crowded than where they live. To give them a sense of
perspective, I try to give them a sense of what it is like to live in other places. For example, I tell them that Winnebago County (the county UW-Oshkosh is situated in) has a population density of 138 people/km². On the other hand, in 1990, Kowloon (a walled part of Hong Kong) had a population density of 1,924,563 people/km² (demographia.com)! Other places of interest include: Bombay (Mumbai), India, with 39,860 people/km²; Manhattan (New York City), United States, with 25,849 people/km²; London, England, with 4,700 people/km²; and Sydney, Australia, with about 2,500 people/km² (wikipedia.com). Estimates of population density by city vary considerably, but the general idea is that most small cities in the U.S. are not very densely populated.
I also use a story about a friend of mine who moved from China to the U.S. about 5 years ago. My friend Gong Yan moved to Atlanta from Wuhan, China, where he grew up and went to university. When he
got to Atlanta, he was very uncomfortable because he felt there was so much open space. In Wuhan, when he was in a public place, he was always surrounded by people - people bumping into him, people
talking to him, people streaming along the street. He would often go to a mall in Atlanta just to be around people. In contrast, many Americans become uncomfortable when in large crowds. A friend of
mine traveled to Japan and tells a story of standing in line at the airport with the Japanese gentleman behind her pressing against her while she strained not to touch the person in front of her
in line.
Culturally, we deal with population density problems by changing our concept of "personal space". In many parts of the U.S., we have the luxury of significant amounts of personal space; other
developing and highly urban places do not.
Examples and Exercises
Student Resources
• The UN Population division has a website about World Population Prospects that has downloadable data regarding many important population variables.
• Mark Francek at Central Michigan University provides links to several other sites with population data.
• The Population Reference Bureau has all sorts of great information on almost every country; birth rates, death rates, GDP, health, environment, etc.
TTTPPP's Puzzles
Re: TTTPPP's Puzzles
« Reply #30 on: August 25, 2007, 05:22:46 PM »
Re: TTTPPP's Puzzles
« Reply #31 on: August 28, 2007, 09:01:38 PM »
« Last Edit: August 28, 2007, 11:18:13 PM by Rene »
Re: TTTPPP's Puzzles
« Reply #32 on: August 29, 2007, 12:21:46 AM »
The "Under and Over" series were easier than I'd planned, so I've currently given up trying to come up with situations which require distinctly different up-pipe draws.
Well done Rene with MAAN9, I've been struggling to get a solution to fit in the window for quite a while now! I came up with one which would work if the play area was about 5 or 6 times wider! Also
nice work with MAAN16 - . I can't see a good approach to MAAN puzzles with less crates than 9.
Re: TTTPPP's Puzzles
« Reply #33 on: September 05, 2007, 02:46:11 PM »
Right, a new puzzle. Apologies for the fact that I can't solve it!
I came up with an idea of trying to implement a Rubik's cube-esque puzzle, which only allowed permutations of the crates. The spoiler below gives a description of how to use the machine (which I
wouldn't really consider part of the puzzle).
My progress so far:
Re: TTTPPP's Puzzles
« Reply #34 on: September 05, 2007, 09:56:44 PM »
I got further than TTTPPP did. I almost didn't fit what I have, and am not relishing handling the remaining logic in the remaining space.
EDIT: I'm even closer now. I just need to
EDIT: Sooo close!
« Last Edit: September 05, 2007, 11:25:52 PM by Bucky »
That is the most ingenious method of solving an impossible puzzle that I have ever seen.
Re: TTTPPP's Puzzles
« Reply #35 on: September 05, 2007, 11:33:27 PM »
« Last Edit: September 06, 2007, 01:50:16 AM by jnz »
Re: TTTPPP's Puzzles
« Reply #36 on: September 06, 2007, 10:53:29 AM »
jnz, that is quite brilliant! I've filled a couple of sheets of paper with diagrams and equalities, and was nowhere near that solution.
Bucky, I'm afraid I still don't understand quite what you're up to!
Re: TTTPPP's Puzzles
« Reply #37 on: September 06, 2007, 08:57:43 PM »
jnz, that is quite brilliant! I've filled a couple of sheets of paper with diagrams and equalities, and was nowhere near that solution.
Why thank you.
), and a Perl script (
Re: TTTPPP's Puzzles
« Reply #38 on: September 06, 2007, 09:22:11 PM »
Rubik's-on is a very original and well designed puzzle; it really reminds of Rubik's Cube. I liked it very much.
For Rubik's-on (
My strategy:
BTW: the Rubicon identifier for the puzzle is quite funny: "bagabug"
EDIT: just looked at jnz's solution. Pretty smart. It is faster than mine, partly because you
« Last Edit: September 07, 2007, 04:12:43 PM by Rene »
Re: TTTPPP's Puzzles
« Reply #39 on: September 07, 2007, 08:08:30 PM »
Rene: I'd originally thought about doing something similar to your solution but then I realized that I'd have to keep track of . That seemed complicated so I moved on to other ideas. You've managed
to handle that problem in a quite simple and compact manner though. Kudos!
Re: TTTPPP's Puzzles
« Reply #40 on: September 10, 2007, 11:11:09 AM »
Another nice solution from Rene as well! I enjoyed the challenge of making this puzzle, so I'm glad it was interesting.
Re: TTTPPP's Puzzles
« Reply #41 on: November 04, 2007, 08:48:52 PM »
for waiting for a fad to pass:
Yugioh duel; i play exodia, you lose. Nope, I play Rubicon contractor! *gets run over by dozer pushing a 4 crate*
Re: TTTPPP's Puzzles
« Reply #42 on: November 06, 2007, 07:37:35 AM »
for waiting for a fad to pass:
This fails to solve.
Doubling Chamber vs. Template Copier:
« Last Edit: November 06, 2007, 08:05:21 AM by Bucky »
That is the most ingenious method of solving an impossible puzzle that I have ever seen.
Re: TTTPPP's Puzzles
« Reply #43 on: January 02, 2008, 11:10:08 PM »
For Rubik's-on (
I've been working on this one for quite some time, on and off, as I have found time to spend on it. That was quite an interesting puzzle!
Now that I've completed my solution, I've looked at the spoilers and the other solutions to compare. I came up with the same general design as Rene, implemented differently, and it works like this:
I was also working on a
, which I'll finish up as I have time.
« Last Edit: January 03, 2008, 03:15:54 AM by jf »
Re: TTTPPP's Puzzles
« Reply #44 on: January 03, 2008, 09:17:49 PM »
Here is my alternate design for Rubik's-on (
This solution uses
. This was also quite difficult. I kept alternating between this design and my other design as I ran into problems with each. | {"url":"http://kevan.org/rubicon/forums/index.php?topic=281.30","timestamp":"2014-04-23T21:01:32Z","content_type":null,"content_length":"76100","record_id":"<urn:uuid:801ad8c0-eb78-4664-be48-ab5260e4fa74>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00158-ip-10-147-4-33.ec2.internal.warc.gz"} |
Air Temperature Effects On Muzzle Velocity
By Gustavo F. Ruiz
Long Range is all about Ballistics. Beyond that, no more than luck can be expected without it.
It’s a well established scientific fact that air temperature influences muzzle velocity, and under some field conditions that variation can be quite significant, in particular for taking a long range shot.
Recent tests showed a rate of change of about 2.5 to 4.0 feet/sec per 1°C (1.8°F) depending on how sensitive the load’s powder is to air temperature.
Just to give a basic perspective, a muzzle velocity variation of +/- 30 feet/sec can introduce a change in the trajectory’s path of about 1.0 MOA at 1000 yards, (+/- 0.5 MOA) and of course, there is
uncertainty that must be accounted for.
So, there is enough “statistical significance” to relate muzzle velocity changes to changes in air temperature, since there is enough statistical evidence that there is a variation, not implying that
the difference is necessarily large.
It’s important to realize that the focus of this analysis is on the very first shot, from a cold barrel. Then air temperature is the only meaningful and readily available parameter that any shooter
can easily take a reading of.
Powder temperature is the real and crucial factor that determines muzzle velocity, as it’s related to the Maximum Average Peak Pressure. From a cold barrel, it’s closely statistically correlated to air temperature.
Now, the problem is how can we estimate the predicted muzzle velocity (for a given system, comprised of a particular firearm and load) when facing those variations in air temperature…bearing in mind
that not all temperature values can be covered during the data collection process.
Mathematical support
In engineering applications, data collected from the field are usually discrete and the physical meanings (relationship among the observed variables) of the data are not always well recognized.
To estimate the outcomes and, eventually, to have a better understanding of the physical phenomenon, a more analytically controllable function that fits the field data is desirable as well as useful.
The mathematical process of finding such a fitting function is called “Data Regression”, also known as “Curve Fitting”.
On the other hand, the method of estimating the outcomes in between sampled data points is called “interpolation”, while the method of estimating the outcomes beyond the range covered by the existing
data is called “extrapolation”.
Data Regression is a vital part of statistics. It refers to techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one (or
more) independent variables.
For our purposes, the independent variable is the air temperature and the corresponding dependent variable is the muzzle velocity. This is the basic relationship we are interested in.
The goal of regression analysis is to determine the values of parameters for a function that cause the function to best fit a set of data observations that you provide by taking measurements of the
involved variables.
It’s also of interest to characterize the deviation of the dependent variable (muzzle velocity) around the regression function, which can be described by a probability distribution, especially in the case of a linear regression.
Most commonly, regression analysis estimates the conditional expectation of the dependent variable (Muzzle velocity) given the independent variable (air temperature). That is, the “standard value” of
the dependent variable when the independent variable(s) are held fixed.
Both the method and procedure presented here can focus on “quantiles” (points taken at regular intervals), or other location parameters of the conditional distribution of the dependent variable given
the independent variable. This is a very important aspect to take into consideration.
Regression analysis is widely used for prediction (including forecasting of time-series data). Under controlled circumstances, Regression analysis can be used to infer fundamental relationships
between the independent and dependent variables.
The Solution
A large number of both methods and techniques for carrying out regression analysis have been developed during the last 300 years, so we can hardly call this a “new” branch of technology in general
terms. However research continues as new challenges are tackled every day requiring novel approaches.
Essentially, a user gathers field data in the form of Known Data Points (KDPs), which are MV/Temp data pairs. And from that data, different methods will try to make a “best fit”. As can be expected,
the better a method fits the KDPs, the better and more trustworthy the predicted values will be.
Common methods such as Linear Regression and ordinary Least Squares Regression can provide fair results, if and only if, the gathered data shows a good response to a linear representation. In other
words, it correlates well with a “straight line”.
This linear approach is the most common in use, especially by some ballistics programs that incorporate a way to estimate muzzle velocity based on predefined changes of some independent variable (air
temperature is the usual one).
The first problem we find with a linear approach is that it rarely fits the KDP pairs, since a perfect correlation is very difficult to observe in the dataset (field data).
This means that if your log shows that at 65°F your measured MV is 3000 feet/sec, then a linear method will not yield that value at the same air temperature of 65°F.
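A minimal least-squares sketch of exactly this problem (the temperature/velocity pairs below are invented placeholders, not real chronograph data):

% Known Data Points (KDPs): air temperature [deg F] vs muzzle velocity [ft/s].
T  = [20 35 50 65 80 95];              % assumed example log entries
MV = [2880 2915 2960 3000 3035 3080];  % assumed example log entries

p = polyfit(T, MV, 1);                 % linear fit: MV = p(1)*T + p(2)

% The fitted line generally does NOT reproduce the KDPs exactly:
MV65 = polyval(p, 65);                 % interpolated prediction at 65 F
MV10 = polyval(p, 10);                 % extrapolated prediction at 10 F
fprintf('slope = %.2f ft/s per deg F\n', p(1));
fprintf('MV predicted at 65 F: %.1f ft/s (logged: 3000)\n', MV65);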
The Goal
Clearly, we need to define objective criteria to help us select the best regression method; only then can we decide which one best fits our field data.
One criterion is well understood by everyone dealing with regression: to find the function that matches our field data as closely as possible. In short, one that “correlates well”.
The second criterion is how good the methods are at predicting both interpolated and extrapolated values. Why? Because that’s where regression will show its value and potential to us shooters as a predictive tool.
Interpolating is a must, since it’s impossible to log every possible intermediate value within the limits of our data. Extrapolating is, by the same token, an essential capability, because we need to predict the dependent variable (muzzle velocity) at temperatures outside of our log limits.
In order to understand and visualize the strengths and weakness of the used methods, it’s interesting to see how they execute under two clearly different and alternative scenarios.
How much do stat-raising moves power them up?
Stats can be raised or lowered by up to 6 stages each. Some moves raise a stat by one stage, but ones where it says "sharply increased" raise it by two stages.
The raising stages multiply the stat by x1.5, x2.0, x2.5, x3.0, x3.5 and x4.0. So, for example, using Swords Dance, which sharply raises Attack, will double your Attack stat the first time; the second time it will be 3 times the original value.
The same stages apply for lowering stats; you just divide by those numbers instead. Two stages down and the stat is half of what it was.
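Expressed as a formula (inferred from the stages listed above, not taken from the game's own code): n stages up multiplies the stat by (2+n)/2, and n stages down multiplies it by 2/(2+n). A quick sketch:

% Stat multiplier for a stage s between -6 and 6 (inferred formula).
mult = @(s) (s >= 0).*((2 + s)./2) + (s < 0).*(2./(2 - s));

mult(2)   % Swords Dance once:  x2.0
mult(4)   % Swords Dance twice: x3.0
mult(-2)  % two stages down:    x0.5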
Wolfram Demonstrations Project
Gas Absorption Computed Using the Gauss-Seidel Method
Consider a tray absorption column used to remove an impurity from a gas stream. A pure solvent is used for this absorption operation. The solvent molar flow rate and the gas molar flow rate are considered constant (i.e., the dilute system hypothesis holds). You can set the number of equilibrium stages, the value of the slope of the equilibrium line (m), the solvent-to-gas molar flow rate ratio (L/G), and the mole fraction of the impurity in the gas fed to the absorption column. This Demonstration computes the McCabe–Thiele diagram. The horizontal green lines represent the theoretical equilibrium stages in the absorption column. The percent removal of the impurity for the specified conditions is displayed on the plot.
The approach used in this Demonstration is based on the solution of a system of linear equations for the liquid compositions x_j on the stages j = 1, 2, ..., N, where N is the number of stages of the absorption column. These stage-by-stage balances can be written in matrix form as A·x = b, with A a banded (tridiagonal) matrix.
Taking advantage of the banded form of the matrix, one can use the Gauss–Seidel iterative technique to get the liquid compositions in all stages.
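A sketch of the computation (the dilute stage balance (L/G)·x_(j-1) − (L/G + m)·x_j + m·x_(j+1) = 0, with pure solvent entering the top stage and gas of composition y_in entering the bottom stage N, is a standard formulation assumed here, as are all numeric values):

% Tridiagonal stage balances A*x = b for a dilute tray absorber,
% solved by Gauss-Seidel sweeps (assumed example data).
N = 6;  m = 0.8;  LG = 1.2;  yin = 0.01;   % stages, slope, L/G, gas feed

A = zeros(N);  b = zeros(N,1);
for j = 1:N
    A(j,j) = -(LG + m);                    % diagonal term
    if j > 1, A(j,j-1) = LG; end           % liquid from the stage above
    if j < N, A(j,j+1) = m;  end           % gas from the stage below
end
b(N) = -yin;                               % gas feed enters stage N

x = zeros(N,1);                            % liquid mole fractions
for it = 1:200                             % Gauss-Seidel iterations
    for j = 1:N
        s = A(j,:)*x - A(j,j)*x(j);        % off-diagonal contribution
        x(j) = (b(j) - s) / A(j,j);        % sweep update
    end
end
removal = 1 - m*x(1)/yin;                  % gas leaves the top with y1 = m*x1
fprintf('Percent removal: %.1f %%\n', 100*removal);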
[1] P. C. Wankat,
Separation Process Engineering: Includes Mass Transfer Analysis
, 3rd ed., Upper Saddle River, NJ: Pearson, 2012. | {"url":"http://demonstrations.wolfram.com/GasAbsorptionComputedUsingTheGaussSeidelMethod/","timestamp":"2014-04-21T15:04:40Z","content_type":null,"content_length":"44387","record_id":"<urn:uuid:d6e9b761-b72f-4325-8b9f-ae34513175d1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00552-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATLAB/Simulink-Based Grid Power Inverter for Renewable Energy Sources Integration
1. Introduction
The main objective of the chapter is the development of technological knowledge, based on the Matlab/Simulink programming language, related to grid-connected power systems for energy production using Renewable Energy Sources (RES), as clean and efficient sources that meet both the environmental requirements and the technical necessities of grid-connected power inverters. Another objective is to promote knowledge regarding RES; consequently, it is necessary to contribute to the development of technologies that allow the integration of RES in a power inverter with high energy quality and security. By using these energy systems, the user is not only a consumer, but also a producer of energy. This fact will have a direct impact from the technical, economic and social points of view, and it will contribute to increasing the quality of life.
The chapter fits into the general frame of the EU energy policies by pursuing the global objectives of reducing the impact upon the environment and promoting RES for energy production. At the same time, the chapter is strategically oriented towards compatibility with the priority requirements of several European programmes: the wide-spread implementation of distributed energy sources, of energy storage technologies and of grid-connected systems.
The chapter strategy follows two directions: the first is the development of knowledge (a study and implementation of a high-performance grid power inverter; fuel cell technology as a RES; control methods; specific modelling and simulation methods); the second focuses upon applicative research (a real-time implementation with the dSPACE platform is provided).
The interdisciplinarity of the chapter consists of using specific knowledge from the fields of: energy conversion, power converters, Matlab/Simulink simulation software, real time implementation
based on dSPACE platform, electrotechnics, and advanced control techniques.
2. The grid power converter
The increased power demand, the depletion of the fossil fuel resources and the growth of the environmental pollution have led the world to think seriously of other alternative sources of energy:
solar, wind, biogas/biomass, tidal, geothermal, fuel cell, hydrogen energy, gas micro turbines and small hydropower farms.
The number of distributed generation (DG) units, including both renewable and nonrenewable sources, for small rural communities not connected to the grid and for small power resources (up to 1000 kW)
connected to the utility network has grown in recent years. There has been an increase in the number of sources that are natural DC sources, for instance fuel cells and photovoltaic arrays, or whose AC frequency is either not constant or is much higher than the grid frequency, for instance micro gas turbines. These generators necessarily require a DC/AC converter to be connected to the grid. Although some generators can be connected directly to the electric power grid, such as wind-driven asynchronous induction generators, there is a trend to adopt power-electronics-based interfaces which convert the power first to DC and then use an inverter to deliver the power to the 50 Hz AC grid.
At the international level, SMA Technologies AG (www.sma.de) promotes the innovative technology based on the renewable sources. The following results can be mentioned: the stand-alone or grid
connected systems by using either a single type of source (Sunny Boy 5000 Multi-String inverter based on the modular concept, Hydro-Boy and Windy Boy) or combined (Sunny Island) including the
interconnection of wind turbines, photovoltaics, micro-hydro and diesel generators. It is well known that the inverter is the key to increasing system efficiency. In this respect, Sunways (www.sunways.de) adopted the HERIC concept (from the Fraunhofer Institute for Solar Energy Systems), using a transformerless inverter and obtaining a high inverter efficiency of 97.33% at low power (www.ise.fraunhofer.de). The Master-Slave and Team concepts are embedded in SunnyTeam and Fronius inverters in order to increase the efficiency under partial-load conditions.
At world level, the implementation of energetic policies (with respect to renewable sources) has been carried out by performing systems based on a single renewable source. There are such examples in
countries all over the world: in Europe, Dewind, Vestas, Enercon, Fronius International GmbH, SMA Technologies AG; Renco SpA, Ansaldo Fuel Cells SpA; in North America-Nyserda, Beacon Power, Magnetek
Inc., Sustainable Energy Technology, Logan Energy Corp., IdaTech; Australia-Conergy Pty Ltd, Rainbow Power Company Ltd; and Asia-Nitol Solar, Shenzen Xinhonghua Solar-Energy Co Ltd).
In the EU, the implementation of energy policies is based upon a legal document, Directive 2001/77/EC, regarding the promotion of electricity produced from renewable sources on the single energy market. The objectives of the Directive provide that, by 2020, a contribution of 20% of the total energy consumption shall be covered by energy produced from renewable sources. The monitoring of the Directive's implementation is managed by the EU Directorate-General for Energy, which presents periodical reports on European research and development. Considering these reports, under the conditions of implementing the DER concept (Distributed/Decentralized Energy Resources), it is obvious that future research activities will be based upon hybrid systems (wind-photovoltaic, wind-biomass, wind-diesel generator), targeting energy security by removing the disadvantages of using a single renewable source.
The consolidation of the objectives proposed by Directive 2001/77/EC and their extension to more geographical areas are possible only by using hybrid systems.
The EU gives great importance to the improvement of energy efficiency and to the promotion of renewable sources. Related to the above-mentioned issues, the objectives of the EU are to produce at least 20% of the gross energy consumption from renewable sources by 2020 (COM, 2006) and to increase energy efficiency by 20% by 2020 (EREC, 2011). As far as energy efficiency is concerned, an EU directive aims at a 9% reduction of energy losses by 2020 (EREC, 2008).
2.1. A generic topology of the RES integration
In the context of the international scientific studies related to the development of new alternative solutions for electrical energy production using renewable energy sources (RES), the aim of this chapter is to contribute to these studies by evaluating and working out possible concepts for the stand-alone and grid-connected operating interface of these hybrid systems, and efficient and ecological technologies that ensure an optimal use of the sources (solar energy, wind energy, hydrogen energy by using fuel cells, hydro-energy, biomass) in industry and residential buildings. The battery and the fuel cells are also meant to be reserve sources (which cover the additional energy requirements of the consumers and supply both the residential critical loads and the critical loads of the hybrid system, i.e. the auxiliary circuits for fuel cell start-up and operation), increasing the safety of the system. The fuel cell integration is provided by using a unidirectional DC/DC converter (to obtain a regulated high DC voltage), an inverter and a filter in order to accommodate the DC voltage to the required AC voltage (single-phase or three-phase). The bidirectional DC/DC converter (double arrow, Fig. 1) is used in order to charge/discharge the batteries (included in order to increase the energy supply security and to improve the load dynamics). The unidirectional DC-DC converter prevents negative current from flowing into the fuel cell stack; due to negative current, cell reversal could occur and damage the fuel cell stack. The ripple current seen by the fuel cell stack due to the switching of the boost converter (the unidirectional DC/DC converter) has to be low.
2.2. Three-phase versus single phase
Firstly, the problem of choosing the number of phases for the front-end converter is a matter of power. In this case, a three-phase line should be used for a 37 kVA power converter.
Secondly, when balanced three-phase AC loads are used, the possibility of low-frequency components occurring in the fuel cell input current is reduced.
2.3. System description
The proposed way of efficiently integrating RES is illustrated in Fig. 1. In this respect, only one inverter is used in the DC-AC conversion for interfacing the stand-alone or grid-connected consumer (Gaiceanu et al., 2007b). Through its control, the inverter can ensure efficient operation and the fulfilment of the energy quality requirements related to the harmonics level. The hybrid system can ensure two operation modes: the normal one, and the emergency one (as a backup system).
2.3.1. Operation modes
There are four modes of operation:
Direct Supply from Utility: the residential consumers are supplied directly from the Utility via the Static Switch;
Precharge Operation: the DC capacitors in the inverter part of the Power Conditioning System (PCS) can be precharged from the AC utility bus. After the DC capacitors are charged, the inverter can be switched on. As soon as it is running, the inverter by itself will keep the DC capacitors charged to a DC level higher than the no-load level of the Solid Oxide Fuel Cell (SOFC). During the precharge operation, the residential consumers will still be supplied from the Utility;
Normal Operation: the PCS converts the DC energy from the SOFC into AC and feeds the Utility and any residential consumers;
Island Operation (failure operation mode): if the Utility goes out of tolerance during normal operation, the PCS will change to island operation. The PCS converts the DC from the SOFC and battery and supplies the critical loads.
For the sake of simplicity and for reasons of chapter length, only the grid-connected hybrid system with fuel cell generator and battery pack will be investigated.
2.3.1.1. The failure operation mode (Island Operation)
In this operation mode, the power system must ensure the power supply for the critical loads (such as alarms and the auxiliary power systems for the fuel cell, the reformer and so on, depending on the consumer requirements). In the first stage, the power supply is ensured from the battery pack, followed by the SOFC. Therefore, knowing the critical load power, an adequate Simulink file is designed (Fig. 2). A simple and effective energetic model has been considered. This model puts in evidence the inverter power losses at critical load conditions. A Repeating Table block from the Simulink library is used in order to implement the load cycle of the residential consumer (critical load cycle, Fig. 4). The load cycle was sampled every 10 s over a two-day interval. Knowing the critical load power cycle, it is possible to size the required battery pack. Therefore, the output inverter power, P_out,inv, is the same as the critical load power (P_acload = P_critical_load, Fig. 2).
The losses of the Power Conditioning System (P_loss,PCS = P_loss,inv) are modelled by using a quadratic function (Metwally, 2005), which is the most used one. The power loss function requires three parameters extracted from the experimental data by the least-squares method.
The first parameter, p0, takes into account the load-independent losses [W]. The second parameter, p1, represents the voltage drops in the semiconductors, as losses proportional to the load. The last term, p2, accounts for the resistive (ohmic) losses, proportional to the square of the load [1/W]. The PCS model has been implemented in Simulink (Fig. 3) based on the following function:

P_loss,inv = p0 + p1·P_out,inv + p2·P_out,inv²

in which the coefficients of the approximated function are as follows: p0 = 0.0035·P_rated, p1 = 0.005, p2 = 0.01/P_rated.
In Fig. 3, the energetic component of the PCS block is presented. Knowing the total inverter losses at the critical power, P_loss,invtot, the corresponding DC power can be obtained as the sum of the load power and the losses:

P_DC,inv = P_critical_load + P_loss,invtot

The input and the output inverter powers are related through the inverter efficiency:

η_inv = P_out,inv / P_DC,inv

Taking into account the power requirements of the auxiliary circuits, which are supported only by the battery pack in the critical load case during the fuel cell start-up, the corresponding DC power is (Fig. 4):

P_DC = P_DC,inv + P_aux/η_aux

The necessary energy of the battery pack is obtained by integrating this DC power over the backup interval:

E_batt = ∫ P_DC dt
The blowers have been considered the main auxiliary loads; a value of η_aux = 0.7 has been considered for the equivalent efficiency of the auxiliary power circuits. In Fig. 4c, the power losses of the auxiliary power circuits have been deducted.
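The same energy bookkeeping can be sketched compactly (the flat 2 kW profile and the 500 W auxiliary demand below are assumed placeholders for the measured critical-load cycle):

% Battery energy needed for the critical load, using the quadratic
% PCS loss model above and the auxiliary-circuit efficiency eta_aux = 0.7.
Prated = 37e3;                        % inverter rating [VA]
p0 = 0.0035*Prated;  p1 = 0.005;  p2 = 0.01/Prated;

dt    = 10;                           % sample time [s], as in the text
Pload = 2e3*ones(1, 2*24*360);        % assumed flat 2 kW cycle, two days
Paux  = 500;                          % assumed auxiliary power [W]

Ploss = p0 + p1*Pload + p2*Pload.^2;  % inverter losses per sample
Pdc   = Pload + Ploss + Paux/0.7;     % total DC power drawn from battery
Ebatt = sum(Pdc)*dt/3600;             % required battery energy [Wh]

fprintf('Required battery energy: %.1f kWh\n', Ebatt/1000);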
PCU Matlab/Simulink simulator results
Based on the PCU Matlab/Simulink simulator (Fig. 2, Fig. 5a), the required capacity and energy of the battery have been obtained (Fig. 5b).
A. The Fuel Cell Power Conditioning System
The fuel cell power conditioning system consists of the fuel cell stack and a DC power converter. The fuel cell is an electrochemical device which produces DC power directly, without any intermediate stage. It has high power density and zero emission of greenhouse gases. Fuel cell stacks are connected in series/parallel combinations to achieve the desired rating. The main issue for the fuel cell power converter design is the reduction of the fuel cell current ripple. The secondary issue is to maintain a constant DC bus voltage. The former is solved by introducing an internal current loop in the DC/DC power converter control; the latter design requirement is solved by the DC voltage control.
A.1. The Fuel cell stack Matlab/Simulink based model
The polarization curve of the SOFC is based on the Tafel equation. The output voltage of the SOFC is built taking into account the Nernst instantaneous voltage E0 + a·ln(P_H2·P_O2^(1/2)), the activation overvoltage b·ln(i), the voltage variation due to the mass transport losses c·ln(1 − i/i_lim), and the ohmic voltage drop R·i (Candusso et al., 2002). The first three terms are multiplied by N0, the number of series cells, in order to obtain the fuel cell stack mathematical model, of the form:

V_FC = N0·[E0 + a·ln(P_H2·P_O2^(1/2)) − b·ln(i) + c·ln(1 − i/i_lim)] − R·i

The parameters of the Tafel equation are the load current, the temperature and the pressures of hydrogen and oxygen. The demanded current of the fuel cell system is limited between ±I_fc,limit at a certain hydrogen flow value q_H2,in (Padulles et al., 2000), so that the fuel utilization stays within its allowed minimum and maximum bounds.
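A numeric sketch of the stack polarization curve (all coefficient values below are assumed placeholders; in the chapter they come from the Ifc_init.m initialization file):

% SOFC stack polarization: Nernst term minus activation, concentration
% and ohmic losses (all coefficients are assumed example values).
N0 = 96;  E0 = 1.0;  a = 0.03;  b = 0.06;  c = 0.08;   % per-cell terms
R  = 0.12;  ilim = 200;  PH2 = 1.2;  PO2 = 1.1;        % stack R [ohm], [A], [atm]

i = 1:0.5:180;                                  % stack current [A]
V = N0*( E0 + a*log(PH2*sqrt(PO2)) - b*log(i) ...
       + c*log(1 - i/ilim) ) - R*i;             % stack voltage [V]
P = V.*i;                                       % stack power [W]

plot(i, V); xlabel('I_{fc} [A]'); ylabel('V_{fc} [V]');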
Before starting, the Simulink model of the FC power system must be initialized (based on the Ifc_init.m file, Fig. 6) from the Fuel Cell data initialization block (Fig. 5). In order to keep the demanded current between the above limits, an adequate Matlab function (Fcn) has been created.
A.2. Simulation results
By using the implemented Simulink model (Fig.5), the output voltage and the output power have been obtained, as shown in Figure 7.
A.3. Mathematical modeling of the DC-DC power converters for fuel cells and energy storage elements integration: Boost and Buck-Boost power converters
In order to obtain a constant DC voltage, a boost power converter has been taken into consideration (Figure 9), operating in continuous conduction mode (CCM).
The method of the time averaged commutation device is applied to the unitary modeling of the power converters presented in Fig. 9 (Ionescu, 1997).
During the D·Ts period, the active device is ON and the passive device is OFF. During the (1 − D)·Ts period, the active device is OFF and the passive device is ON, while the passive terminal p is connected to the common terminal c. The duty factor is denoted D and Ts is the switching period. Taking into consideration the above-mentioned hypotheses, the following instantaneous currents can be deduced:

i_a = i_c during D·Ts, and i_a = 0 during (1 − D)·Ts   (9)

In a similar manner, the specific instantaneous voltages are obtained:

v_cp = v_ap during D·Ts, and v_cp = 0 during (1 − D)·Ts   (10)

If averaging is carried out over a switching period, equations (9) - (10) assume the equivalent form for the currents:

i_a = D·i_c

and for the voltages, respectively:

v_cp = D·v_ap

where, for the sake of convenience, values such as i_a are still considered as time-averaged values over a switching period.
To demonstrate the validity of the time-averaged commutation device model, the mathematical models for the DC-DC converters, boost and buck-boost, are considered.
A.4. The Boost converter
From Fig. 11a, the following equivalent relations are obtained for the averaged switch applied to the boost topology: i_a = D·i_c with i_c = iL, and v_cp = D·v_ap with v_ap = u0.
By applying Kirchhoff's first theorem to Fig. 11b, the first differential equation, which characterizes the output voltage dynamics du0/dt, is obtained:

C·du0/dt = (1 − D)·iL − u0/R

or, in the final form:

du0/dt = (1 − D)·iL/C − u0/(R·C)

By applying Kirchhoff's second theorem, the second differential equation, which characterizes the inductor current dynamics diL/dt, is obtained:

L·diL/dt = vin − (1 − D)·u0

or, in the form:

diL/dt = vin/L − (1 − D)·u0/L

The commutation mathematical model in state-space form, with the state vector x = [iL u0]^T, is therefore:

dx/dt = [ vin/L − (1 − D)·u0/L ;  (1 − D)·iL/C − u0/(R·C) ]

The voltage u0 is considered the controlled output.
By setting the differential terms to zero, the steady-state regime is obtained from the above dynamic model:

U0 = Vin/(1 − D),  IL = U0/[(1 − D)·R]
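The averaged model can be checked numerically against these steady-state relations (a minimal forward-Euler sketch; the 380 V input matches the battery voltage quoted later, and the duty cycle is chosen for a 690 V DC link):

% Forward-Euler simulation of the averaged boost model in CCM.
L = 80e-6;  C = 3240e-6;  R = 20;    % circuit parameters from the text
Vin = 380;  D = 1 - Vin/690;         % duty cycle for U0 = 690 V

dt = 1e-6;  iL = 0;  u0 = 0;
for k = 1:round(0.5/dt)              % 0.5 s settles the transient
    diL = ( Vin - (1-D)*u0 )/L;
    du0 = ( (1-D)*iL - u0/R )/C;
    iL  = iL + dt*diL;
    u0  = u0 + dt*du0;
end
fprintf('u0 = %.1f V (expected %.1f), iL = %.1f A\n', u0, Vin/(1-D), iL);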
B. Battery Power Conditioning System
The Battery Power Conditioning System consists of a battery pack and a DC-DC power converter. The NiMH battery produces a variable DC power. The main task of the battery pack is to deliver the critical load power (Fig. 1).
Therefore, individual batteries are connected in series/parallel combinations to achieve the desired rating. The Matlab/Simulink battery model from The MathWorks has been used. The main issue for the battery power converter is to charge/discharge the battery according to the available power flow. The problem is solved by introducing an internal current loop (Fig. 12) in the DC/DC power converter control (Fig. 13).
B1. Simulink implementation of the SMC control diagram for DC-DC Boost Power Converter
In (Gulderin Hanifi, 2005) it is shown that the existence condition of the SMC is that the output voltage must be greater than the input one.
The DC-link voltage control is based on a Proportional-Integral (PI) controller with parameters k_p = 0.00001 and k_i = 0.01. The circuit parameters of the boost converter are L_boost = 80·10⁻⁶ H, C_boost = 3240·10⁻⁶ F, R_boost = 20 Ω.
The current loop is based on sliding mode control (Fig. 14); the Matlab Simulink implementation is shown in Fig. 13.
The sliding mode surface S consists of the current error:

S = iL,ref − iL

which is forced to vanish (S = 0) in order to bring the system onto the sliding surface. The sliding mode controller has two functions: the control function and the modulator one. Therefore, the output of the SMC is the duty cycle, D, of the boost power converter.
In order to follow the current reference, the output DC voltage must be greater than a limit set by the grid voltage and by the drop across the phase inductance, of the form:

V_DC > √3·√[(√2·V_grid,max)² + (ω·L_inv·I_grid,max)²]

where the RMS grid voltage is V_grid,max, the maximal grid current is I_grid,max, ω is the frequency (rad/s), and L_inv is the phase inductance (Candusso et al., 2002).
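A minimal sketch of the sliding-mode idea on the switched boost model (a plain hysteresis relay on S = iL,ref − iL; the current reference, the band and the precharged capacitor are assumed values):

% Hysteresis (sliding-mode) current control of the switched boost model.
L = 80e-6;  C = 3240e-6;  R = 20;  Vin = 380;
iLref = 60;  h = 0.5;                % current reference and band [A]
dt = 1e-7;  sw = 1;                  % transistor state (start ON)
iL = 0;  u0 = Vin;                   % capacitor precharged, as in the
                                     % Precharge Operation described earlier
for k = 1:round(0.1/dt)
    S = iLref - iL;                  % sliding surface S = iLref - iL
    if S >  h, sw = 1; end           % current too low  -> switch ON
    if S < -h, sw = 0; end           % current too high -> switch OFF
    iL = iL + dt*( Vin - (1-sw)*u0 )/L;
    u0 = u0 + dt*( (1-sw)*iL - u0/R )/C;
end
% at equilibrium Vin*iLref = u0^2/R, i.e. u0 tends to sqrt(Vin*iLref*R)
fprintf('iL = %.2f A, u0 = %.1f V\n', iL, u0);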
The simulation results confirm the benefits of the SMC control (Fig. 15a). The output voltage of the converter reaches and stabilizes at the reference value of 690 V in about 2·10⁻² s (Fig. 15b), a very short time in comparison with other control methods, while the voltage error is zero (Fig. 15c).
The fundamental purpose of using this converter is to raise the voltage from the fuel cell generator. Thus, the battery pack delivers 380 Vdc, which is the input voltage of the boost converter; the output voltage must be compatible with the three-phase voltage source inverter input, i.e. 690 Vdc.
The advantages of this type of control are stability, robustness and good dynamics.
B1. The mathematical model of the Buck-Boost power converter
From Fig. 16a, the following equivalent relations are obtained for the averaged switch: i_a = D·i_c and v_cp = D·v_ap.
Following the above procedure applied to the boost power converter, the dynamic model of the buck-boost converter is deduced (taking u0 as the magnitude of the inverted output voltage):

diL/dt = D·vin/L − (1 − D)·u0/L
du0/dt = (1 − D)·iL/C − u0/(R·C)

and the steady-state regime, respectively:

U0 = Vin·D/(1 − D),  IL = U0/[(1 − D)·R]
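With the battery and DC-link levels used below (about 390 V in, 690 V out), the steady-state relation gives the operating point directly (a quick check under the model above):

% Steady-state buck-boost: U0 = Vin*D/(1-D)  =>  D = U0/(U0 + Vin).
Vin = 390;  U0 = 690;  R = 50;
D  = U0/(U0 + Vin);          % required duty cycle, about 0.639
IL = U0/((1-D)*R);           % average inductor current, about 38 A
fprintf('D = %.3f, IL = %.1f A\n', D, IL);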
B.2. The Simulink model of the Buck Boost power converter
Both the buck boost power converter and the current controller have been implemented and simulated in Simulink (Fig.17).
B.2.1. The Current Controller
By imposing the inductor current reference (I_L,ref, Fig. 18), the current controller ensures fast reference tracking while delivering the appropriate duty cycle (D). By introducing an anti-parallel diode for each active power device, a bidirectional buck-boost converter is obtained.
The buck-boost converter is necessary to connect the battery stack (U_d = U_dc) to the power inverter system, and it comes into operation when the electrical power demanded by the consumers is higher than the electrical power obtained from the fuel cell generator. Another reason for using the buck-boost converter is to recharge the batteries from the other available sources. The circuit parameters of the buck-boost power converter are L = 100·10⁻⁵ H, C = 500·10⁻⁸ F, R = 50 Ω.
Thanks to the buck-boost current controller, the actual inductor current follows the reference current; a delay of 0.001 s can be observed in the output current (Fig. 19a). The input voltage of the buck-boost converter, U_d, is about 390 Vdc and is delivered from the battery stack, while the output voltage, U_o, is boosted to 690 Vdc (Fig. 19b).
C. Power Source Management (PSM) (Fig.20)
The purpose of the PSM is to ensure an adequate DC-link voltage to the power inverter from both power source generators: the solid oxide fuel cell stack and the battery pack.
The final DC-link voltage (V_DC_inverter, Fig. 21) is delivered to the Voltage Source Inverter (VSI) by the Power Source Management block (Fig. 20).
3. Inverter modelling and control
The fundamental types of control can be classified into two categories: current control and voltage control. When the inverter is connected to the network, the network sets the amplitude and frequency of the inverter output and the inverter operates in current control mode. From the classical current control, other control methods can be derived, such as active and reactive power control or voltage control. If the network into which power is injected is not available due to improper network parameters, the inverter will autonomously supply the load; consequently, it supplies the AC voltage with adequate amplitude and frequency and it is not affected by network blackouts. In this case, the inverter will control the voltage. The 50 Hz frequency is ensured by a phase-locked loop (PLL) control. The grid converter is a full-bridge IGBT-based converter and it normally operates in inverter mode, such that the energy is transferred from the hybrid source to the utility grid and/or to the load. When the system is operating in grid-connected mode, the PLL tracks the grid voltage to ensure synchronization; but when the system enters the islanding mode of operation, the VSI can no longer track the grid characteristics. As seen in Fig. 22, the PLL for the VSI changes the frequency which is sent to the pure integrator for angle calculation by switching between the frequency from the filter and that from a fixed reference. In the islanding mode of operation, the VSI needs an externally provided frequency reference, ω_fixed (Fig. 22). The PLL for the VSI is the main catalyst for the re-synchronization and re-closure of the system to the Utility once disturbances have passed. The frequency from the filter is used during the grid-connected mode.
The Grid Power Inverter for Renewable Energy Sources Integration is rated at 37 kVA and delivers the power to the grid (simulated as a three-phase programmable voltage source in Fig. 23) and the necessary
power to the consumers (simulated as three-phase parallel RLC load in Fig. 23). There is an adequate boost inductor (three-phase series RL branch, Fig. 23) between the grid and the inverter. In order
to calculate the dq components of the grid current, (ID, IQ), the Feedback Signals Acquisition block is used (Fig. 24). Through the implemented Simulink blocks (Figs.25a, 25b), the active power of
the load is known.
3.1. The grid inverter control
The grid inverter control block delivers the corresponding duty-cycles to the Power Inverter (Gate_Pulses in Fig.23 or SW^*[ABC] in Fig. 26). To achieve full control of the utility-grid current, the
DC-link voltage must be boosted to a level higher than the amplitude of the grid line-line voltage. The power flow of the grid side inverter is controlled in order to keep the DC-link voltage
constant. The structure of the DC/AC converter control system is shown in Fig. 27. The control structure of the power inverter is of vector control type and it uses the power balance concept (Sul and
Lipo, 1990). Therefore, the load current feedforward component was introduced in order to increase the dynamic response of the bus voltage to load changes.
On the basis of the DC voltage reference V^*[DC], DC voltage feedback signal (V[DC]), AC input voltages (E[ab] and E[bc]), current feedback signals (I[a], I[b]), and the load power signal (obtained
through a load power estimator) (Gaiceanu, 2004a), the Digital Signal Processor-based software operates the control of the power inverter (DC link voltage and current loops) system and generates the
firing gate signals to the PWM modulator (Fig.27). The grid connected PWM inverter supplies currents into the utility line by maintaining the system power balance. By controlling the power flow in
power conditioning system, the unidirectional DC-link voltage can be kept at a constant value. Using the synchronous rotating frame, the active power is controlled independently by the q-axis current
whereas the reactive power can be controlled by the d-axis current.
The control of the grid inverter is based on the minor current loop in a synchronous rotating-frame with a feedforward load current component added in the reference, completed with the DC voltage
control loop in a cascaded manner. The outer loop controller consists of two parts: the phase-locked loop (PLL) and the DC link voltage controller. The former, the PLL, is used to extract the
fundamental frequency component of the grid voltages and it also generates the corresponding quadrature signals in d-q synchronous reference frame, E[d]-E[q], which are necessary to calculate the
active and reactive power of the grid. The latter monitors the power control loop. The power control of the PWM inverter is based on the power detection feedforward control loop and the DC-voltage
feedback control loop (Fig.27). The main task of the voltage controller is to maintain the DC link voltage to a certain value. Another task is to control the grid converter power flow. The task of
the DC link voltage and of the current regulation has been accomplished by means of the Proportional-Integral (PI) controller type, because of its good steady-state and dynamic behavior with the
power inverter. It is important to underline that the PI controller performance is parameter-sensitive, because its design procedure is based on the DC bus capacitor and inductor values. However,
in these specific applications, the system parameter values are known with reasonable accuracy. The design of linear control systems can be carried out in either the time or the frequency
domain. The relative stability is measured in terms of gain margin and phase margin. These are typical frequency-domain specifications and should be used in conjunction with tools such as the Bode plot.
The transfer function of the PI controller (Gaiceanu, 2007b) has the standard proportional-integral form C(s) = K[pc]·(1 + 1/(T[ic]·s)).
The calculation of the PI controller coefficients, K[pc] (proportional gain) and T[ic] (integral time), is done by imposing the phase margin ϕ[mc] (in radians) and the bandwidth ω[c] (in radians per
second). Imposing these two conditions, the following relations for K[pc] and T[ic] are obtained (Gaiceanu, 2007b):
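While the exact closed-form relations of (Gaiceanu, 2007b) are not reproduced here, the general conditions behind such a design can be sketched. With C(s) = K[pc]·(1 + 1/(T[ic]·s)) and plant G(s), imposing unity open-loop gain and the desired phase margin at the crossover frequency ω[c] gives

|C(jω[c])·G(jω[c])| = 1 and arg[C(jω[c])·G(jω[c])] = -π + ϕ[mc].

Since arg C(jω) = -arctan[1/(ω·T[ic])], the phase condition fixes T[ic] = 1/{ω[c]·tan[π - ϕ[mc] + arg G(jω[c])]}, after which the gain condition gives K[pc] = 1/{|G(jω[c])|·√[1 + 1/(ω[c]·T[ic])²]}.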
3.2. The Phase Locked Loop (PLL)
A phase locked loop (PLL) ensures the synchronization of the reference frame with the source phase voltages by maintaining their d component at zero (E[d]=0) through a PI controller; the grid
frequency is delivered by knowing the line-line grid voltages (EBA, EBC), as in Figs.28, 29.
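A minimal discrete-time sketch of such a synchronous-reference-frame PLL is given below (structure assumed from the description above; the gains, sign conventions, and the amplitude-invariant Clarke transform are illustrative choices, not the chapter's implementation):

    import math

    def pll_step(ea, eb, ec, theta, integ, dt, kp=100.0, ki=5000.0,
                 w_nom=2.0 * math.pi * 50.0):
        # One step of a dq PLL: a PI loop drives the d component of the
        # grid voltage to zero (E_d = 0). In islanding mode, w_nom would be
        # replaced by the fixed external reference frequency.
        e_alpha = (2.0 * ea - eb - ec) / 3.0          # Clarke transform
        e_beta = (eb - ec) / math.sqrt(3.0)
        e_d = e_alpha * math.cos(theta) + e_beta * math.sin(theta)
        err = -e_d                                    # phase-error signal
        integ += ki * err * dt
        w = w_nom + kp * err + integ                  # estimated frequency [rad/s]
        theta = (theta + w * dt) % (2.0 * math.pi)
        return theta, integ, w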
3.3. The current controllers
By using a decoupling of the nonlinear terms, the cross coupling (due to boost input inductance) between the d and q axes was compensated. To decouple current loops, the proper utility voltage
components have been added (Gaiceanu, 2004b) (Fig 32).
Fig.33 shows the Simulink implementation of the reverse transformation from the synchronous reference frame (d,q) to the fixed reference frame (A,B,C) through the (α,β) transformations.
4. The DC link current estimator
The load power (P[load]=P[out]) is calculated from the load inverter terminals. Another method is to estimate the load power from the DC link, indirectly, through a first- or second-order DC load
current estimator (Gaiceanu, 2004a). The power feedforward control (Uhrin, 1994) allows the calculation of the input current reference based on the generated power, and it satisfies the power balance in
a feedforward manner. By using the load feedforward control, the input current reference changes with the load, so a better transient response is obtained. The increased power
response of the DC-AC inverter makes it possible to reduce the size of the DC link capacitor while maintaining the stability of the system.
The block diagram of the second-degree estimator is presented in Fig. 34; its inputs are the measured DC link voltage V[dc](p) and the calculated AC load current component
I[q]. The output of the estimator is the estimated DC link load power P^[dcout](p) (the caret denoting an estimated quantity).
The estimator (Fig. 34), after some manipulations (Fig. 35), takes the form presented in Fig. 36.
Using the Laplace transform, the DC link voltage equation takes the form of equation (24). This means that the block diagram from Fig. 35 can be redrawn as in Fig. 36.
4.1. Calculation of the estimator parameters
The problem consists of calculating the parameters k and τ such that the error between the estimated DC load current I^[dcout](p) and the actual DC load current I[dcout](p) is
negligible. The closed-loop transfer function of the estimator (Fig. 37), derived from Fig. 36, is given by:
Considering a step variation of I[dcout](p), the estimated DC load current takes the usual second-order form of equation (28), characterized by a damping factor ζ and a pulsation factor ω[n], both functions of k and τ. The parameters k and τ are chosen such that the response I^[dcout](p) has an acceptable overshoot, a small step-response settling time, and minimum output noise.
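For reference (a standard-textbook sketch, not the chapter's exact expressions): an underdamped second-order response with damping factor ζ and natural pulsation ω[n] has the form ω[n]²/(s² + 2·ζ·ω[n]·s + ω[n]²), with overshoot M[p] = exp(-π·ζ/√(1 - ζ²)) and settling time t[s] ≈ 4/(ζ·ω[n]). Choosing target values for M[p] and t[s] therefore fixes ζ and ω[n], from which k and τ follow.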
An advantage of the estimation method is that there is no ripple in the feed-forward reference current of the source side. The small reference current ripple is delivered from the output of
the DC link voltage controller.
4.2. Simulation results
Fig. 41 shows the comparison of the grid voltage and the converter voltage (Ed, Vdref): the Ed component is 0 and the voltage Vdref is calculated as Vdref = ω·L[in]·I[qN] + E[d] in the steady-state regime (Gaiceanu, 2007a). The voltages Eqref and Vqref have the same value of 326.55 V.
The reference and actual d axis current waveforms (Id-Idref) are shown in Fig.41 proving the cancellation of the reactive power.
The DC link voltage step response was obtained by using a DC link voltage test generator (Fig.42) under a load current variation between [0.65, 1.15]×I[N] (Fig.41), I[N] being the rated value of the
line current.
The performances of the active current inverter control are shown in Fig.41. The actual active current, I[q], accurately follows the reference I[qref] (Fig.41). In Fig. 42 the performances of DC-link
voltage controllers are shown. The trace of the A phase of the line current is in phase with A phase of the grid voltage, which clearly demonstrates the unity power factor operation (Fig. 43).
Comparative waveforms showing unity power factor operation during regeneration obtained from the DC-AC power converter are shown in Fig. 43. For all three methods of active and reactive power
determination (Fig.44) the steady-state values are the same; however, the first method is more accurate in the transient regime (Fig.44) (Gaiceanu, 2004b).
The 2nd degree DC link current estimator was implemented for a 37kVA power inverter. The dynamic performances of the DC load current estimator are presented (Gaiceanu, 2004a).
By an adequate choice of the estimator parameters an acceptable step response can be obtained (Fig.45).
Through simulation (Figs. 45-46) the real and the estimated DC link currents are obtained.
The power semiconductor active devices operate with a switching time T[s] = 125 μs and a 2 μs dead time. The converter specifications are as follows: supply voltage (line-to-line) 400 V;
mains frequency 50 Hz; line current 69 A; line inductance 0.5 mH; DC bus capacitor 1000 μF; ambient temperature 40 °C; DC voltage reference 690 V.
5. dSpace implementation
The PI Current Control in the Synchronous Reference Frame is shown in Fig.47. The current regulators have two tasks: error cancellation and modulation (the appropriate switching states are
generated). With adequate tuning of the current regulators, the actual load current, iA, accurately follows its reference i*A (Fig. 49), in contrast with the behaviour obtained under inappropriate tuning of the current controllers (Fig.48).
6. Conclusions
The main outcomes of the chapter:
The chapter pursues increased public awareness of renewable energy technologies through open access, and aims to encourage researchers to implement renewable energy projects.
The chapter will contribute to the promotion of RES through the formation of experts, so that these experts can later carry out RES projects with outstanding results.
The implicit longer term outcomes are related to:
Accurate models for fuel cells power systems.
New designs of adequate controllers for integrated systems, which will enable the efficient operation of such grid-connected power inverters, with high stability in service and power quality.
The rapid prototyping through the dSpace real-time platform can prove very useful in the medium and longer term for further modelling/investigation/development of similar systems.
The chapter also contributes to the development of theoretical knowledge if the following aspects are taken into account: the complexity of the issue, its interdisciplinary character, the
performance of an experimental model, and the necessary theoretical knowledge of the interface solutions for the renewable system, in particular for fuel cells.
Through proper control, a sinusoidal input current, a nearly unity power factor (0.998), bidirectional power flow, small (up to 5%) ripple in the DC-link voltage under any operating conditions,
disturbance compensation capability, fast control response, and high-quality balanced three-phase output voltages were obtained. By using the load feed-forward component, the input current
reference changes with the load so that a better transient response is obtained. The proposed control was successfully implemented by the author on a quasi-direct AC-AC power converter (Gaiceanu M., 2004b)
and, based on the Matlab/Simulink software, the simulation test has been performed for the modified topology of the grid power inverter. The experimental results (Figs. 48, 49) have been obtained by
using the dSpace platform (Fig.47). The second-degree DC load current estimator for the DC-AC power converter system is developed in this chapter. Since the DC-AC power converter control by means of
pulse-width modulation (PWM) is based on the power balance concept, its load power should be known. In order to overcome the measuring solution, with its well-known disadvantages, the load power can be
estimated from the DC side by using the DC load current estimator. Thus, it is mandatory to have the information regarding the DC load current. DC voltage regulation with good dynamic response is
achieved even if the DC capacitance is substantially reduced. This also implies good accuracy of the DC link load current estimation.
International Journal of Biomaterials
Volume 2013 (2013), Article ID 639841, 6 pages
Research Article
Weibull Analysis of Fracture Test Data on Bovine Cortical Bone: Influence of Orientation
^1Department of Engineering and Physics, University of Central Oklahoma, Edmond, OK 73034, USA
^2Department of Mechanical Engineering, Texas Tech University, Lubbock, TX 79409, USA
Received 30 June 2013; Revised 3 October 2013; Accepted 3 October 2013
Academic Editor: Abdelwahab Omri
Copyright © 2013 Morshed Khandaker and Stephen Ekwaro-Osire. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution,
and reproduction in any medium, provided the original work is properly cited.
The fracture toughness, K_IC, of a cortical bone has been experimentally determined by several researchers. The variation of K_IC values arises from variation in specimen orientation, shape, and size
during the experiment. The fracture toughness of a cortical bone is governed by the severest flaw and, hence, may be analyzed using Weibull statistics. To the best of the authors’ knowledge, however,
no studies of this aspect have been published. The motivation of the study is the evaluation of Weibull parameters in the circumferential-longitudinal (CL) and longitudinal-circumferential (LC)
directions. We hypothesized that Weibull parameters vary depending on the bone microstructure. In the present work, a two-parameter Weibull statistical model was applied to calculate the plane-strain
fracture toughness of bovine femoral cortical bone obtained using specimens extracted from CL and LC directions of the bone. It was found that the Weibull modulus of fracture toughness was larger for
CL specimens compared to LC specimens, but the opposite trend was seen for the characteristic fracture toughness. The reason for these trends is the microstructural and extrinsic toughening mechanism
differences between the CL and LC directions of bone. The Weibull parameters found in this study can be applied to develop a damage-mechanics model for bone.
1. Introduction
Bone is an anisotropic material. The fracture toughness of bone varies depending on the sampling site and the initial crack orientation of the fracture test samples with respect to the applied load. Researchers
prepared specimen in different orientations to measure fracture toughness [1–4]. The orientation of the specimen used in the fracture test was based on fracture propagation direction with respect to
the long axis of the bone. According to crack orientation, there were two types of specimen considered for the measurement of fracture toughness: longitudinal cracking specimen and transverse
cracking specimen. In the longitudinal cracking specimen, a crack propagates parallel to the long axis, that is, along the collagen fibers through the bone matrix, whereas in the transverse
cracking specimen, a crack propagates normal to the long axis, that is, across the long axis. In this study, circumferential-longitudinal (CL) and longitudinal-circumferential (LC) direction specimens are
categorized as longitudinal cracking and transverse cracking specimens, respectively (Figure 1). The crack orientation of a specimen was classified according to the crack plane during fracture toughness
testing. The first letter represents a normal direction to the created crack plane; the second letter represents the expected direction of crack propagation [5].
There are many reports of the plane-strain mode I fracture toughness, K_IC, in different bone materials (cortical and trabecular bones) obtained under both quasistatic and dynamic loading conditions [6–9].
Typically, there is a large scatter in K_IC values because of nonuniformity in specimen microstructure, size, shape, and initial crack length. Since the standard deviation in the measurement of fracture
toughness has a magnitude similar to the average value, a single-valued fracture toughness is not a reliable parameter to predict fracture toughness for bone materials [10]. Fracture of a bone occurs
where it is structurally weakest (at the Haversian and Volkmann [10] canals). Hence, it is appropriate to use Weibull statistics [11, 12] to analyze fracture test data. Assuming that the fracture
toughness distribution is describable using the two-parameter Weibull equation, the failure probability distribution function is given by [13]

P_f = 1 − exp[−(K_IC/K_0)^m],    (1)

where K_IC is the fracture toughness and P_f is the failure probability. In (1), m is the Weibull modulus, which indicates the scatter of the test data, and K_0 is the characteristic fracture
toughness or scale parameter (the fracture toughness value below which 63.2% of the test data fall, i.e., P_f = 0.632 when K_IC = K_0). The objective of the present work was to determine the influence
of the orientation of the cortical bone from which the test specimen was extracted on the values of the Weibull modulus and the scale parameter of K_IC.
2. Materials and Methods
2.1. Specimen Configuration
Power analysis was conducted to determine the appropriate number of test samples using the mean and standard deviation of the fracture toughness of cortical bone in the longitudinal and transverse
directions reported by Lucksanasombool et al. [1]. The analysis found that 24 or more samples provide a 95% confident estimate of fracture toughness for CL and LC specimens with an error of 0.04 from their population mean.
Therefore, more than 24 samples of CL and LC specimens were made during this study. Fifty-three 20mm × 4mm × 2mm dimension single edge-notch bend (SENB) specimens were prepared from cortical bone
harvested from the femur of an animal 18 months old or less, with 28 and 25 cuts from the CL and LC orientations, respectively (Figure 1). The initial notch of the specimens was 2 mm. The
dimensions of the specimens conformed to ASTM E 399 standard [14]. Table 1 shows the dimensions of CL and LC specimens with their mean and standard deviation. The initial crack length was created at
the center of the specimens by a Buehler Isomet 11-1180-160 diamond saw cutter with a thickness of 0.125mm. Nikon’s SMZ-2500 stereo microscope was used to view the topography of the specimens and
measure the dimensions of the specimens.
2.2. Experiment
A custom-made three-point bending test apparatus (see Figure 2) was used in this study. The bone specimens were supported horizontally by two rollers located 16 mm apart, which is the span length, S,
for the SENB test. A load cell (FUTEK, model L2920) with a digital display (FUTEK, model IBT 500) was used to measure the indention load that was applied at the midspan of the bone. A digital
displacement gage (Mitutoyo, Series 543) was used to measure the load point displacement. The data obtained was used to generate force versus displacement graphs. The crack initiation and propagation
were observed using a stereomicroscope (Nikon’s SMZ-2500), focused at the midpoint of the bone during the tests.
2.3. Data Analysis
The critical loads for rupture, P_c, for the CL and the LC samples were extracted from the load-displacement graph. The value of K_IC was calculated using the following expression [15]:

K_IC = [P_c · S / (B · W^(3/2))] · f(α),

where B is the specimen thickness, W is the width, and f(α) is the shape function at the initial crack length, a. For three-point bend testing, the values of f(α) were calculated using [16], where α is
the normalized crack length defined as α = a/W. The Weibull method was used to calculate the Weibull parameters (m and K_0) of the fracture toughness, K_IC, data for CL and LC specimens. According to
the method, (1) can be expressed as

ln ln[1/(1 − P_f)] = m · ln K_IC − m · ln K_0.    (4)

After sorting and numbering the K_IC values of all the CL and LC specimens in ascending order, the failure probabilities P_f were estimated from the rank number i of each specimen's fracture
toughness and the specimen sample size N (N = 28 for CL specimens and N = 25 for LC specimens). The K_IC and P_f data were used to generate the Weibull statistic graphs [ln ln(1/(1 − P_f)) versus
ln K_IC]. Univariate regression analyses of ln ln(1/(1 − P_f)) against the ln K_IC data were conducted to calculate m and K_0 using a Microsoft Excel Weibull analysis add-on [17]. The Weibull modulus
was calculated directly from the slope of the Weibull statistic graphs. A one-way ANOVA analysis was used to compare Weibull modulus values as a function of orientation for bone specimens. The
estimate for the scale parameter was calculated using K_0 = exp(−b/m), where b is the intercept of the Weibull graph. The Weibull statistical model, equation (1), was applied to the fracture
toughness data to obtain the probability of failure.
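As an illustration of this procedure, a minimal computational sketch is given below. The rank estimator P_f = i/(N + 1) and the toughness values are assumptions for demonstration only; neither the paper's exact estimator nor its data are reproduced here.

    import math

    def weibull_fit(k_values):
        # Least-squares fit of the linearized two-parameter Weibull model,
        # equation (4): ln ln(1/(1 - P_f)) = m ln K_IC - m ln K_0.
        n = len(k_values)
        xs, ys = [], []
        for i, k in enumerate(sorted(k_values), start=1):
            pf = i / (n + 1.0)               # assumed rank estimator
            xs.append(math.log(k))
            ys.append(math.log(math.log(1.0 / (1.0 - pf))))
        xbar, ybar = sum(xs) / n, sum(ys) / n
        m = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
        b = ybar - m * xbar                  # intercept b = -m ln K_0
        return m, math.exp(-b / m)           # (Weibull modulus, scale K_0)

    # Illustrative (made-up) toughness values; not the paper's data.
    print(weibull_fit([3.1, 3.4, 3.6, 3.9, 4.2, 4.5, 4.9, 5.3]))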
3. Results and Discussion
The results are presented in Figures 3 to 5 and Tables 2 to 3. The comparison of the load-displacement plots of a CL and LC specimen is shown in Figure 3. This figure shows that the slope of the load
versus the displacement value is steeper at the initial stage of the experiment than at failure condition. This result means a higher load is required at the initial stage than later near failure for
the same load point displacement. The increment of the load with displacement behaved linearly until the onset of cracking which is observed in the figure by the nonlinearity (fracture initiation)
point of the load-displacement curve. Also, Figure 3 shows that, for the same notch depth ratio, the LC specimen requires a higher initial fracture load than the CL specimen. Figure 4 shows the Weibull
statistic graphs [ln ln(1/(1 − P_f)) versus ln K_IC]. The regression model coefficients for the least-squares fit of the Weibull graphs for both specimens are given in Table 2. Results show that both
specimen directions had significant regressions of the transformed failure probability (equation (4)) against ln K_IC (Figure 4), demonstrating that the two-parameter Weibull model is applicable for the analysis.
Figure 5 shows the cumulative distribution functions of the fracture toughness values for the CL and LC specimens. Figure 5 also shows a wider range of fracture toughness for the LC specimens compared to
the CL specimens. Individual Weibull strength moduli, m, and characteristic fracture toughnesses, K_0, were calculated for each group (Table 3). The Weibull strength moduli were significantly different
(ANOVA) between the specimen groups, as expected, because the slopes are related to the specimens' material failure properties. The Weibull modulus of the CL specimens is higher than that of the LC
specimens. This means that the scatter of the fracture toughness values was lower for the CL specimens compared to the LC specimens. Since the scale parameter of the LC specimens is higher than that
of the CL specimens, one expects to find that the fracture toughness is higher for the LC specimens than for the CL specimens at a particular failure probability (Figure 5).
Different distributions of structural types in front of the propagating crack in the CL and LC direction specimens can be the reason for the difference in the Weibull parameters of K_IC between the two groups.
This explanation is in agreement with Lipson and Katz [18]. The authors reported a different distribution of the structural types within different locations of the same bovine femur. They found that
a different level of osteonal remodeling is directly related to the pattern of mechanical stress at the various fracture sites. The crack propagation in the CL specimens was parallel to the collagen
fibers through the bone matrix, whereas in the LC specimen, the crack propagated across the collagen fibers. For LC specimens, the variation of the mechanical stress at the tip of the crack is due to
high variation of the bone matrix and/or mineralized collagen fibrils and/or osteonal systems in the transverse direction. On the other hand, for CL specimens, the variation of the mechanical stress
at the tip of the crack is due to variation of structural differences of the cement line, the boundary between secondary osteons and the surrounding lamellar matrix. The difference between the
extrinsic toughening behavior for the LC and the CL specimens may be the source of a larger characteristic fracture toughness value in the LC specimens compared to the CL specimens. According to
Ritchie et al. [5], extrinsic toughening in front of the propagating crack tip in LC specimen occurs due to the collagen fiber bridging, crack deflection due to the cement line, uncracked ligament
bridging, and microcracking, whereas extrinsic toughening in front of the crack tip in CL specimen occurs mainly by microcracking and uncracked ligament bridging.
There are no publications on Weibull parameters of fracture toughness data for bovine cortical bone in the CL and LC directions with which to compare the results of this investigation. However,
Pithioux et al. [8] applied the Weibull distribution to bovine compact bone tension failure under quasistatic loading. Dog-bone-shaped tensile specimens were used for that study. The direction of
fracture of the specimen was the same as that of the CL specimen used in this study. The Weibull tensile strength modulus (scatter of tensile strength) was found to be 5.77 by Pithioux et al. [8],
which is in close agreement with the Weibull modulus of fracture toughness in this study (5.48). The remaining difference is reasonable since the bone type, age, size, loading, shape, and measured
properties used in this study were different from those of the test specimens used by Pithioux et al. [8].
The results of this study can be utilized to quantify the role of the bone toughening mechanism in the failure probability of an extremely diverse range of biomedical and nonbiomedical applications,
and they have potential use for orthopedic bone cement. Also, a failure model that considers physiological processes such as remodeling and adaptation can be investigated. This study will enable
biomedical researchers to predict more effectively the microdamage associated with bone damage and to design bone-implant systems.
4. Conclusion
The aim of this paper was to compare the bovine cortical bone fracture toughness in two different directions: CL and LC. Weibull statistical law was used to analyze the fracture failure variation and
characteristic fracture toughness values of the CL and LC specimens. The results of this study are as follows.
(1) The individual fracture toughness data for each direction of specimen tested follow the two-parameter Weibull distribution. Each distribution was characterized by a different Weibull modulus.
(2) LC direction cortical bone possesses greater fracture toughness than CL direction specimens for a particular failure probability level.
(3) There is a statistically significant difference in Weibull modulus and characteristic fracture toughness for specimens of different orientations. The observed variability in the fracture of
bovine cortical bone in the CL and LC directions was explained by microstructural variables depending on the direction.
References
1. P. Lucksanasombool, W. A. J. Higgs, R. J. E. D. Higgs, and M. V. Swain, “Fracture toughness of bovine bone: influence of orientation and storage media,” Biomaterials, vol. 22, no. 23, pp. 3127–3132, 2001.
2. R. K. Nalla, J. S. Stölken, J. H. Kinney, and R. O. Ritchie, “Fracture in human cortical bone: local fracture criteria and toughening mechanisms,” Journal of Biomechanics, vol. 38, no. 7, pp. 1517–1525, 2005.
3. H. Peterlik, P. Roschger, K. Klaushofer, and P. Fratzl, “From brittle to ductile fracture of bone,” Nature Materials, vol. 5, no. 1, pp. 52–55, 2006.
4. G. Pezzotti and S. Sakakura, “Study of the toughening mechanisms in bone and biomimetic hydroxyapatite materials using Raman microprobe spectroscopy,” Journal of Biomedical Materials Research A, vol. 65, no. 2, pp. 229–236, 2003.
5. R. O. Ritchie, J. H. Kinney, J. J. Kruzic, and R. K. Nalla, “A fracture mechanics and mechanistic approach to the failure of cortical bone,” Fatigue and Fracture of Engineering Materials and Structures, vol. 28, no. 4, pp. 345–371, 2005.
6. J. D. Currey, “What determines the bending strength of compact bone?” Journal of Experimental Biology, vol. 202, no. 18, pp. 2495–2503, 1999.
7. R. M. Pidaparti, U. Akyuz, P. A. Naick, and D. B. Burr, “Fatigue data analysis of canine femurs under four-point bending,” Bio-Medical Materials and Engineering, vol. 10, no. 1, pp. 43–50, 2000.
8. M. Pithioux, D. Subit, and P. Chabrand, “Comparison of compact bone failure under two different loading rates: experimental and modelling approaches,” Medical Engineering and Physics, vol. 26, no. 8, pp. 647–653, 2004.
9. D. Taylor and J. H. Kuiper, “The prediction of stress fractures using a ‘stressed volume’ concept,” Journal of Orthopaedic Research, vol. 19, no. 5, pp. 919–926, 2001.
10. R. Sadananda, “Probabilistic approach to bone fracture analysis,” Journal of Materials Research, vol. 6, no. 1, pp. 202–206, 1991.
11. C. I. Vallo, “Influence of load type on flexural strength of a bone cement based on PMMA,” Polymer Testing, vol. 21, no. 7, pp. 793–800, 2002.
12. J. P. Morgan and R. H. Dauskardt, “Notch strength insensitivity of self-setting hydroxyapatite bone cements,” Journal of Materials Science: Materials in Medicine, vol. 14, no. 7, pp. 647–653, 2003.
13. M. Staninec, G. W. Marshall, J. F. Hilton et al., “Ultimate tensile strength of dentin: evidence for a damage mechanics approach to dentin failure,” Journal of Biomedical Materials Research, vol. 63, no. 3, pp. 342–345, 2002.
14. ASTM, Annual Book of ASTM Standards, Section 3, E399-90 Metals Test Methods and Analytical Procedures, 2001.
15. D. Vashishth, K. E. Tanner, and W. Bonfield, “Experimental validation of a microcracking-based toughening mechanism for cortical bone,” Journal of Biomechanics, vol. 36, no. 1, pp. 121–124, 2003.
16. F. M. Haggag and J. H. Underwood, “Compliance of a three-point bend specimen at load line,” International Journal of Fracture, vol. 26, no. 2, pp. R63–R65, 1984.
17. W. W. Dorner, “Using Microsoft Excel for Weibull analysis,” 1999, http://www.qualitydigest.com/jan99/html/body_Weibull.html.
18. S. F. Lipson and J. L. Katz, “The relationship between elastic properties and microstructure of bovine cortical bone,” Journal of Biomechanics, vol. 17, no. 4, pp. 231–240, 1984.
Smoothing temporally correlated data
Something I have been doing a lot of work with recently are time series data, to which I have been fitting additive models to describe trends and other features of the data. When modelling temporally
dependent data, we often need to adjust our fitted models to account for the lack of independence in the model residuals. When smoothing such data, however, there is an additional problem that needs
to be addressed when we are determining the complexity of the fitted smooths as part of the model fit.
Unless we specifically tell the software that the data aren't independent it will perform smoothness selection assuming that we have \( n \) independent observations. The risk then is that
too-complex a smooth term is fitted to the data — it is no-longer a case of updating the fitted model, the model itself will be over-fitted. In this post I want to illustrate the problem of smoothing
correlated data with an example from a chapter in a text book that a reviewer alerted to me to some time back.
The example comes from Kohn, Schimek and Smith (2000) that I have cooked up using R. Kohn et al consider the model \( f(x_{t}) = 1280 x_{t}^4 (1 - x_{t})^4 \), where \( t = 1, 2, \ldots, 100 \), and
\( x_{t} = t/100 \). To this, errors \( e_{t} \) are generated from a first-order auto-regressive (AR(1)) process with \( \phi_{1} = 0.3713 \) to produce a random sample from the model such that \( y_{t} = f(x_{t}) + e_{t} \).
n <- 100
time <- 1:n
xt <- time/n
Y <- (1280 * xt^4) * (1- xt)^4
y <- as.numeric(Y + arima.sim(list(ar = 0.3713), n = n))
The arima.sim() function is used to generate the appropriate AR(1) errors. A plot of this sample of data and the true function is shown below
To these data, I will fit a cubic smoothing spline via smooth.spline() and an additive model via gam() in package mgcv. In addition, let us assume that we don’t know the exact nature of the
dependence in the data but we know that they are temporally correlated so that we can fit a model that includes a plausible correlation structure. For that, I will use an additive model with an AR(1)
correlation structure, fitted using a linear mixed effects representation of the additive model via the gamm() function, also from the mgcv package. gamm() uses the lme() function from the nlme
package. I will arrange for the value of \( \phi_{1} \) to be estimated as one of the model parameters, whilst the degree of smoothness is being estimated during fitting. The three models are fitted with
the following three lines of R code:
m1 <- smooth.spline(xt, y)
m2 <- gam(y ~ s(xt, k = 20))
m3 <- gamm(y ~ s(xt, k = 20), correlation = corAR1(form = ~ time))
The three model fits are shown in the figure below
Both the cubic smoothing spline and the additive model overfit the data, resulting in very complex smooth functions using 34.25 and 16.82 degrees of freedom respectively. The additive model with AR
(1) errors does a very good job of retrieving the true function from which the data were generated, only really deviating from this function at low values of \( x_{t}\) where there are few data to
constrain the fit. The code used to produce the figure is shown below
edf2 <- summary(m2)$edf
edf3 <- summary(m3$gam)$edf
plot(y ~ xt, xlab = expression(x[t]), ylab = expression(y[t]))
lines(Y ~ xt, lty = "dashed", lwd = 1)
lines(fitted(m1) ~ xt, lty = "solid", col = "darkolivegreen", lwd = 2)
lines(fitted(m2) ~ xt, lty = "solid", col = "red", lwd = 2)
lines(fitted(m3$lme) ~ xt, lty = "solid", col = "midnightblue", lwd = 2)
legend("topright", ## position assumed; the opening of this call was lost
       legend = c("Truth",
                  paste("Cubic spline (edf = ", round(m1$df, 2), ")", sep = ""),
                  paste("AM (edf = ", round(edf2, 2), ")", sep = ""),
                  paste("AM + AR(1) (edf = ", round(edf3, 2), ")", sep = "")),
       col = c("black", "darkolivegreen", "red", "midnightblue"),
       lty = c("dashed", rep("solid", 3)),
       lwd = c(1, rep(2, 3)),
       bty = "n", cex = 0.8)
The intervals() function can be used to extract the estimate for \( \phi_{1} \) and a 95% confidence interval on the estimate:
> intervals(m3$lme, which = "var-cov") ## edited for brevity
Correlation structure:
lower est. upper
Phi 0.1705591 0.4032966 0.5934125
Despite being somewhat imprecise, the estimate, \( \hat{\phi}_{1} = 0.4033 \), is very close to the known value used to generate the sample of data.
Whilst being a little contrived (I purposely increased the basis dimension on the basic additive model to k = 20 [otherwise the fit with the default k is close to the model with AR(1) errors!], and use
GCV smoothness selection rather than the better performing ML or REML methods available in gam()), the example shows quite nicely the problems associated with smoothness selection when fitting
additive model to dependent data. If you know something about the system under study and the sort of variation in the data one might expect to observe, an alternative approach to fitting an additive
model to dependent data would be to fix the smoothness at an appropriate, low value. To perform any subsequent inference on the fitted model, we would have to estimate a correlation matrix from the
residuals of that model using a time series model and use that to update the covariance matrix of the fitted additive model. I’m still working on how to do that last bit with gam() and mgcv.
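As a first step in that direction (a sketch only, not the covariance update itself), one could estimate the AR(1) coefficient from the residuals of the over-fitted model and inspect the residual autocorrelation:

## Sketch: estimate phi_1 from the residuals of m2; this is the input to
## the covariance update described above, not the update itself.
res <- resid(m2)
(phi.hat <- ar(res, order.max = 1, aic = FALSE)$ar)
acf(res)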
Kohn R., Schimek M.G., Smith M. (2000) Spline and kernel regression for dependent data. In Schimek M.G. (Ed) (2000) Smoothing and Regression: approaches, computation and application. John Wiley &
Sons, Inc.
By Gavin Simpson
21 July 2011
Time series
Additive models | {"url":"http://www.fromthebottomoftheheap.net/2011/07/21/smoothing-temporally-correlated-data/","timestamp":"2014-04-21T09:37:14Z","content_type":null,"content_length":"31270","record_id":"<urn:uuid:209bb381-5a29-4284-9231-1222f091505a>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00069-ip-10-147-4-33.ec2.internal.warc.gz"} |
how do you know if a polynomial is irreducible
February 17th 2009, 03:13 PM #1
Junior Member
Apr 2008
namely, that x^4+x+1 is irreducible in the ring of polynomials with integer coefficients mod 2.
i was able to find a field of 16 elements...
{0,1,x,x+1,...., x^3+x^2+x+1}
What is a primitive element for this field?
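A sketch of the standard argument, in case it helps (working over $\mathbb{Z}_2$): $x^4+x+1$ has no roots in $\mathbb{Z}_2$ (both $0$ and $1$ give $1$), so it has no linear factors; and the only irreducible quadratic over $\mathbb{Z}_2$ is $x^2+x+1$, with $(x^2+x+1)^2 = x^4+x^2+1 \neq x^4+x+1$, so there is no factorisation into quadratics either. Hence $x^4+x+1$ is irreducible. It is in fact a primitive polynomial, so $x$ itself is a primitive element of your 16-element field: using $x^4 = x+1$, the order of $x$ divides $15$, but $x^3 \neq 1$ and $x^5 = x^2+x \neq 1$, so the order is $15$ and the powers of $x$ run through all nonzero elements.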
Algebraic Problem Need Solve Urgent
June 18th 2008, 09:27 PM #1
Jun 2008
Algebraic Problem Need Solve Urgent
I am bothered with
a^3 + b^3 = 91
a^2 + b^2 = 61
Find the values of "a" and "b",
where the 3 and the 2 denote a cube and a square.
Please help me to solve this problem
If you mean $a^3 + b^3 = 91,\ a^2 + b^2 = 61$, are a and b integers? Or are they real numbers?
By trial and error you can get one set though: a = -5 and b = 6
put $u=a+b$ and $v=ab$, then the equations become:

$u^3 - 3uv = 91$ and $u^2 - 2v = 61$.

Dividing the first by $u$ and subtracting the second gives:

$<br /> v=\frac{61u-91}{u}<br />$

Now substitute this back into $u^2-2v=61$ to get:

$u^3 - 183u + 182 = 0$

which is a cubic in $u$; solve this, then use these solutions to find the corresponding $v$'s and so the $a$'s and $b$'s. (Solution of the cubic will be eased by using Isomorphism's result.)
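For completeness, a sketch of how Isomorphism's solution $a=-5$, $b=6$ (that is, $u=1$) finishes the cubic: $u^3-183u+182=(u-1)(u^2+u-182)=0$, so $u \in \{1,\ 13,\ -14\}$. For $u=13$, $v=\frac{61\cdot 13-91}{13}=54$ and $t^2-13t+54=0$ has negative discriminant; similarly $u=-14$ gives complex $a,b$. The only real solution is therefore $u=1$, $v=-30$: $a$ and $b$ are the roots of $t^2-t-30=(t-6)(t+5)=0$, giving $\{a,b\}=\{-5,6\}$.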
I just wanted to mention that those two solutions posted by Captain Black were excellent.
Integrated Algebra
Course Information
Instructor: Mrs. Gregorius
40 Week Course (2 Semesters) - 1 High School Credit
Algebra is the first of three math courses designed to satisfy a sequence toward the Regents Diploma with Advanced Designation. The main area of study is algebra with some coordinate geometry, right
triangle trigonometry, probability and statistics. Students will take the Algebra Regents examination in June. | {"url":"http://www.wccsk12.org/wingspan/algebra.html","timestamp":"2014-04-18T16:27:06Z","content_type":null,"content_length":"3432","record_id":"<urn:uuid:48ba4fd9-dfe1-4034-bc82-9d895f7f3503>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
Information-Theoretic Sequence Alignment
Technical Report 98/14
© L. Allison,
School of Computer Science and Software Engineering,
Monash University,
Australia 3168.
15 June 1998
│ NB. A related, and much more detailed paper on this topic appears in: │
│ L.Allison, D.Powell & T.I.Dix, Compression and Approximate Matching, the Computer Journal, [OUP], 42(1), pp1-10, 1999. │
Abstract: The sequence alignment problem is reexamined from the point of view of information theory. The information content of an alignment is split into two parts, that due to the differences
between the two sequences and that due to the characters of the sequences. This allows the appropriate treatment of the non-randomness of sequences that have low information content. Efficient
algorithms to align sequences in this way are given. There is a natural significance test for the acceptability of alignments. Under the new method, the test is unlikely to be fooled into believing
that unrelated sequences are related simply because they share statistical biases.
Keywords: data compression, edit distance, information content, sequence alignment, sequence analysis.
An alignment of two (or more) sequences is here treated as a hypothesis; it is just a possible way in which characters of one sequence may be matched with characters of another sequence. It shows one
way, out of many, to `edit' one sequence into the other. All other things being equal, an alignment corresponding to fewer mutations, i.e. insertions, deletions and changes, is more probable. Here it
is shown how to include the probabilities of the characters themselves in the alignment process. These probabilities can vary from character to character in low information content sequences. Using
this knowledge in alignment allows a more reliable assessment of whether or not two sequences are (probably) related.
It is a well known difficulty that alignments of unrelated but low information content sequences can give unreasonably low costs or, equivalently, high scores. This can result in `false positive'
matches when searching against a sequence database, for example. It can also lead to poor alignments even in cases of true positives. A partial solution is to mask-out low information content regions
altogether before processing, as described by Wootton (1997). This is drastic and cannot be used if one is interested in the low information content regions. Wootton uses `compositional complexity'
defined as a moving average of the complexity under the multi-state distribution (Boulton and Wallace 1969).
An alignment is formed by padding out each sequence with zero or more null characters `-', until they have the same length. The sequences are then written out, one above the other.
│ e.g. ACGTACGTA-GT │
│ || | ||| || │
│ AC--ATGTACGT │
Each column or `pair' contains two characters. The pair <-,-> is not allowed. A pair <x,x> represents a match or copy, <x,y> represents a change, <x,-> a deletion and <-,x> an insertion. The pairs
represent point-mutations of the sequences or equivalently `operations' on characters to edit one sequence into the other. Given a cost or score function on alignments one can search for an optimal
alignment. The cost (score) usually takes the form of costs (scores) for the individual operations.
There are a great many sequence alignment methods. In general terms they attempt to maximise the number of matches, or to minimise the number of mutations, or to do both in some combination
(Needleman and Wunsch 1970). The longest common subsequence (LCS) problem (Hirschberg 1975) is to maximise the number of matching characters in an alignment. The edit distance problem (Levenshtein
1965, Sellers 1974) is to minimise the number of mutations - insertions, deletions and changes - in an alignment. If probabilities are assigned to mutations (Allison 1993) then one can talk of, and
search for, a most probable alignment.
Most existing alignment algorithms make the tacit assumption that all character values are equally likely in all positions of a sequence and are independent of each other. Equivalently the sequences
are assumed to be `random', that is, generated by a `uniform' zero-order Markov model. For DNA this amounts to giving each base a 2-bit code in data-compression terms. It is difficult to compress
typical DNA much below 1.9 bits/base or 1.8 bits/base, if the word `typical' has any meaning for DNA. Sophisticated models can yield compression to below 1.75 bits/base for human Haemaglobin region,
HUMHBB, for example (Loewenstern and Yianilos 1997, Allison et al 1998). However some subsequences are highly compressible, e.g. poly-A, (AT)*. Common subsequences, such as the Alu's (Bains 1986),
could also be given special codes to make them compressible.
This paper argues that any compressibility, i.e. non-randomness, in sequences should be used in their alignment. First, there is a natural null hypothesis that two given sequences, S1 and S2, are
unrelated. Its information content is the sum of the information content of each sequence:
│ P(S1&S2 |unrelated) = P(S1).P(S2), so │
│ -log2(P(S1&S2 |unrelated)) = -log2(P(S1)) + -log2(P(S2)) │
If the sequences are compressible then the null hypothesis is simpler in consequence. The null's converse is the hypothesis that the sequences are related. An alignment is a sub-hypothesis of the
latter: it shows one particular way in which the sequences could be related by mutation. In the extreme case that S1=S2, the optimal alignment has an information content only slightly greater than
half that of the null hypothesis, assuming that the sequences are reasonably long. As S1 and S2 grow further apart, the information content of even an optimal alignment increases until it exceeds the
null's, at which point the alignment is not `acceptable'. Thus an alignment can only compete fairly with the null hypothesis if it can take advantage of any compressibility of the sequences. Later
sections show how to realise this.
If sequences S1 and S2 are drawn from a (statistical) family of compressible sequences, e.g. they are AT-rich, then a simple algorithm may find a `good' alignment and conclude that S1 and S2 are
`related' when the only sense in which this is true is that they share certain statistical biases. Algorithms described below use the compressibility of such sequences and give more reliable
inferences of the true picture. The examples are of DNA sequences but the method and algorithms are obviously applicable to other alphabets and sequences.
Alignment of Random Sequences
This section recalls the information-theoretic alignment of random, incompressible sequences. The following section shows how the compressibility of non-random sequences can be treated.
Figure 1 shows a variation of the well-known dynamic programming algorithm (DPA) to calculate the probability of a most probable alignment of sequences S1 and S2. Illustrated is the algorithm for
simple point-mutations but linear gap-costs and other gap-costs (Gotoh 1982) can be given a similar treatment. The alignment itself is recovered by a trace-back of the choices made by the `max'
function. In practice, algorithms work with the -log2s of probabilities, i.e. information content, and therefore `+' replaces `*' and `min' replaces `max'. This gives values in a better range for
computers to handle.
│ M[0,0] = 1 │
│ │
│ M[i,0] = M[i-1,0]*P(<S1[i], ->) i = 1..|S1| │
│ │
│ M[0,j] = M[0,j-1]*P(<-, S2[i]>) j = 1..|S2| │
│ │
│ M[i,j] = max(M[i-1, j-1]*P(<S1[i], S2[j]>) │
│ M[i-1, j ]*P(<S1[i], - >) │
│ M[i, j-1]*P(< -, S2[j]>)) i=1..|S1|, j=1..|S2| │
│ │
│ where P(<x,x>) = probability of match / copy │
│ P(<x,y>) = probability of mismatch / change │
│ P(<x,->) = probability of deletion │
│ P(<-,x>) = probability of insertion │
│ │
│ Figure 1: Dynamic Programming Algorithm (DPA), most probable alignment. │
A `pair' from an alignment, e.g. <x,x> is given a probability, say 0.8*1/4 reflecting the chance of a copy, 0.8 say, and the probability of the character `x', 1/4 say. In previous work (Allison et al
1992) the explicit assumption was made that all DNA bases were equally likely, i.e. a character `x' had probability 1/4, and that two differing characters had probability 1/12. The probabilities of
copy, change, insertion and deletion were estimated by an expectation maximisation process (Baum and Eagon 1967, Baum et al 1970).
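By way of illustration, a minimal Python implementation of this -log2 formulation under the uniform model follows; the operation probabilities used are purely illustrative, not fitted values:

    import math

    def alignment_bits(s1, s2, p_copy=0.55, p_change=0.15, p_ins=0.15, p_del=0.15):
        # -log2 probability (bits) of an optimal alignment under the uniform
        # character model: each base has probability 1/4, an ordered pair of
        # differing bases 1/12.
        bits = lambda p: -math.log2(p)
        c_copy, c_change = bits(p_copy / 4.0), bits(p_change / 12.0)
        c_del, c_ins = bits(p_del / 4.0), bits(p_ins / 4.0)
        n, m = len(s1), len(s2)
        M = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            M[i][0] = M[i - 1][0] + c_del
        for j in range(1, m + 1):
            M[0][j] = M[0][j - 1] + c_ins
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = c_copy if s1[i - 1] == s2[j - 1] else c_change
                M[i][j] = min(M[i - 1][j - 1] + diag,
                              M[i - 1][j] + c_del,
                              M[i][j - 1] + c_ins)
        return M[n][m]

    print(alignment_bits("ACGTACGTAGT", "ACATGTACGT"))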
Alignment of Non-Random Sequences
A family of sequences is called `non-random', or equivalently of `low information content', if there is some statistical model and an associated compression algorithm that, on average, compresses
members of the family more than does the uniform model. It is being realised that compression is important in sequence analysis (e.g. Grumbach and Tahi 1994) not so much to save on data communication
costs or on disc space as to advance understanding of the nature of molecular sequences. The following terms are all aspects of the same notion: repetition, structure, pattern, signal, motif,
non-randomness, compressibility, low information content. Informally, there are two kinds of `boring' sequence: (i) random sequences with no structure as `typed by a monkey' which, counter to
intuition, have high information content and (ii) very regular sequences, such as AAAAAA....., which have almost zero information content. `Interesting' sequences have some structure, i.e. have
moderate information content, perhaps containing a mixture of low and high information content subsequences.
Assuming that runs of repeated characters are common, figure 2 shows local alignments of low (i) and high (ii) information content subsequences.
│ ...AAAA... ....ACGT.... │
│ |||| |||| │
│ ...AAAA... ....ACGT.... │
│ │
│ (i) (ii) │
│ │
│ Figure 2: Alignment of Low and High Information Content Regions. │
Both partial alignments are good, but it is intuitively obvious that (ii) is better than (i) because ACGT is less likely to occur twice by chance than AAAA, under the model. If an alignment of long
sequences containing the fragments could contain (i) or (ii) but not both, it would be better for it to contain (ii). However, if alphabetically ordered runs were common and repeated runs were not
then the roles would be reversed! In examples below, it is often assumed that runs of repeated characters are common. This model is only used because it is the simplest possible example to give low
information content sequences. The alignment method that is being described is not limited to this case and the use of `repeated-runs' is not meant to imply that it is a good model of DNA, say.
Alignments give an indication of how much information one sequence gives about another:
│ P(S1&S2 |related) = P(S1).P(S2 |S1 & related) │
│ = P(S2).P(S1 |S2 & related) │
If S1 and S2 are compressible, we could encode them both by first encoding S1, compressing it of course, and then encoding S2 given S1, e.g. by encoding the editing operations. However, there are now
two sources of information about S2: first, the `alignment' information from its presumed relative S1 and, second, information from the compressibility of S2 itself. It is not obvious how to combine
these two sources. If we used only the alignment information on the second sequence, it is likely that the resulting calculated values for P(S1).P(S2|S1&related) and P(S2).P(S1|S2&related) would
differ, which is clearly unsatisfactory. We therefore seek a more symmetrical method.
Algorithmic Considerations
It is important that any new cost function on alignments should lead to an efficient search algorithm for optimal alignments. The line of attack is to average S1 and S2 in a certain way; an alignment
can be thought of as a `fuzzy' sequence representing all possible intermediate sequences `between' S1 and S2. A good strategy is to avoid committing (strongly) to S1 or to S2 at a step (i,j) because
this would introduce nuisance parameters. We also avoid introducing hidden variables because we should (ideally) integrate over all their possible values. Doing only a bounded amount of computation
for each entry M[i,j] is desirable to maintain the O(n**2) time complexity of the DPA. These objectives are met if the calculation of character probabilities for S1[i] and S2[j] is not conditional on
the choice of alignment from M[0,0] to M[i,j] but depends only on S1[1..i-1] and on S2[1..j-1].
Insertions and deletions are the easiest operations to deal with: At some point (i,j) in the DPA the probability of deleting S1[i] is evaluated. This is defined to be the product of P(delete) and the
probability of the character S1[i] at position i in S1, i.e. P(S1[i]|S1[1..i-1]). What form the last term takes depends entirely on the compressibility of S1. Insertions are treated in a similar way:
│ deletion: P(<S1[i],->) = P(delete).P(S1[i] |S1[1..i-1]) │
│ insertion: P(<-,S2[j]>) = P(insert).P(S2[j] |S2[1..j-1]) │
A consequence of these definitions is that insertions (and deletions) are no longer all equal. For example, in figure 3 and assuming that repeated runs are common, deletion (i) costs less than
deletion (ii) because the base G is more surprising in its context. Hence, there would be a greater benefit if the G in S1 could be matched with a G in S2 than if the unmatched A at (i) could be
│ S1: ....AAAAAAAAAGAAAA.... │
│ |||| |||| |||| │
│ S2: ....AAAA-AAAA-AAAA.... │
│ ^ ^ │
│ (i) (ii) │
│ │
│ Figure 3: Example Insertions. │
Copies and changes are a little more complex to deal with than insertions and deletions because the former involve a character from each sequence, S1[i] and S2[j], and this gives three sources of
information to reconcile: (i) the fact that the characters are the same, or not, (ii) the compressibility of S1 and (iii) the compressibility of S2. The approach taken with copies is to average the
predictions from S1 and from S2 of the character involved, x=S1[i]=S2[j].
│ copy: P(<S1[i],S2[j]> & S1[i]=S2[j]=x) │
│ = P(copy) │
│ .(P(S1[i]=x|P(S1[1..i-1]))+P(S2[j]=x|S2[1..j-1]))/2 │
For example in figure 4, the effect is to place more emphasis on the prediction from S2 for copy (i), and more emphasis on S1 for copy (ii), assuming that runs of repeats are common.
│ S1: ....AAAAGAAAA....AAAAAAAAA.... │
│ | | │
│ S2: ....GGGGGGGGG....GGGGAGGGG.... │
│ ^ ^ │
│ (i) (ii) │
│ │
│ Figure 4: Example Copies/ Matches. │
Changes are treated in a similar way to copies, complicated by the fact that S1[i] and S2[j] differ. Briefly, the prediction of x=S1[i] from S1 is multiplied by that of y=S2[j] from S2 after the
latter has been renormalised, because y cannot not equal x, and this is averaged with the mirror calculation that swaps S1 and S2:
│ change: │
│ P(<S1[i],S2[j]> & S1[i]=x & S2[j]=y) where x~=y │
│ │
│ = P(change) │
│ .( P(S1[i]=x|S1[1..i-1]).P(S2[j]=y|S2[1..j-1]) │
│ / (1-P(S2[j]=x|S2[1..j-1])) │
│ + P(S2[j]=y|S2[1..j-1]).P(S1[i]=x|S1[1..i-1]) │
│ / (1-P(S1[i]=y|S1[1..i-1])) )/2 │
│ │
│ = P(change).P(S1[i]=x|S1[1..i-1]).P(S2[j]=y|S2[1..j-1]) │
│ .( 1/(1-P(S1[i]=y|S1[1..i-1])) │
│ + 1/(1-P(S2[j]=x|S2[1..j-1])) )/2 │
A result of this treatment is, for example, that the change in figure 5 is mainly, but not totally, treated as a case of S2 having an `anomaly' (C) (as always, assuming that repeated runs are common).
│ S1: ....AAAAA... │
│ || || │
│ S2: ....AACAA... │
│ ^ │
│ │
│ Figure 5: Example Change/ Mismatch. │
There may be other reasonable ways of combining the predictions of character values from S1 and from S2 but the above method is symmetrical with respect to S1 and S2 and it meets the requirement of M[i,j] only depending on S1[1..i-1] and S2[1..j-1], not on a choice of alignment to M[i,j], and thus leads to an O(n**2) time DPA. The basic algorithm uses O(n**2) space for M[,] but Hirschberg's
(1975) technique can be used to reduce this to O(n) space while maintaining the O(n**2) time complexity. Linear gap costs and other gap costs can clearly be given the same treatment as the point
mutations above.
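For concreteness, the four step probabilities defined above can be written down directly. The following is a minimal sketch (illustrative Python, not the authors' code): the dictionaries p1 and p2 stand for the predictive distributions P(S1[i]=·|S1[1..i-1]) and P(S2[j]=·|S2[1..j-1]) that some sequence model is assumed to supply, and pc, pch, pd, pi for the machine parameters P(copy), P(change), P(delete) and P(insert).

```python
# Step probabilities for the modified DPA (names are this sketch's choices).

def p_delete(pd, p1, x):            # <S1[i], ->
    return pd * p1[x]

def p_insert(pi, p2, y):            # <-, S2[j]>
    return pi * p2[y]

def p_copy(pc, p1, p2, x):          # <x, x>: average the two predictions
    return pc * (p1[x] + p2[x]) / 2

def p_change(pch, p1, p2, x, y):    # <x, y>, x != y: renormalise, then average
    assert x != y
    return pch * p1[x] * p2[y] * (1 / (1 - p1[y]) + 1 / (1 - p2[x])) / 2

# Example: S1's context strongly predicts 'A', S2's strongly predicts 'G'.
p1 = {'A': 0.7, 'C': 0.1, 'G': 0.1, 'T': 0.1}
p2 = {'A': 0.1, 'C': 0.1, 'G': 0.7, 'T': 0.1}
print(p_copy(0.4, p1, p2, 'A'), p_change(0.2, p1, p2, 'A', 'G'))
```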
The modified DPA described above only requires a method of obtaining the probability of each character value occurring at each position of sequences S1 and S2:
│ P(S1[i]=x|S1[1..i-1]), i=1..|S1| and │
│ P(S2[j]=y|S2[1..j-1]), j=1..|S2| │
│ │
│ where x and y range over the alphabet. │
Many statistical models of sequences and most data compression algorithms, for example order-k Markov Models and the well-known Lempel-Ziv (1976) model, can easily be adapted to deliver such
probabilities. The probabilities can be obtained in a preliminary pass over S1 and one over S2 and stored for later use by the DPA. The passes also give the information content of S1&S2 under the
null hypothesis. Since the DPA has O(n**2) time complexity, the passes can take up to a similar amount of time without worsening the overall complexity.
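As an illustration of such a model, here is a sketch of an adaptive order-1 Markov predictor with Laplace (+1) smoothing (illustrative code; the function name and the smoothing rule are choices of this sketch, not of the paper). It delivers exactly the per-character probabilities the DPA needs and, summed as -log2 p, the information content of a sequence under the null hypothesis:

```python
import math

def adaptive_markov1(s, alphabet="ACGT"):
    """P(s[i] | s[:i]) from an adaptive order-1 Markov model with +1
    (Laplace) smoothing; the prediction at position i uses only s[:i]."""
    counts = {a: {b: 1 for b in alphabet} for a in alphabet}
    probs = []
    for i, c in enumerate(s):
        if i == 0:
            probs.append(1.0 / len(alphabet))         # no context at the start
        else:
            row = counts[s[i - 1]]
            probs.append(row[c] / sum(row.values()))  # predict before counting
            counts[s[i - 1]][c] += 1                  # then learn the transition
    return probs

s1 = "AAAATAAAAGAAAA"
bits = sum(-math.log2(p) for p in adaptive_markov1(s1))
print(round(bits, 2), "bits: information content of S1 under this null model")
```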
Figure 6 shows an example first-order Markov model, MMg, that was used to generate AT-rich artificial DNA sequences; this is not meant to imply that it is a model of real DNA.
│ MMg: A C G T │
│ +------------------- P(S[i]|S[i-1]) │
│ A |1/12 1/12 1/12 9/12 │
│ | │
│ C |9/20 1/20 1/20 9/20 │
│ S[i-1] | │
│ G |9/20 1/20 1/20 9/20 │
│ | │
│ T |9/12 1/12 1/12 1/12 │
│ │
│ Figure 6: AT-rich Generating Model MMg. │
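Figure 6 translates directly into code. The sketch below (illustrative only; the function names are not from the paper) encodes MMg, samples an AT-rich sequence from it, and computes the sequence's information content under the model:

```python
import math, random

MMg = {  # rows: S[i-1]; entries: P(S[i] | S[i-1]) as in Figure 6
    'A': {'A': 1/12, 'C': 1/12, 'G': 1/12, 'T': 9/12},
    'C': {'A': 9/20, 'C': 1/20, 'G': 1/20, 'T': 9/20},
    'G': {'A': 9/20, 'C': 1/20, 'G': 1/20, 'T': 9/20},
    'T': {'A': 9/12, 'C': 1/12, 'G': 1/12, 'T': 1/12},
}

def sample(n, start='A'):
    """Draw an AT-rich sequence of length n from MMg."""
    out, prev = [], start
    for _ in range(n):
        prev = random.choices(list(MMg[prev]), weights=MMg[prev].values())[0]
        out.append(prev)
    return ''.join(out)

def info_content(s):
    """-log2 probability of s[1:] given s[0], in bits."""
    return sum(-math.log2(MMg[a][b]) for a, b in zip(s, s[1:]))

seq = sample(100)
print(seq[:40], "...", round(info_content(seq), 1), "bits")
```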
100 pairs of sequences were generated from the model. Each sequence was of length 100 and was unrelated to, and independent of, the other 199 sequences, except in sharing their general statistical
biases. Each pair of sequences was aligned under three different assumptions: that the sequences were drawn from (i) a uniform model, (ii) the known, fixed model MMg, and (iii) an adaptive order-1
Markov model where the parameter values were not known in advance. In each case, the information content of the null hypothesis and of an optimal alignment were calculated under the appropriate
model. The difference of these quantities gives the -log-odds ratio of the hypotheses. These values and the number of pairs that were inferred to be related `+' or unrelated `-', are shown in table 1.
│ -log odds ?related? │
│ method null:alignment inference │
│ (i) uniform/uniform 13.3 +/- 13.3, 15-, 85+ │
│ (ii) MMg/MMg -44.1 +/- 7.7, 100-, 0+ │
│ (iii) MM1/MM2 -30.4 +/- 7.3, 100-, 0+ │
│ │
│ Table 1: Unrelated Sequences, Alignment with 3 Models. │
Under assumption (i), 85 out of 100 pairs are inferred to be related under an optimal alignment, because it is easy to find alignments of unrelated sequences having a high number of matches just
because of the AT-richness. Alignment with knowledge of the true model (ii) reveals the true picture; the null hypothesis has low information content and alignments cannot better it. Assumption (iii)
also implies that the pairs are unrelated, but is 14 bits less sure than (ii) on average because the model's parameter values must be inferred from the data.
Similar tests were done using other generating models and with related and unrelated pairs of sequences. The unsurprising conclusion is that alignment using the best available model of the sequences
gives the most reliable results.
There is yet another approach to aligning low information content sequences: It is assumed that the given sequences are different noisy observations of one `true' sequence which is treated as a
hidden variable. This is close to the situation in the sequence assembly problem also known as the shortest common supersequence (SCSS) problem. An O(n**2) time alignment algorithm is possible for
`finite-state' models of sequences (Powell et al 1998), e.g. order-k Markov models, although k is limited to 0, 1 or 2 in practice.
There is a common technique that is used to correct partially the alignment costs (equivalently scores) of repetitive sequences in a non-information-theory setting: The sequences are aligned and get
a certain cost (score). Each sequence is then permuted at random and the jumbled sequences are aligned and get a different cost (score). The original alignment is not considered to be significant
unless its cost (score) is significantly better than that of the second. A jumbled sequence has the same zero-order Markov model statistics as the original but has different high-order statistics. It
is hard to imagine how to jumble a sequence while preserving its statistics under an arbitrary statistical model. In contrast, the information theoretic alignment method described here compares the
information content of the unaltered sequences under two hypotheses: null (unrelated) and alignment (related as shown). Absolutely any statistical model of sequences can be used with the new method,
provided only that it can give the information content of a sequence character by character.
Finally, this paper begs the question of what is a good model of sequences to use in the new alignment algorithm. The only possible answer is that `it just depends' on the nature of what is being
aligned. Provided that there is enough data, i.e. pairs of related sequences from the same source, then two or more models of sequences can be compared: alignment with the better model will give the
greater compression on average. Statistical modelling of DNA sequences, in particular, is still a research area and a detailed discussion of it is beyond the scope of this paper. As noted, the
alignment algorithm can be used with a wide range of sequence models. For example, an approximation to Wootton's compositional complexity is a model that bases the prediction of the next symbol on
the frequencies of symbols in the immediately preceding `window'; such models are common in data compression and are sometimes described as `forgetful' or `local'. Using whatever model, the new DPA
does not mask-out low information content subsequences but rather gives them their appropriate (low) weight. If something is known about the source of the data then this knowledge should be built
into the models. If proteins are being aligned the model should at least include the bias in the use of the alphabet. If little is known about the source of the data then a rather `bland' model can
be used. It can have parameters that are fitted to the data, but the number of parameters should be small compared to the lengths of the sequences.
┃ See Also: ┃
┃ L.Allison, D.Powell & T.I.Dix, ┃
┃ Compression and Approximate Matching. ┃
┃ Computer Journal, [OUP], 42(1), pp1-10, 1999 ┃
[All93] L. Allison (1993). Normalization of affine gap costs used in optimal sequence alignment. Jrnl. Theor. Biol. 161 263-269.
[All92] L. Allison, C. S. Wallace and C. N. Yee (1992). Finite-state models in the alignment of macromolecules. Jrnl. Molec. Evol. 35 77-89.
[All98] L. Allison, T. Edgoose and T. I. Dix (1998). Compression of strings with approximate repeats. Proc. Intelligent Systems in Molecular Biology, ISMB98, AAAI Press, 8-16.
[Bai86] W. Bains (1986). The multiple origins of the human Alu sequences. J. Mol. Evol. 23 189-199.
[Bau67] L. E. Baum and J. E. Eagon (1967). An inequality with applications to statistical estimation for probabilistic functions of Markov processes and to a model of ecology. Bulletin of AMS 73
[Bau70] L. E. Baum, T. Petrie, G. Soules and N. Weiss (1970). A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Annals Math. Stat. 41
[Bou69] D. M. Boulton and C. S. Wallace (1969). The information content of a multistate distribution. J. Theor. Biol. 23 269-278.
[Got82] O. Gotoh (1982). An improved algorithm for matching biological sequences. Jrnl. Molec. Biol. 162 705-708.
[Gru94] S. Grumbach and F. Tahi (1994). A new challenge for compression algorithms: genetic sequences. Inf. Proc. and Management 30(6) 875-886.
[Hir75] D. S. Hirschberg (1975). A linear space algorithm for computing maximal common subsequences. Comm. Assoc. Comp. Mach. 18(6) 341-343.
[Lem76] A. Lempel and J. Ziv (1976). On the complexity of finite sequences. IEEE Trans. Info. Theory IT-22 783-795.
[Lev65] V. I. Levenshtein (1965). Binary codes capable of correcting deletions, insertions and reversals. Doklady Akademii Nauk SSSR 163(4) 845-848 (trans. Soviet Physics Doklady 10(8) 707-710, 1966).
[Loe97] D. Loewenstern and P. N. Yianilos (1997). Significantly lower entropy estimates for natural DNA sequences. Data Compression Conference DCC '97, 151-160.
[Nee70] S. B. Needleman and C. D. Wunsch (1970). A general method applicable to the search for similarities in the amino acid sequence of two proteins. Jrnl. Mol. Biol. 48 443-453.
[Pow98] D. Powell, L. Allison, T. I. Dix and D. L. Dowe (1998). Alignment of low information sequences. Proc. 4th Australasian Theory Symposium, (CATS '98), Perth, 215-229, Springer Verlag.
[Sel74] P. H. Sellers (1974). An algorithm for the distance between two finite sequences. Jrnl. Combinatorial Theory 16 253-258.
[Woo97] J. C. Wootton (1997). Simple sequences of protein and DNA. In DNA and protein Sequences Analysis, M. J. Bishop and C. J. Rawlings (eds), IRL Press, 169-183.
See Also:
• L. Allison, C. S. Wallace & C. N. Yee. Inductive Inference over Macro-Molecules, TR90/148, Department of Computer Science, Monash University, Australia, November 1990.
• L. Allison, T. Edgoose & T. I. Dix. Compression of Strings with Approximate Repeats, (HTML version), Intell. Sys. in Mol. Biol., pp8-16, Montreal, 28 June - 1 July 1998.
• L. Allison, D. Powell, & T. I. Dix. Compression and Approximate Matching, Computer Journal, 42(1), pp1-10, 1999.
• L. Allison, L. Stern, T. Edgoose & T. I. Dix. Sequence Complexity for Biological Sequence Analysis, Computers and Chemistry 24(1), pp43-55, Jan' 2000.
• General [Computational Biology] page.
• [Bibliography].
volume of cone proving
This works just like calculus, except you don't actually use an integral.
The strategy is to find the ratio of the volumes of a cone and its respective cylinder, since once you find that, it remains constant no matter how you scale them.
Let the radius of the base = r, and without loss of generality (you'll see why), assume that r is an integer. Let the height = r also (to spare us of complexity)
We can approximate the volume of the cone by adding the volumes of r discs, where each disk has an integral radius from 1 to r.
(Ex, the first disk has radius 1, the next has 2, the next has 3 etc... the last has radius r)
Let the height of each disk be 1.
The volume is therefore:

π·1² + π·2² + π·3² + ... + π·r² (each disk has height 1)

= π times the sum of the first r perfect squares

= π·r(r+1)(2r+1)/6, by the formula for the sum of the first r perfect squares.

The volume of a cylinder with the same base and height is π·r²·r = π·r³.

The ratio of the volume of the cone to the cylinder is therefore

[π·r(r+1)(2r+1)/6] / (π·r³) = (2r² + 3r + 1)/(6r²) = 1/3 + 1/(2r) + 1/(6r²).

Note that if we had used a height other than r, the height would've canceled out anyway once we divided by the volume of the cylinder.
Obviously, the more disks we have (the bigger r is), the more accurate the ratio will be.
As r gets very very big, 1/(2r) gets very very small, and 1/(6r²) gets very very small.
At this point, note that since r is getting very very big, the assumption that it is an integer gets less and less important, because we are taking an infinite number of "samples".
So effectively, the ratio is 1/3, and the volume of the cone is 1/3 the volume of the cylinder.
Volume of a cylinder is Area of base x Height, so the volume of a cone is 1/3 x Area of Base x Height
Again, these kinds of approximations work just like calculus and are the foundation of calculus, but you should be able to understand it without calculus.
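If it helps, here's a quick numeric check of the argument (illustrative Python; the function name is just for this example). The disk-sum over cylinder ratio visibly approaches 1/3 as r grows:

```python
import math

def ratio(r):
    cone_approx = sum(math.pi * k**2 for k in range(1, r + 1))  # r disks, height 1
    cylinder = math.pi * r**2 * r
    return cone_approx / cylinder

for r in (10, 100, 1000, 10_000):
    print(r, ratio(r))   # 0.385, 0.3384, 0.3338, ... tending to 1/3
```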
PS - what is the latex tag on this forum? I'm sick of writing without latex...
PPS - Thanks John
Last edited by God (2006-01-29 03:14:08)
The Nightmare of an Unsolved Problem
Back in the 1980s when I first met the Collatz conjecture in a number theory textbook it was stated this way: Start with any whole number n : If n is even, reduce it by half, obtaining n/2. If n is
odd, increase it by half and round up to the nearest whole number, obtaining 3n/2 + 1/2 = (3n+1)/2. Collatz' conjecture asserts that, no matter what the starting number, iteration of this
increase-decrease process will each time reach the number 1. For example, we have 13 → 20 → 10 → 5 → 8 → 4 → 2 → 1 (7 steps) 23 → 35 → 53 → 80 → 40 → 20 → 10 → 5 → 8 → 4 → 2 → 1 (11 steps) On the
other hand, when we use 27 as a starting number, it requires 70 steps before the sequence reaches 1; after climbing as high as 4616, it drops below 27 for the first time -- to 23 -- on the 59th step.
The Collatz conjecture (first proposed by Lothar Collatz in 1937) remains unsolved. During the last 20 years, the conjecture has been known by several names and is often called "the 3n + 1 conjecture
." Because I have spent many many sleepless hours working on the Collatz conjecture, when I describe it in a poem, my title is "A Mathematician's Nightmare." (Also in the background of my poem is
myself as unskilled bargain hunter -- getting a best buy rarely and always by accident. In the poetic case, at least, my mathematical knowledge helps me to know the right amount of time to wait for a
good price.)

A Mathematician's Nightmare
     by JoAnne Growney

Suppose a general store —
items with unknown values
and arbitrary prices,
rounded for ease
to whole-dollar amounts.

Each day Madame X,
keeper of the emporium,
raises or lowers each price —
exceptional bargains and anti-bargains.

Even-numbered prices divide by two,
while odd ones climb by half themselves —
then half a dollar more
to keep the numbers whole.

Today I pause before a handsome beveled mirror
priced at twenty-seven dollars.
Shall I buy or wait
for fifty-nine days
until the price is lower?

Randall Munroe at his webcomic site, xkcd.com, has a cartoon that also describes the nightmare of the Collatz Conjecture. Whereas my version of the conjecture calculates (3n+1)/2 when n is odd, Munroe's drawing refers to a version
that calculates (3n+1) when n is odd; then, in an additional step, divides that result by 2. In this "3n+1" version of the conjecture, the sequence that starts with 13 now has 9 steps instead of 7
and looks like this:

13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1 (9 steps)
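For the curious, here is a short script (illustrative Python) that counts steps under both versions; it should reproduce the counts quoted above: 7 steps for 13 under (3n+1)/2, 9 under the 3n+1 version, and 70 for 27 under the first.

```python
def steps_shortcut(n):
    """Steps to reach 1 under n -> n/2 (even), (3n+1)/2 (odd)."""
    count = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else (3 * n + 1) // 2
        count += 1
    return count

def steps_3n_plus_1(n):
    """Steps to reach 1 under n -> n/2 (even), 3n+1 (odd)."""
    count = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        count += 1
    return count

for n in (13, 23, 27):
    print(n, steps_shortcut(n), steps_3n_plus_1(n))
```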
MATH 2315
Calculus III
Math 2415
• State Approval Code: 2701015919
• Semester Credit Hours: 4
• Lecture Hours per Week: 3
• Contact Hours per Semester: 64
Catalog Description
Conic sections, polar equations, and their graphs, parametric equations, vector calculus, multivariable calculus, partial differentiation, double and triple integrals and applications of “Green’s
Theorem” and “Stoke’s Theorem.” Lab fee (2701015919) 4-3-3
Prerequisites: TSIP complete and Math 2414
Course Curriculum
Basic Intellectual Compentencies in the Core Curriculum
• Reading
• Writing
• Speaking
• Listening
• Critical thinking
• Computer literacy
Perspectives in the Core Curriculum
• Establish broad and multiple perspectives on the individual in relationship to the larger society and world in which he/she lives, and to understand the responsibilities of living in a culturally
and ethnically diversified world.
• Stimulate a capacity to discuss and reflect upon individual, political, economic, and social aspects of life in order to understand ways in which to be a responsible member of society.
• Recognize the importance of maintaining health and wellness.
• Develop a capacity to use knowledge of how technology and science affect their lives.
• Develop personal values for ethical behavior.
• Develop the ability to make aesthetic judgments.
• Use logical reasoning in problem solving.
• Integrate knowledge and understand the interrelationships of the scholarly disciplines.
Core Components and Related Exemplary Educational Objectives
Communication (composition, speech, modern language)
• To participate effectively in groups with emphasis on listening, critical and reflective thinking, and responding.
Mathematics
• To apply arithmetic, algebraic, geometric, higher-order thinking, and statistical methods to modeling and solving real-world situations.
• To represent and evaluate basic mathematical information verbally, numerically, graphically, and symbolically.
• To expand mathematical reasoning skills and formal logic to develop convincing mathematical arguments.
• To use appropriate technology to enhance mathematical thinking and understanding and to solve mathematical problems and judge the reasonableness of the results.
• To interpret mathematical models such as formulas, graphs, tables and schematics, and draw inferences from them.
• To recognize the limitations of mathematical and statistical models.
• To develop the view that mathematics is an evolving discipline, interrelated with human culture, and understand its connections to other disciplines.
Instructional Goals and Purposes
Panola College's instructional goals include 1) creating an academic atmosphere in which students may develop their intellects and skills and 2) providing courses so students may receive a
certificate/an associate degree or transfer to a senior institution that offers baccalaureate degrees.
General Course Objectives
Successful completion of this course will promote the general student learning outcomes listed below. The student will be able
1.To apply problem-solving skills through solving application problems.
2.To demonstrate arithmetic and algebraic manipulation skills.
3.To read and understand scientific and mathematical literature by utilizing proper vocabulary and methodology.
4.To construct appropriate mathematical models to solve applications.
5.To interpret and apply mathematical concepts.
6.To use multiple approaches - physical, symbolic, graphical, and verbal - to solve application problems
Specific Course Objectives
Major Learning Objectives
Essential Competencies
Upon completion of MATH 2415, the student will be able to demonstrate:
1)Competence in solving problems related to vectors in 2- and 3- dimensions and their applications
2)Competence in determining and writing equations of surfaces in space
3)Competence in solving problems related to functions in several variables
4)Competence in problems related to limits and continuity
5)Competence in determining the derivatives of various functions and using these to solve problems in maxima, minima, curvature, graphics, velocity, and acceleration
6)Competence in determining single, double, and triple integrals of various functions and using these to solve problems in area, volume work, fluid pressure and mass moments
7)Competence in solving problems related to vector fields
8)Competence in determining line integrals and using these to solve problems related to work and mass
9)Competence in applying Green’s and Stoke’s theorems
General Description of Each Lecture or Discussion
After studying the material presented in the text(s), lecture, laboratory, computer tutorials, and other resources, the student should be able to complete all behavioral/learning objectives listed
below with a minimum competency of 70%.
1)Find the component form of a vector.
2)Use the properties of vector operations.
3)Identify the direction cosines and angles for a vector.
4)Calculate the projection of one vector onto another.
5)Solve application problems using the dot and cross products.
6)Determine the standard, parametric, and symmetric equations for a line in space.
7)Determine the distance between a point and a line in space.
8)Identify and sketch quadric surfaces.
9)Convert equations and points between rectangular, cylindrical, and spherical coordinate forms.
10)Determine derivatives and integrals of vector-valued functions.
11)Solve application problems involving velocity and acceleration using vector-valued functions.
12)Solve application problems involving arc length and curvature using vector-valued functions.
13)Determine tangent and normal vectors to a surface in space.
14)Calculate limits and continuity for functions of several variables.
15)Determine partial derivative and differentials.
16)Use the chain rule for functions of several variables.
17)Calculate directional derivatives and gradients.
18)Determine tangent planes and normal lines.
19)Determine extrema and saddle point for functions of several variables.
20)Determine Lagrange multipliers.
21)Solve application problems involving area and volume using iterated integrals.
22)Solve application problems involving center of mass, moments of inertia, and surface area.
23)Solve application problems using triple integrals.
24)Determine triple integral using cylindrical and spherical coordinates.
25)Determine double integrals using a change of variables and the Jacobian.
26)Use the properties of vector fields.
27)Determine the curl of a vector field.
28)Determine line integrals.
29)Solve application problems for line integrals using independence of path.
30)Determine surface integrals.
31)Apply Green’s theorem and Stokes’ theorem to certain line and surface integrals.
Methods of Instruction/Course Format/Delivery
Methods employed will include lecture/demonstration, discussion, problem solving, analysis, and reading assignments. Homework will be assigned.
Faculty may choose from, but are not limited to, the following methods of instruction:
(1) Lecture
(2) Discussion
(3) Internet
(4) Video
(5) Television
(6) Demonstrations
(7) Field trips
(8) Collaboration
(9) Readings
Faculty may assign both in- and out-of-class activities to evaluate students' knowledge and abilities. Faculty may choose from, but are not limited to, the following methods:
•Book reviews
•Class preparedness and participation
•Collaborative learning projects
•Library assignments
•Research papers
•Scientific observations
•Student-teacher conferences
•Written assignments
Letter Grades for the Course will be assigned as follows:
A: 90 ≤ Average ≤ 100
B: 80 ≤ Average < 90
C: 70 ≤ Average < 80
D: 60 ≤ Average < 70
F: 0 ≤ Average < 60
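For illustration only (not part of the official syllabus), the scale above amounts to the following mapping from course average to letter grade:

```python
def letter_grade(average):
    """Map a 0-100 course average to a letter per the scale above."""
    if average >= 90: return "A"
    if average >= 80: return "B"
    if average >= 70: return "C"
    if average >= 60: return "D"
    return "F"

print(letter_grade(89.5))  # B
```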
Text, Required Readings, Materials, and Supplies
For current texts and materials, use the following link to access bookstore listings: http://www.panola.edu/collegestore.htm
word problem with derivatives
September 26th 2013, 05:40 PM #1
Sep 2013
word problem with derivatives
A 1000 L tank loses water so that, after t days, the remaining volume is
v(t) = 1000[1-(t/10)]^2 for 0<t<10.
How rapidly is the water being lost when the tank is half full?
I got the derivative fine. -200(1-(t/10))
but then I am having troubling solving.
if the volume is half then v(t) will equal 500, correct? because half of 1000 is 500.
so I solve v(t) = 500 for t?
then input this value of t into the v'(t)?
your help is appreciated!
Last edited by Jonroberts74; September 26th 2013 at 05:44 PM.
Re: word problem with derivatives
Yes that is exactly what you do
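To finish the computation sketched in the question (an illustrative check, taking the positive root since 0 < t < 10): solving 1000(1 - t/10)^2 = 500 gives 1 - t/10 = 1/sqrt(2), so t = 10(1 - 1/sqrt(2)) ≈ 2.93 days, and then v'(t) = -200/sqrt(2) ≈ -141.4 L/day.

```python
import math

t_half = 10 * (1 - 1 / math.sqrt(2))   # solve 1000*(1 - t/10)**2 = 500 for t
rate = -200 * (1 - t_half / 10)        # v'(t) at that moment
print(t_half, rate)                    # ~2.93 days, ~-141.4 L/day
```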
Quantum mechanics the way I see it
• Axiomatic approach of quantization
Quantum mechanics has been developed as a theory describing microscopic (atomic and subatomic) processes. In textbooks it is sometimes presented as being derivable from classical mechanics by a
substitution -referred to as quantization- like
p → −iħ∂/∂q, ħ = h/2π,
in which h = Planck's constant (compare). However, in such a "derivation" there always is a certain arbitrariness: there are many different ways to quantize classical quantities. A quantization
scheme can be justified only by the fact that its result is yielding a useful description of microscopic reality. For this reason it seems better to introduce quantum mechanics in an axiomatic
way, as is done in the following, letting experiment decide about its applicability.
• Measurement in the microscopic domain
Quantum mechanics (as well as relativity theory) distinguishes itself in a fundamental way from classical theories by hinging on the notion of measurement. Whereas in classical theories reference
to `measurement' is dispensable, this no longer is the case in quantum mechanics. Here the so-called measurement problem has been a long-standing source of discomfort. In particular the
suggestion of the `standard formalism of quantum mechanics' that `in measurement processes nature would behave differently from non-measurement processes (compare)' has been a bone of contention.
Far from concluding from this that measurement should be exorcized from the foundations of quantum mechanics, I will abide in the following with the spirit of Bohr's fundamental insight (however,
not with the specific way Bohr implemented that insight) that the only way of obtaining knowledge about microscopic reality is by performing measurements that are sensitive to the microscopic
information, and that are able to amplify this information to the macroscopic dimensions we are able to observe. Unless we find ways to compensate for the influence of the particular way we have
performed our measurements, the knowledge obtained on the microscopic object must be dependent on it. It seems to me that Bohr's conclusion is justified that `Einstein's ideal of having an
objective description of microscopic physical reality' is simply unattainable, and that the suggestion -both in quantum mechanics textbooks as well as in the scientific literature- that
`(standard) quantum mechanics can be seen as such an objective description' is misleading.
In the following this view will be illustrated by
□ i) duly taking into account the `influence of measurement on the experimentally obtained empirical data';
□ ii) learning from a thorough analysis of measurement procedures `which elements of the mathematical formalism are essential, and which should be dispensed with', thus demonstrating a
necessity to generalize the formalism;
□ iii) performing an analogous analysis of `interpretations of the mathematical formalism' by trying to evade any wishful thinking in estimating the `relation between the mathematical formalism
and physical reality', thus being able to prevent paradoxical conclusions based on too ``classically realist'' interpretations, without being obliged to resort to the vagueness and ambiguity
of instrumentalist ones.
• Physics and philosophy
Physicists can learn from philosophers, and vice versa. The following issues will be particularly important:
□ Ontology versus epistemology
The distinction will have to be observed between `what is' and `what we know'^1, dealt with by the philosophical disciplines of ontology and epistemology, respectively. Failure to take into
account this distinction has caused much confusion with respect to the meaning of quantum mechanics. In particular has it led in too uncritical a way to the general acceptance of a `realist
interpretation of the mathematical formalism of quantum mechanics', to be criticized in the following. An alternative interpretation, referred to as empiricist interpretation, is proposed,
being able to better take into account the distinction.
□ `Microscopic reality' versus the `phenomena'
Logical positivism/empiricism has had a large influence during the years quantum mechanics was being developed. It, in particular, has boosted an empiricist attitude, to the effect that `only
the phenomena' were deemed worthy of scientific attention. The acceptance of a realist interpretation can be seen as a physicist's revolt against this empiricism, to the effect that quantum
mechanics is thought to describe microscopic reality itself rather than `just the phenomena'.
However, it may very well be that by this revolt valuable philosophical insights are put aside rather too drastically. Even though logical positivism/empiricism is an obsolete philosophy by
now (compare), its influence has been appreciable in selecting the standard formalism of quantum mechanics as a means of "understanding" the experimental phenomena of the day. Although we no
longer require that quantum mechanics should describe `just the phenomena', is it not unreasonable to surmise that quantum mechanics `just describes the phenomena in a de facto way, simply
because the theory has been developed to do so in the first place'.
From the history of science we can learn that it is often just like that: when a new domain of physics is entered, the first thing to do is `to yield a description of the phenomena';
subsequently, questions about `causality' often induce speculations about the `reality behind the phenomena'. In order to describe that `reality behind the phenomena' we need to develop (sub)
theories (for instance, the classical theory of rigid bodies describes a billiard ball only as far as it behaves as a rigid body; we need an `atomic solid state sub-rigid body theory' to
include in the description also atomic vibrations).
In contrast to `logical positivism' an `empiricist interpretation of quantum mechanics' leaves open an analogous possibility of subquantum theories describing a `reality behind the phenomena
described by quantum mechanics'.
The plausibility of these ideas has been one reason (next to `indications into the same direction stemming from a generalization of the mathematical formalism of quantum mechanics') to
develop an empiricist interpretation of quantum mechanics as an alternative to the (generally accepted) realist one.
□ Empiricist versus rationalist influences
In the present account of quantum mechanics its relation to empiricism will play an important role. As is well known, it was empiricism that induced Heisenberg's conclusion that quantum
mechanics had to describe `just the phenomena' rather than `intrinsic properties of microscopic objects' (thus starting `matrix mechanics'). For Heisenberg measurement result a[m] did not
refer to an `(allegedly unobservable) property of the microscopic object possessed prior to the measurement', but to a `property of that object observed in the final stage of the measurement'
(e.g. a phenomenon like a flash on a scintillation screen, or a track in a Wilson chamber).
However, as a consequence of Heisenberg's anti-empiricist thesis that `theory decides what can be measured' (purporting to understand one of the key issues of quantum mechanics, viz.
complementarity) it is questionable which part empiricism has really played in Heisenberg's (and others's) contributions to the development of quantum mechanics. It seems that rationalist
influences like the availability of a useful mathematical theory (viz. the theory of matrices) have been equally influential. Probably the development of quantum mechanics was above all a
pragmatic rather than a purely empiricist affair.
□ Theory-ladenness of observation
One purpose of the present account is to demonstrate that empiricism does play an important role in the interpretation of quantum mechanics, but that this empiricism cannot be the logical
positivist/empiricist one. It is here that physicists can learn from philosophers, the latter having realized that it is impossible to completely base a theory on observation, thus opening
doors for anti-empiricist theses like Heisenberg's one. The insight that measurement in quantum mechanics needs to be described by quantum mechanics itself rather than by classical mechanics
(as advocated by the Copenhagen interpretation), can be seen as an application within physics of the philosophical discovery of theory-ladenness of observation.
On the other hand, philosophers can learn from physicists as well, since quantum mechanics offers a splendid opportunity to figure out in a concrete example how `theory-ladenness of
observation' can be implemented into a physical theory without succumbing to the danger of metaphysics involved in the concomitant circularity.
• Inadequacy of the standard formalism
`Simultaneous or joint nonideal measurements of incompatible standard observables' are examples of `measurements that are not described by the `standard formalism of quantum mechanics'. Such
joint measurements were studied during the early stages of the development of quantum mechanics in a rather informal way as `thought experiments' like e.g. the double-slit experiment^0 or
Heisenberg's γ-microscope^0.
At present such experiments are being carried out as real experiments. They turn out to defy the `standard formalism', requiring a generalized formalism for their description, the latter
formalism corroborating the early insights about `mutual disturbance in a joint measurement of incompatible standard observables' based on the `thought experiments'.
In the formal analysis based on the `generalized formalism of quantum mechanics' an important part is played by the Martens inequality, being an adequate mathematical expression of the `mutual
disturbance in a joint measurement of incompatible standard observables' predicted by the informal analyses of the `thought experiments'. This result clarifies a long-standing mystification, to
the effect that it is not the Heisenberg-Kennard-Robertson inequality (derived from the standard formalism) but the Martens inequality that should be seen as an expression of the Copenhagen
notion of `complementarity as mutual disturbance in a joint measurement of incompatible observables'.
Standard formalism of quantum mechanics
• Quantum mechanical standard observable (restricting ourselves to discrete non-degenerate spectra):
A `quantum mechanical standard observable' is mathematically represented^37 by a Hermitian operator A = ∑[m] a[m] E[m], with eigenvalues a[m], and E[m] the projection operator on its eigenvector
|a[m]>, satisfying E[m]^2 = E[m]. The set of projection operators {E[m]} is called the spectral resolution of A. The set {E[m]} is also referred to as an `orthogonal resolution of the identity
operator', satisfying
∑[m]E[m] = I, E[m]E[m′]= E[m] δ[mm′].
The eigenvalues a[m] are the possible measurement results of the standard observable, the latter being referred to by its mathematical representation A, or rather by the corresponding orthogonal
resolution of the identity {E[m]} (in the generalized formalism also non-orthogonal resolutions will be allowed).
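As a small numerical illustration (numpy sketch; the observable chosen here, the Pauli matrix σx with eigenvalues ±1, is just an example), one can build A = ∑[m] a[m] E[m] from orthonormal eigenvectors and verify the properties of the spectral resolution:

```python
import numpy as np

# Orthonormal eigenvectors of sigma_x, eigenvalues +1 and -1.
v_plus  = np.array([1, 1]) / np.sqrt(2)
v_minus = np.array([1, -1]) / np.sqrt(2)
E = [np.outer(v_plus, v_plus), np.outer(v_minus, v_minus)]  # projectors E_m
a = [1.0, -1.0]                                             # eigenvalues a_m

A = sum(am * Em for am, Em in zip(a, E))                    # A = sum_m a_m E_m
assert np.allclose(sum(E), np.eye(2))                       # sum_m E_m = I
assert np.allclose(E[0] @ E[1], 0) and np.allclose(E[0] @ E[0], E[0])
print(A)                                                    # the sigma_x matrix
```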
• Quantum mechanical state, the superposition principle:
A `quantum mechanical state' is mathematically represented by a `state vector |ψ> (pure state)' or `density operator (or statistical operator) ρ (mixture)';
state vectors and density operators are normalized according to <ψ|ψ> = 1 and Tr ρ = 1, respectively,
Tr ρ (the trace of ρ) being defined according to
Tr ρ = ∑[m] <a[m]|ρ|a[m]>.
The vector character of quantum mechanical pure states is in agreement with the superposition principle, asserting the additivity of state vectors to the effect that
c[1]|ψ[1]> + c[2]|ψ[2]>, if suitably normalized, is a possible state vector if |ψ[1]> and |ψ[2]> are.
An important application of the `superposition principle' is the possibility of representing an arbitrary state vector |ψ> as a superposition of eigenvectors of an observable A according to
|ψ> = ∑[m]c[m]|a[m]>, ∑[m] |c[m]|^2 = 1.
State vectors are elements of a linear vector space (more particularly a Hilbert space^0).
The `density operator corresponding to the state vector |ψ>' is given by
|ψ><ψ| = ∑[m]|c[m]|^2|a[m]><a[m]| + ∑[m≠m′] c[m]c[m′]^* |a[m]><a[m′]|,
the latter terms being referred to as `cross terms' or `interference terms'.
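The cross terms can be made concrete with a short numerical example (illustrative numpy sketch): the density matrix of an equal superposition carries off-diagonal interference terms that the corresponding mixture ∑[m] |c[m]|^2 |a[m]><a[m]| lacks.

```python
import numpy as np

a0, a1 = np.array([1, 0]), np.array([0, 1])       # basis |a_0>, |a_1>
psi = (a0 + a1) / np.sqrt(2)                      # superposition, c_m = 1/sqrt(2)

rho_pure  = np.outer(psi, psi.conj())             # |psi><psi|: has cross terms
rho_mixed = 0.5 * np.outer(a0, a0) + 0.5 * np.outer(a1, a1)

print(rho_pure)    # [[0.5 0.5], [0.5 0.5]]
print(rho_mixed)   # [[0.5 0. ], [0.  0.5]]
```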
□ Entangled states
The existence of `entangled states' is a consequence of the `superposition principle' as applied to a system of two (or more) particles (or degrees of freedom)^88. Let |ψ[1a]> and |ψ[1b]> be
states of particle 1, and let |ψ[2a]> and |ψ[2b]> be states of a second particle. Then the superposition principle allows to form the state |ψ[12]> = c[a]|ψ[1a]>|ψ[2a]> + c[b]|ψ[1b]>|ψ[2b]>
(c[a] and c[b] complex numbers) from the product states |ψ[1a]>|ψ[2a]> and |ψ[1b]>|ψ[2b]>. Such a superposition is called an `entangled state'. A two-particle state |ψ> is entangled if it can
not be represented as a product of a state of particle 1 and a state of particle 2. Hence, entangled states are correlated states.
It should be noted that, although entangled states are correlated states, not all correlated states are entangled states. Thus, a state described by the density operator
ρ = p[a]ρ[1a] ρ[2a] + p[b]ρ[1b] ρ[2b], ρ[ix] = |ψ[ix]><ψ[ix]|, i=1,2, x=a,b, p[x] probabilities,
is correlated but not entangled, as in this state the correlation is `classical correlation'. The notion of `entanglement' is restricted to `quantum correlation', stemming from typically
quantum mechanical properties of the `cross terms' (compare) in the density operator |ψ[12]><ψ[12]| corresponding to a pure state |ψ[12]>.
Entangled states are often seen as manifestations of a certain inseparability^87 of the particles that are involved, because in an entangled state it is impossible to attribute a well-defined
state to each of the particles separately.
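A standard numerical diagnostic of this inseparability (illustrative sketch; for pure two-particle states a mixed reduced state, Tr ρ1^2 < 1, signals entanglement):

```python
import numpy as np

def reduced_1(psi12):
    """Reduced density operator of particle 1 from a two-qubit pure state."""
    c = psi12.reshape(2, 2)        # coefficients c_ij of |i>_1 |j>_2
    return c @ c.conj().T          # rho_1 = Tr_2 |psi><psi|

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)                       # (|00>+|11>)/sqrt2
prod = np.kron(np.array([1, 0]), np.array([1, 1]) / np.sqrt(2))  # a product state

for psi in (bell, prod):
    rho1 = reduced_1(psi)
    print(np.trace(rho1 @ rho1).real)   # purity 0.5 (entangled) vs 1.0 (product)
```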
• Probability distribution of measurement results of a measurement of standard observable A performed in state |ψ> or ρ:
p[m] = |<a[m]|ψ>|^2 or p[m] = Tr ρE[m].
This is often referred to as the Born rule.
The quantity <a[m]|ψ> is referred to as a probability amplitude.
• Expectation value of measurement results of a measurement of standard observable A performed in state |ψ> or ρ:
<A> = ∑[m]p[m] a[m] = <ψ|A|ψ> or <A> = Tr ρA.
• Time evolution:
□ Pure states: Schrödinger equation:
iħd|ψ>/dt = H |ψ>,
H the Hamilton operator. As is usual, in the following I put ħ = 1.
If H is not explicitly time-dependent, then the solution of the Schrödinger equation can be written according to
|ψ(t)> = e^−iHt |ψ(0)>.
The linearity of the Schrödinger equation warrants general validity of the superposition principle.
□ Mixtures: Liouville-von Neumann equation:
idρ/dt = [H,ρ][−], [H,ρ][−] = Hρ−ρH,
having for time-independent H the solution
ρ(t) = e^−iHtρ(0) e^iHt.
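Both solutions are easy to check numerically. A sketch (assuming scipy is available for the matrix exponential; the Hamiltonian is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.3], [0.3, -1.0]])   # some time-independent Hamiltonian
psi0 = np.array([1.0, 0.0])
rho0 = np.outer(psi0, psi0)

t = 0.7
U = expm(-1j * H * t)                     # e^{-iHt} (with hbar = 1)
psi_t = U @ psi0                          # Schroedinger-equation solution
rho_t = U @ rho0 @ U.conj().T             # Liouville-von Neumann solution

assert np.allclose(rho_t, np.outer(psi_t, psi_t.conj()))
print(np.vdot(psi_t, psi_t).real)         # norm preserved: 1.0
```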
• Remarks on the standard formalism:
This is essentially all there is to the standard formalism of quantum mechanics. Of course, for practical applications the formalism has to be implemented in ways appropriate for that particular
application. I restrict myself here to a small number of such applications.
□ Schrödinger and Heisenberg pictures
The time dependence of quantum mechanical measurement results can be expressed in two equivalent ways, known as the `Schrödinger picture' and the `Heisenberg picture', respectively.
☆ Schrödinger picture:
<A>(t) = <ψ(t)|A|ψ(t)> or <A>(t)= Tr ρ(t)A,
|ψ(t)> and ρ(t) solutions of the Schrödinger equation and the Liouville-von Neumann equation, respectively.
☆ Heisenberg picture:
<A>(t)= <ψ|A(t)|ψ>, |ψ> = |ψ(0)>, or <A>(t)= Tr ρA(t), ρ = ρ(0),
observable A(t) being the solution
A(t) = e^iHtA(0) e^−iHt, A(0) = A,
of the equation
idA(t)/dt = −[H,A(t)][−],
in which [H,A(t)][−] = HA(t)−A(t)H. The sign difference with the Liouville-von Neumann equation may serve as a warning that the density operator is not an ordinary quantum mechanical
observable, even though it is Hermitian: its time development is different.
□ Von Neumann's projection postulate
Often also von Neumann's projection (or reduction) postulate is taken as part of the standard formalism of quantum mechanics, to the effect that during the measurement the state changes in a
way assumed not to be described by a Schrödinger equation. Von Neumann's projection postulate may be encountered in a strong or in a weak form:
☆ Strong von Neumann projection:
It is assumed that during a `measurement of standard observable A yielding measurement result a[m]' the state vector |ψ> undergoes a discontinuous transition
|ψ> = ∑[m]c[m]|a[m]> → |a[m]>.
☆ Weak von Neumann projection:
In `weak von Neumann projection' all possible measurement results a[m] (rather than just a single one) are taken into account in the transition from the initial to the final state. Thus
|ψ>= ∑[m]c[m]|a[m]> → ρ = ∑[m]|c[m]|^2 |a[m]><a[m]|,
the final state now being described by a density operator^24.
☆ Objections to the projection postulate
Since it assumes `measurement processes' to differ in an essential way from `non-measurement processes', `strong von Neumann projection' is often seen as a strange element within quantum mechanics.
Although I do not consider this a convincing argument because it neglects the very special requirements to be met by the `interaction between object and measuring instrument' in order
that a process may function as a `measurement', I agree with its conclusion: by invoking the `human observer as an active principle' (compare) `strong von Neumann projection' has exerted
a pretty harmful influence on our understanding of the meaning of quantum mechanics^55.
I consider von Neumann's projection postulate (either strong or weak) to be neither a necessary (i) nor a useful (ii) property of a quantum mechanical measurement:
i) It is not a necessary property, because it is a consequence of a certain `interpretation of the quantum mechanical state vector' (viz. a realist version either of an
individual-particle interpretation (in case of strong projection), or of an ensemble interpretation (in case of weak projection)) that is dispensable.
ii) It is not useful since by most practical experimental measurement procedures it is not even satisfied in the weak sense. Although to a certain extent `weak projection' can be
justified by means of an explicit account of the interaction of object and measuring instrument (compare), it turns out that `weak von Neumann projection' is too restrictive to encompass
even the most common measurement procedures.
Thus, the Stern-Gerlach experiment (often presented as a paradigm of measurements satisfying von Neumann projection) does not exactly satisfy the `weak projection postulate', `deviation
from projection' even being a crucial precondition of its functioning as a `measurement of spin' (e.g. Publ. 37).
Another example is the ideal photon counter which detects photons by absorbing them. Hence, ideally the final state of the electromagnetic field is the vacuum state |0> rather than the
`eigenvector |n> of the photon number observable corresponding to the number n of detected photons'. Since photons that have survived the detection process are not registered at all, it
follows that also the functioning of a photon counter depends crucially on not satisfying von Neumann's projection postulate: it is operating better to the extent it is violating the
projection rule.
☆ Remarks on the projection postulate:
○ Its experimental origin
It must be realized that `von Neumann's projection postulate' stems from the early days of quantum mechanics in which experiments were mostly scattering experiments (compare),
measuring differential scattering cross sections^0 by determining the relative numbers of particles scattered into solid angles supported by a scattering sample.
Von Neumann's projection postulate was inspired in particular by the Compton-Simon experiment ^0, in which conservation of energy and momentum is tested in a collision of an electron
and a photon. Using the conservation laws (found in the experiment to hold), von Neumann was able to infer photon momentum from a measurement of electron momentum, thus concluding
that, as a consequence of the measurement of electron momentum, the photon wave function must have collapsed to `the eigenstate of its momentum observable agreeing with the value
found experimentally for the electron'.
○ Nondisturbing character of the Compton-Simon experiment
It is important to note here that in von Neumann's application of the Compton-Simon experiment the disturbing influence of the measurement interaction is evaded because the
measurement is performed on a particle (the electron) that is different from the object (the photon) the wave function of which is assumed to collapse.
Here the similarity with the EPR problem should be noticed, in which Einstein interpreted a `measurement of an observable of particle 1' as a `measurement of a (strictly correlated)
observable of particle 2', thus assuming the measurement procedure not in any way to interact with the measured object. Although application of `strong von Neumann projection' to this
particular experiment is justified (cf. Publ. 57), is it impossible to generalize this feature to a property of an arbitrary quantum measurement.
By supposing his `projection postulate to be valid for any quantum mechanical measurement' von Neumann committed an `unjustified generalization': most measurements are not of the
Compton-Simon/EPR type in which the influence of the interaction of the measuring instrument with the microscopic object can be ignored. In general, by the measuring instrument a
disturbing influence is executed, causing the measurement to be of the second kind rather than satisfying von Neumann's prescription. This explains the frequent `deviation from von
Neumann projection' found in actually performed measurements.
Probably von Neumann has been trapped into his unjustified generalization by the classical paradigm, suggesting that if the outcome of a measurement is a[m] then after the measurement
the object must have that same value with certainty (and consequently must be described by the corresponding eigenfunction). The failure of von Neumann's projection postulate may be
seen as a first indication of the inadequacy of the realist interpretation of quantum mechanics, to be criticized here.
○ Von Neumann projection is a `preparation principle' rather than a `measurement principle'
Von Neumann projection is a `preparation principle' rather than a `measurement principle': it is a procedure describing conditional preparation in measurements of the first kind^26.
`First kind measurements', however, are virtually nonexistent due to the influence exerted on the microscopic object by the measurement. Only in very special cases like the
Compton-Simon experiment and the EPR experiment, in which care is taken that the preparation is not influenced by the measurement, may von Neumann projection be applicable (compare).
Consistency problems of `quantum mechanical measurement theory' arising because of von Neumann projection, could better be dealt with by `abandoning the postulate as a measurement
principle', rather than by ignoring the existence of well-tried measuring instruments like photon counters (compare).
□ `Von Neumann projection' versus `faithful measurement'
☆ `Von Neumann's projection postulate' should be distinguished from a principle of `faithful measurement', to the effect that a faithful measurement of an observable would reveal the `value
the measured observable had immediately preceding the measurement'^2 (rather than `preparing that value after the measurement', as required by von Neumann projection).
☆ Remarks on `Von Neumann projection versus faithful measurement':
○ Contrary to `von Neumann projection', `faithful measurement' is a `measurement principle', inspired by the idea, implicit in the classical paradigm, that a measurement should reveal
`reality as it objectively was prior to measurement'.
The concept of `faithful measurement' is in agreement with the possessed values principle (implying that quantum mechanical measurement result a[m] is found because it can be
attributed to the microscopic object as an `objective property, possessed prior to and independent of measurement'). Hence, the impossibility to implement the possessed values
principle into quantum mechanics makes `faithful measurement' impossible within that theory.
○ However, this need not be the end of `faithful measurement'. If the possibility of subquantum theories is acknowledged, the `faithful measurement principle' may refer to `properties
that are not described by quantum mechanics' (e.g. hidden variables or subquantum properties). In the following this will be considered a possibility, thus attributing to the
`faithful measurement principle' an applicability as a measurement principle that is not reflected by quantum mechanics.
○ `Von Neumann projection' and `faithful measurement' are sometimes combined into a view of quantum measurement as a filter or sieve which selects the microscopic object into a cell
corresponding to measurement result a[m] without changing that value. This, actually, was Einstein's position in his controversy with the Copenhagen interpretation over the
`completeness of quantum mechanics'. It is interesting to notice that within the discussion of this controversy there is an interplay between `Einsteinian application of faithful
measurement in the initial stage of the experiment' (which is in disagreement with the `Copenhagen idea that before the measurement an observable does not have a well-defined value')
and `Copenhagen application of von Neumann projection in the final stage' (preparing the final state of a distant object in a nonlocal way not appreciated by Einstein). By Einstein
this interplay is used to propose a trade-off between `completeness' and `locality'.
• Relative frequencies and probability distribution
Quantum mechanical probability p[m] is connected with experiment by comparing it with the relative frequency N[m]/N that value a[m] is obtained when a measurement of observable A is performed a
large number (N) of times. Thus,
p[m] = lim[N→∞]N[m]/N.
We have
p[m] ≥ 0, ∑[m] p[m] = 1.
□ Remarks on `relative frequencies and probability distribution':
☆ In actual practice relative frequencies N[m]/N are always determined for finite N. For this reason the limit N →∞ is not to be taken in the mathematical sense because this limit is not
attainable in actual practice. As a practical criterion it is sufficient to require that the number N be large enough to allow for `sufficient stability of the relative frequencies N[m]/N
if the number N is increased' so as to allow a determination of the limit with sufficient accuracy.
☆ It is far from self-evident that the limit exists in the above sense. Its existence requires that the experiment be repeatable so as `to make individual preparations identical in a
certain sense' (thus, consecutive position measurements of an electron freely traveling into outer space will not yield such a limit). A physical condition, comparable to the `ergodicity
condition^0 of statistical thermodynamics', will probably be necessary. In general it is taken for granted that the physical conditions for the existence of the limit are satisfied, i.e.,
that present-day measurements on microscopic systems are within the domain of application of quantum mechanics (compare).
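The practical criterion can be simulated (illustrative sketch; the probabilities are arbitrary): relative frequencies N[m]/N drawn from fixed Born probabilities stabilise as N grows.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.1, 0.3, 0.6])                 # Born probabilities p_m
for N in (100, 10_000, 1_000_000):
    outcomes = rng.choice(3, size=N, p=p)     # N independent measurements
    freqs = np.bincount(outcomes, minlength=3) / N
    print(N, freqs)                           # N_m / N approaches p_m
```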
• (In)compatibility of observables
The difference between classical and quantum mechanics is embodied by the possibility that, contrary to classical quantities, two quantum mechanical standard observables A and B may be incompatible, i.e. the Hermitian operators do not commute, their commutator [A, B][−] ≡ AB − BA satisfying [A, B][−] ≠ O.
Position operator Q and momentum operator P constitute an example of a pair of incompatible observables, satisfying [Q, P][−] = iI.
Observables satisfying a similar maximal amount of incompatibility (in the sense that their eigenfunctions are maximally different) are sometimes called canonically conjugate or
`complementary' observables (however, see footnote 48).
For compatible standard observables (for which [A, B][−] = O) the definition of a probability distribution can be generalized to a `joint probability distribution' according to
p[mn] = |<a[mn]|ψ>|^2 or p[mn] = Tr ρE[m]F[n],
where |a[mn]> are the joint eigenvectors of A and B (the observables having spectral resolutions {E[m] = |a[m]><a[m]|} and {F[n] = |b[n]><b[n]|}, respectively). Compatible observables are
`jointly or simultaneously measurable', the `joint measurement' yielding p[mn] = Tr ρE[m]F[n] as `joint probability distribution' expressing the `correlation of the measurement results a[m]
and b[n] obtained in the joint measurement of A and B'. The observable AB is a correlation observable.
Joint measurements of compatible observables are `mutually nondisturbing' in the sense that the marginal distributions ∑[n]p[mn] and ∑[m]p[mn] yield the results Tr ρE[m] and
Tr ρF[n], respectively, obtained if A or B is measured separately.
According to the principle of local commutativity `observables that are measured in causally disjoint regions of space-time' are mutually compatible, and, hence, are `mutually nondisturbing'.
Note that, as a consequence, only observables measured in the same region can be incompatible.
According to Gleason's theorem^0 within the domain of quantum mechanics a `probability distribution' is a linear functional of the density operator. This implies that within the domain of
application of the `standard formalism of quantum mechanics' a `simultaneous or joint measurement of incompatible observables' is impossible. Although nonlinear functionals exist that could
serve as joint probability distributions of incompatible standard observables, these are generally thought to be unacceptable^51.
Note that within the generalized formalism of quantum mechanics linear functionals of the density operator do exist, and have a physical meaning, even if incompatible observables are involved.
□ The Heisenberg-Kennard-Robertson inequality
The Heisenberg-Kennard-Robertson inequality (the so-called uncertainty relation) of standard observables A and B is given by
ΔAΔB ≥ ½|<[A,B][−]>|,
in which ΔA and ΔB are standard deviations of measurement results, defined according to
(ΔA)^2 = <(A − <A>)^2> = ∑[m] p[m]a[m]^2 − (∑[m] p[m]a[m])^2
(and analogously for B).
□ Entropic uncertainty relation
A useful alternative to the Heisenberg-Kennard-Robertson inequality is the following entropic uncertainty relation
H[{E[m]}](ρ) + H[{F[n]}](ρ) ≥ −ln(max[m,n]|<a[m]|b[n]>|^2),
in which H[{E[m]}](ρ) is a `von Neumann entropy', defined by
H[{E[m]}](ρ) = −∑[m] p[m] ln p[m], p[m] = Tr E[m]ρ,
{E[m]} the spectral resolution of standard observable A (and analogously for B). An advantage of the entropic uncertainty relation is that it is `independent of the (eigen)values of the
observables', and therefore can be used in an empiricist interpretation (compare).
• Constants of the motion
□ Assuming the Hamiltonian H to be time-independent, then an observable A commuting with H (thus, [A, H][−] = O) is a `constant of the motion' or `conserved quantity', satisfying
A(t) = A.
A state vector, written in the `representation of the constant of the motion' according to
|ψ(t)> = ∑[m] c[m](t)|a[m]>, c[m](t) = <a[m]|ψ(t)>,
|c[m](t)|^2 = |c[m](0)|^2.
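An illustrative numerical check of this statistical conservation law (assuming scipy; taking A diagonal in the eigenbasis of H so that [A, H][−] = O):

```python
import numpy as np
from scipy.linalg import expm

H = np.diag([0.5, 1.5, 2.5])                 # H and A share eigenvectors,
A = np.diag([1.0, 2.0, 3.0])                 # so [A, H] = 0
assert np.allclose(A @ H - H @ A, 0)

psi0 = np.array([0.5, 0.5, np.sqrt(0.5)])    # c_m(0) in the eigenbasis of A
psi_t = expm(-1j * H * 1.7) @ psi0
print(np.abs(psi0) ** 2)                     # |c_m(0)|^2
print(np.abs(psi_t) ** 2)                    # |c_m(t)|^2 -- identical
```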
□ Remarks on `constants of the motion':
☆ Note that, unless the state |ψ(t)> is an eigenstate of a constant of the motion A, conservation of that observable is implemented in the quantum mechanical formalism in a statistical
sense only: the formalism only predicts that a measurement of A at a later time will yield the same probability |c[m]|^2; it does not predict that the same value a[m] will be found at
times 0 and t.
☆ It might be thought that `conservation of a constant of the motion in a deterministic sense' could be proved by considering consecutive measurements of observable A. Indeed, since A(t) =
A, the observables A(t) and A are compatible, and the theory of joint measurement of compatible observables might seem to be applicable, yielding the joint probabilities
p[mn] = <ψ(0)|E[m]E[n]|ψ(0)> = |c[m](0)|^2 δ[mn],
δ[mn] the Kronecker delta. Hence, a constant of the motion seemingly is not conserved just in the statistical sense expressed by |c[m](t)|^2 = |c[m](0)|^2, but in the deterministic sense
in which consecutive measurements of the constant of the motion, performed on an individual object, yield identical results a[m].
However, this reasoning ignores the failure of the `possessed values principle', telling us that a[m] cannot be an objective (i.e. measurement-independent) property of the microscopic
object. Indeed, the first measurement may disturb the microscopic object as a consequence of the interaction with its measuring instrument. In general the commutator [A, H[int]][−] of A
and the `Hamiltonian H[int] of the interaction between object and measuring instrument' does not vanish (for instance, for the Stern-Gerlach measurement nonvanishing of this commutator is
even necessary for the experimental arrangement to do its job properly, cf. Publ. 37). In general, if the interaction is properly taken into account, observable A(t) will not even be
compatible with A, thus rendering the theory of joint measurement of compatible standard observables inapplicable.
☆ Nevertheless, the meaning of quantum probabilities as `relative frequencies of measurement results obtained whenever a measurement is performed' seems to indicate that during free
evolution (i.e. when there is no interaction with a measuring instrument) a constant of the motion might satisfy a deterministic conservation law, leaving the `individual measurement
result' independent of the time the measurement is carried out. If not, it would be inexplicable how it is possible that the relative `probabilities p[m] found if a measurement were carried out' either at time 0 or at time t are the same even though the measurement results of individual particles might have changed between 0 and t. This circumstance may invoke different reactions:
i) declare the problem a metaphysical one because it cannot be experimentally decided (since an experimental test would need carrying out measurements at both 0 and t, and, hence, would
be confronted with a `disturbance of the object by the first measurement');
ii) try to "solve" the problem by assuming von Neumann projection to take place during the first measurement, the applicability of this postulate, however, being seriously limited;
iii) consider the theoretical result p[mn] = |c[m](0)|^2 δ[mn], although not experimentally verifiable, as an incentive to try to understand what quantum mechanics tells us about reality,
taking seriously not only observational data but also general ideas like conservation laws, which have always been fruitful principles in the development of physical science.
☆ Reaction iii) is consistent with the `empiricist interpretation of quantum mechanics' I prefer. It acknowledges the possibility that quantum mechanics does not describe `reality behind
the phenomena', explanation of conservation of individual values of a constant of the motion during free evolution being relegated to some subquantum theory (compare).
By von Neumann's projection postulate, assumed in reaction ii), equality of `individual measurement results a[m] of two consecutive measurements of a constant of the motion' is warranted
without having to rely on subquantum theory. This may explain the general acceptance of von Neumann's postulate as a description of the influence of a measurement on the microscopic object.
It is questionable, however, whether this proves a deterministic behaviour of constants of the motion during free evolution. If it did, this would imply that von Neumann's projection
postulate is a necessary attribute of quantum mechanical measurement, which, however, it is not (as is exemplified, for instance, by its inapplicability to the Stern-Gerlach measurement).
Notwithstanding "proofs to the contrary", reliance on subquantum theory seems to yield better prospects (compare).
• Comparison with classical (statistical) mechanics
It is important to note the difference between `quantum mechanics' and `classical mechanics' as regards the relation of state and observables. Whereas time evolution of the classical state is
defined in terms of the time evolution of the classical quantities (observables), within quantum mechanics state and observables are defined independently, and time evolution can be expressed in terms of either of these quantities separately. This is reminiscent of `classical statistical mechanics', in which time dependence of the statistical state can analogously be expressed in two
different ways.
As a consequence, a closer analogy between `quantum mechanics' and `classical mechanics' is obtained if a comparison is made between `quantum mechanics' and `classical statistical mechanics'
rather than between `quantum mechanics' and `classical mechanics proper' (compare). This can be seen particularly clearly by comparing the `(quantum mechanical) Liouville-von Neumann equation'
with the (classical) Liouville equation^0, the quantum mechanical commutator (up to a constant) being the natural counterpart of the classical Poisson bracket.
This remark is not unimportant because it makes obsolete the idea of `strong von Neumann projection'. It is an indication that an individual-particle interpretation of the quantum mechanical
state vector may be problematic (compare).
Generalized formalism of quantum mechanics
• It is an important recent insight that, if the description is restricted to the microscopic object, many quantum mechanical experiments that have actually been performed are outside the domain of
applicability of the standard formalism (compare). In particular, the concept of a quantum mechanical observable must be generalized.
• Generalized quantum mechanical observable (restricting ourselves to discrete spectra):
Resolution of the identity operator I, consisting of a set of positive (better: non-negative) operators M[m], satisfying
∑[m] M[m] = I, M[m] ≥ O.
Contrary to standard observables, for a `generalized observable' the resolution of the identity operator need not be orthogonal. The operators M[m] generate a so-called positive operator-valued
measure (POVM). A `generalized observable' will be denoted^69 by {M[m]}.
• Probability distribution of measurement results of generalized observable {M[m]} performed in state |ψ> or ρ:
p[m] = <ψ|M[m]|ψ> or p[m] = Tr ρM[m].
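As an illustration of a POVM that is not a PVM, the following sketch uses an assumed standard example (the qubit `trine' POVM, three non-orthogonal operators at mutual angles of 120°; it is not taken from the text above):

```python
import numpy as np

angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
phi = [np.array([np.cos(t / 2), np.sin(t / 2)]) for t in angles]
M = [(2 / 3) * np.outer(v, v) for v in phi]       # M[m] >= O

print(np.allclose(sum(M), np.eye(2)))             # True: sum_m M[m] = I
print([np.allclose(Mm @ Mm, Mm) for Mm in M])     # all False: the M[m] are not projections

psi = np.array([1.0, 0.0])                        # some prepared state
print([float(psi @ Mm @ psi) for Mm in M])        # p[m] = <psi|M[m]|psi>, summing to 1
```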
• Remarks on the `generalized formalism of quantum mechanics':
□ The `generalized formalism of quantum mechanics' encompasses the standard formalism. If all operators M[m] of a POVM are `mutually commuting projection operators', then the POVM reduces to a
projection-valued measure (PVM), and the generalized observable reduces to a standard one (or, in case of degeneracy, to a set of standard ones, viz. standard observables
f(A) having compatible spectral resolutions).
□ Note that for characterizing a `generalized observable' it is neither necessary nor useful to introduce an operator ∑[m] a[m]M[m], analogous to the operator A = ∑[m] a[m] E[m] in case of a
standard observable. Actually, both for standard and generalized observables probabilities are determined by m rather than by a[m], the latter's magnitude being irrelevant. This irrelevance
will play an important role in choosing an interpretation of the mathematical formalism of quantum mechanics.
□ On the basis of a `quantum mechanical description of the interaction of microscopic object and measuring instrument' it can be demonstrated that the generalized formalism is a natural
extension of the standard one. It has many applications (including the Stern-Gerlach experiment, which, on closer scrutiny, is a `paradigm of the standard formalism only in an approximate
sense', cf. Publ. 37).
□ The generalized formalism will be particularly useful for describing measurements in which information is jointly obtained on incompatible standard observables (deemed impossible by the
standard formalism, but nevertheless playing an important role in foundational discussions of quantum mechanics).
Quantum mechanics and `reality'
• We should distinguish two different kinds of realism, to be referred to as ontological realism and epistemological realism, respectively. Unfortunately, often these levels of discourse are not
sufficiently distinguished, the `(epistemological) description' being considered to be a `faithful representation of (ontological) reality'.
□ Ontological realism
Most physicists are ontological realists in the sense that they believe that physics deals with an outside world. In the `realism versus idealism^0' dichotomy it is `ontological realism' that
is opposed to an `idealism which positions reality within some inside world'. Whereas an idealist may believe that the moon does not exist either `if he does not look' (subjectivistic
idealism) or `if nobody looks' (objectivistic idealism), an ontological realist believes that the existence of the moon is independent of anybody's looking. Plato's idealism^0 is an
example of `objectivistic idealism' in which `true reality' is thought to constitute an `abstract world of ideas'.
By anti-realism^0 I will understand^11 the idea that `only the phenomena have a real ontological existence', and that, hence, there is no `reality behind the phenomena'. For methodological
reasons most physicists are not anti-realists (if they were, they would probably still believe that billiard balls are rigid spheres instead of being composed of atoms). Atoms, and even
quarks, are believed to be as real as is the moon, even though there may exist disagreement about specific properties these objects are thought to possess.
□ Epistemological realism
`Epistemological realism' refers to a certain way of attributing physical meaning to the terms of a physical theory. Hence, `epistemological realism' is about interpretation, that is, about
`what one thinks the theory is describing'. An interpretation of a physical theory in which the (theoretical) terms are taken in an `epistemologically realist' sense, is referred to as a
realist interpretation.
At the epistemological level a `realist interpretation' may be involved in different dichotomies. First, the `realist interpretation' may be opposed to the `instrumentalist one'. Another
dichotomy is that between realist and empiricist interpretations, playing a prominent role in my account of quantum mechanics. This latter dichotomy is expressing the possibility that the
mathematical formalism of quantum mechanics may be thought either to refer to the microscopic objects themselves (realist interpretation), or to macroscopic phenomena that are directly
observed (empiricist interpretation).
□ Remarks on `Quantum mechanics and reality':
☆ It is important to distinguish ontological and epistemological levels of discourse. Physicists usually combine `ontological realism' with the `realist interpretation of epistemological
realism'. That's why the distinction is often not noticed. However, it is very well possible to combine `ontological realism' with an empiricist interpretation (in my view it is even
preferable to do so). There is a difference between quantum mechanics (the theory) and the reality it is describing (electrons etc.). Electrons are not wave packets flying around in
space: electrons are physical objects, wave packets are theoretical notions. Probably the only way to have a wave packet flying in space is by throwing one's quantum mechanics textbook.
If a quantum mechanical observable were a Hermitian operator, one could observe it by looking into one's quantum mechanics textbook. Unfortunately, these trivial remarks are not
superfluous because whole generations of physicists have been trained to think about electrons as `wave packets flying around in space', and to look upon values of Hermitian operators as
`properties of microscopic objects'.
☆ One should be aware of the possibility that quantum mechanics is not the `theory of everything', yielding a complete description of `all there is', but that it may only be yielding a
description of certain aspects of microscopic reality, much in the same way as classical mechanics just describes certain aspects of macroscopic reality (an electron is not a wave packet,
just like the planet Mars is not the point particle nor the rigid body figuring in textbooks of classical mechanics).
☆ The domain of application of quantum mechanics is microscopic reality. It is an open question whether this domain can be extended either to the macroscopic world or to the far
submicroscopic world at the Planck length. Of course, it is recommendable to explore the applicability of quantum mechanics by applying it to experiments on a wider set of objects than
just the microscopic ones (for instance, mesoscopic objects and submicroscopic elementary particles), but there is no warrant that this will keep working all the way up to the macroscopic
and/or down to the submicroscopic domains. The idea that quantum mechanics should also be applicable to the macroscopic and submicroscopic worlds is a consequence of a scientific
methodology in which theories are supposed to be either true (i.e. universally applicable) or false. However, it is more and more realized^0 that this is not how science works in actual
practice: in general a physical theory has a restricted domain of application in which experimental data are described by it in a more or less exact way (in general, less exact as the
boundaries of the domain are approached). It is not evident why this should be different for quantum mechanics.
Quantum mechanics and `observation'
• No human observer has ever seen an electron or even an atom. Everything we know about such objects stems from `indirect observation by means of measuring instruments obtaining their information
by interacting with the microscopic objects, and amplifying this information to the macroscopic level of the human observer'.
• The human observer is as dispensable in quantum mechanics as he (short for `he or she') is in classical mechanics. Classical mechanics describes macroscopic objects as these are seen by the human
observer. For this reason within the macroscopic domain in general there is no obvious reason to consider a possible difference between `what is seen' and `what there is': `observation/
measurement' is thought to be nondisturbing^81 here.
Within microphysics we should draw a distinction between `observation' and `measurement'. Whereas `measurement' can no longer be considered as nondisturbing, there is no difference between classical and quantum mechanics with respect to `human observation'. Within the domain of quantum physics the human observer sees only the macroscopic parts of his measuring instruments, his influence preferably being negligible during the measurement. In present-day physical practice within the microscopic domain `human observation' is largely restricted to the tables and graphs
that have been printed by the scientist's printer on the basis of data obtained from a measuring instrument by the scientist's computer, the measurement results having been sent to the computer
without any human interference (compare).
• An influence like the reduction (collapse) of the wave packet, allegedly exerted by a human observer on a microscopic object by means of mere observation, would be equally miraculous as killing a
fly by just looking at one's fly swatter. In the past this problematic aspect of the so-called "measurement problem" has given rise to ample discussion, in which the problem is often rightly
identified as a `pseudo problem' (as it is not a problem of quantum mechanics itself but a spin-off of an unnecessarily restrictive interpretation of that theory, viz. an individual-particle interpretation).
• In order not to be seduced into any kind of psychophysicalism with respect to observation in quantum mechanics it is recommendable to avoid any reference to the human observer. The "measurement
problem" should be distinguished from the quantum mechanical problem of measurement, in which the physical interaction is studied between the microscopic object and the measuring instrument used
to get to know that object's properties. Contrary to the "measurement problem" this is a real problem, yielding a better understanding of the meaning of quantum mechanics.
• It is certainly true that the observer has played an important role in early versions of the Copenhagen interpretation. In particular Wigner has been a long-lived proponent of this idea. It seems
to me, however, that nowadays in most of the literature on the foundations of quantum mechanics, even if professing allegiance to the Copenhagen interpretation, the observer has been replaced by
a measuring instrument, abandoning the term `observation' in favour of `measurement'.
The two problems referred to above are not always distinguished. Although, on the one hand, a `quantum mechanical account of measurement' is rather "un-Copenhagenlike", on the other hand a vivid interest in the `fundamental role of measurement' is so "Copenhagenlike" that already Heisenberg and von Neumann considered `quantum mechanical accounts of measurement'. Unfortunately, they ignored
the inconsistency arising from the distinction between `strong' and `weak von Neumann projection'.
Quantum mechanics and `measurement'
• It is important to note that in textbooks of quantum mechanics the notion of `measurement' is dealt with in a very insufficient way. In general the measuring instrument is not even dealt with at
all. What is a `measurement' is left largely unspecified, thus allowing, for instance, speculations like those induced by Schrödinger's cat to generate confusion.
By the `quantum mechanical problem of measurement' (to be distinguished from the so-called "measurement problem") I will understand the problem of obtaining knowledge about a microscopic object
by probing it with a measuring instrument that is sensitive to the microscopic information, and is able to amplify that information to macroscopically observable dimensions, making it possible to treat the human observer in the same way as in classical mechanics: he can be ignored.
• Pre-measurement
In a quantum mechanical measurement microscopic information is transferred from a microscopic object to a measuring instrument. As far as it is a microscopic process it belongs to the domain of
quantum mechanics; it is to be described by means of quantum mechanics. This part of the measurement process is referred to as pre-measurement.
It is not clear whether also the amplification process to the macroscopic level is `completely within the domain of application of quantum mechanics'. There is no a priori reason to believe that it is (unless one thinks that quantum mechanics is the theory of everything).
The pre-measurement phase is the crucial part of the measurement. For instance, in the Stern-Gerlach experiment the pre-measurement phase is the initial part in which an atom interacts with an
inhomogeneous magnetic field, thus establishing a correlation between its spin and its position. Amplification to macroscopic dimensions is realized by putting detectors in the outgoing beams,
thus ascertaining which of the outgoing beams the atom is in. Since `atomic position' can to a good approximation be treated in a semi-classical way, it is not unreasonable to assume that the
amplification may be viewed as a (semi-)classical process. In general this part of the measurement process is left undiscussed as being irrelevant to the understanding of quantum mechanics.
I shall follow this custom, being well aware that an important field is left open here.
• Quantum mechanical description of pre-measurement
In the pre-measurement phase of the `measurement of standard observable A (having eigenvalues a[m] and eigenvectors |a[m]>)' the interaction between object and measuring instrument is described
by a Schrödinger equation. We restrict ourselves here to a rather simplistic model in which, if the initial state of the object is given by |a[m]>, the measurement interaction realizes the transition
|a[m]> |θ[0]> → |ψ[m]> |θ[m]>,
in which |θ[0]> is the initial state of the measuring instrument and |θ[m]> are the so-called `pointer states', corresponding to the possible post-measurement positions of the pointer of the
measuring instrument (indicated by m in figures 1 and 2), and the states |ψ[m]> are normalized states of the microscopic object, determined by the specific properties of the interaction between
object and measuring instrument.
If the initial state of the object is given by |ψ> = ∑[m]c[m]|a[m]>, then, due to the validity of the superposition principle, the final state |Ψ[f]> of the system `object + measuring instrument'
is given by
|Ψ[f]> = ∑[m]c[m] |ψ[m]> |θ[m]>.
The pointer states are generally considered to be mutually orthogonal:
<θ[m]|θ[m′]> = δ[m,m′].
This is not unreasonable if such states are characterized by the `pointer's position being in a well-defined interval of its scale, observationally different from all other possible pointer positions'.
Although we can choose <ψ[m]|ψ[m]> = 1 (so as to have ∑[m] |c[m]|^2 = 1), in general there is no reason to think that the states |ψ[m]> should be mutually orthogonal too. In general they are not:
<ψ[m]|ψ[m′]> ≠ 0 for m ≠ m′.
• From the state |Ψ[f]> the quantum mechanical final state of the microscopic object is derived as the (reduced) density operator
ρ[of] = Tr[a] |Ψ[f]><Ψ[f]| = ∑[m] p[m] |ψ[m]><ψ[m]|, p[m] = |c[m]|^2,
where Tr[a] denotes taking the partial trace^0 over the degrees of freedom of the measuring instrument.
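A minimal numerical sketch of this pre-measurement model (a hypothetical two-outcome example; coefficients and states are chosen arbitrarily, with <ψ[0]|ψ[1]> ≠ 0) confirms the reduced object state:

```python
import numpy as np

c = np.array([0.6, 0.8])                               # coefficients c[m], sum |c[m]|^2 = 1
psi = [np.array([1.0, 0.0]),                           # |psi_0>
       np.array([1.0, 1.0]) / np.sqrt(2)]              # |psi_1>, not orthogonal to |psi_0>
theta = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # orthogonal pointer states |theta_m>

Psi_f = sum(c[m] * np.kron(psi[m], theta[m]) for m in range(2))
rho_f = np.outer(Psi_f, Psi_f)

rho_of = rho_f.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # Tr_a: trace out the pointer
expected = sum(c[m] ** 2 * np.outer(psi[m], psi[m]) for m in range(2))
print(np.allclose(rho_of, expected))                         # True: rho_of = sum_m p[m]|psi_m><psi_m|
```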
• The state vector |Ψ[f]> = ∑[m]c[m] |ψ[m]> |θ[m]>, being a superposition of product terms |ψ[m]> |θ[m]>, is an example of an entangled state. Due to the fact that its origin is of a typically
quantum mechanical nature (viz. the superposition principle), such states are particularly interesting.
• Measurements of the first and second kind
□ In the literature on the foundations of quantum mechanics attention is often restricted to so-called `measurements of the first kind' for which
|ψ[m]> = |a[m]>.
For a `measurement of the first kind' of standard observable A we obtain as the `(reduced) density operator of the final state of the microscopic object':
ρ[of] = Tr[a] |Ψ[f]> <Ψ[f]| = ∑[m]|c[m]|^2 |a[m]><a[m]|.
Note that this coincides with the von Neumann prescription of weak projection, which, hence, follows from the `quantum mechanical theory of measurement' under the assumption |ψ[m]> = |a[m]>.
Note also, however, that strong projection is not corroborated by it (compare).
For the `final state of the measuring instrument' we analogously find in case of `measurements of the first kind':
ρ[af] = Tr[o] |Ψ[f]> <Ψ[f]| = ∑[m]|c[m]|^2 |θ[m]><θ[m]|.
This expression is usually interpreted as a description of a von Neumann ensemble in which each element of the ensemble is supposed "to be in one of the states |θ[m]>".
□ Measurements for which |ψ[m]> ≠ |a[m]> are called `measurements of the second kind'. As seen here such measurements fail to satisfy `(weak) von Neumann projection'.
Determining the final state of the measuring instrument, an additional `deviation from first kind behaviour' is found, viz.
ρ[af] = Tr[o] |Ψ[f]> <Ψ[f]| = ∑[mm′]c[m]c[m′]^* <ψ[m′]|ψ[m]> |θ[m]><θ[m′]|,
implying that it is impossible to interpret the final state of the measuring instrument as a `von Neumann ensemble of pointers in states |θ[m]>' (compare).
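The surviving cross terms are exhibited by the same kind of hypothetical model (a sketch; <ψ[0]|ψ[1]> deliberately nonzero):

```python
import numpy as np

c = np.array([0.6, 0.8])
psi = [np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2)]   # <psi_0|psi_1> = 1/sqrt(2)
theta = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

Psi_f = sum(c[m] * np.kron(psi[m], theta[m]) for m in range(2))
rho_af = np.outer(Psi_f, Psi_f).reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # Tr_o
print(np.round(rho_af, 3))
# [[0.36  0.339]
#  [0.339 0.64 ]]: the off-diagonal element c[0]c[1]<psi_1|psi_0> survives; it would
# vanish only in the first-kind case <psi_m|psi_m'> = delta_mm'
```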
□ Probably, the restriction to `measurements of the first kind', to be observed in a large part of the literature on quantum mechanical measurement, is caused by these two `failures of
measurements of the second kind' to satisfy canonized rules. Yet, since many measurement procedures, applied in actual practice, turn out to be `measurements of the second kind', we do not
seem to be able in general to live up to this restriction. It therefore seems important to scrutinize the reasons why these `failures to satisfy canonized rules' are so often ignored in
discussions of the foundations of quantum mechanics by dealing with quantum measurement as if it has to be `of the first kind'.
□ At least three reasons for the above-mentioned restriction to `measurements of the first kind' can be distinguished:
☆ i) The first reason mainly derives from the Copenhagen interpretation, more particularly its embracement of von Neumann's projection postulate. Neglect of the projection postulate's
failure (in the sense that the postulate is satisfied by hardly any realistic measurement procedure, including widely used ones) is sometimes defended by pointing at the `theoretical
possibility of the existence of an "ideal" measurement procedure satisfying the postulate' next to, and equivalent with, the experimentally realized ones.
Although I do not know of any general proof of the non-existence of such "ideal" measurements^25, I do not think this to be a strong argument because there are at least two reasons why
`von Neumann projection' may be irrelevant to `quantum measurement'.
○ A first reason is the classical origin of the postulate, which, analogously to what is possible in classical physics, warrants an object `to possess a property after it has been observed'.
This holds good, for instance, for Schrödinger's cat, which will remain either dead or alive after having been observed to be so.
However, in general a measurement performed on a microscopic object will be more invasive than `looking at a cat'. It should be noted that in this respect the Compton-Simon
experiment, inspiring von Neumann to formulate his `projection postulate', is far from generic. There is no reason to believe that all measurements will have similar properties. For
instance, it is hard to imagine how an "ideal" version of the Stern-Gerlach measurement procedure would be possible, this latter procedure being effective only at the expense of not
satisfying von Neumann's projection postulate.
○ As a second reason to doubt the general relevance of von Neumann projection it should be mentioned that not the `final state of the object', but rather the `final state of the pointer of a measuring instrument' seems to be relevant when it comes to determining a measurement result. Hence, whereas application of the projection postulate to the measuring instrument might be justified, there is no reason to assume a similar behaviour of a microscopic object. The requirement that von Neumann projection be satisfied seems to be based on a confusion
of `preparation' and `measurement' that can be observed within the Copenhagen interpretation.
☆ ii) A more pressing reason to restrict oneself to `measurements of the first kind' might be derived from the result obtained for the final state ρ[af] of the measuring instrument. Only
for `measurements of the first kind' this state seems to have an unambiguous meaning in terms of the pointer states |θ[m]>, cross terms arising in case of a `measurement of the second
kind' (compare footnote 65). Whereas for `measurements of the first kind' explicit treatment of the measurement interaction seems to circumvent the problems posed by Schrödinger's cat
paradox (essentially caused by the `cross terms'), it is evident that this solution is not available for `measurements of the second kind'.
Even so, this does not need to be the end of `measurements of the second kind'. Ignoring here the so-called decoherence solution (assuming stochastic fluctuations of the environment to be
active so as to wash out the `cross terms' in ρ[af]), the problems encountered in contemplating `measurements of the second kind' nowadays can be viewed as signs that the notion of
`quantum measurement' needs to be generalized still further in order to encompass realistic measurement procedures within the microscopic domain. In fact, it has been realized that it is
necessary to introduce the notion of a generalized quantum mechanical observable, making obsolete any reference to `von Neumann projection' since no `orthonormal set of state vectors |a
[m]> or |θ[m]>' has to be referred to any longer.
The problem encountered in interpreting the state ρ[af] is just a precursor of the more general problems obtaining in realist interpretations of the quantum mechanical formalism. Indeed,
the density operator ρ[af] of a `measurement of the second kind' would not pose any problem in an instrumentalist interpretation.
☆ iii) A third reason may derive from the relative simplicity of the assumption of first-kindness, seemingly making it superfluous to determine what is really going on in a quantum
mechanical measurement, and allowing one to take part in the discussion on the foundations of quantum mechanics without having to invest too much energy. However, in view of their `red
herring'-like character the route via `measurements of the first kind' is not available. We shall have to look for a different solution to the "measurement problem", a main inspiration in
this endeavour being the observation that it is necessary to approach `quantum mechanical measurement' from a far more general point of view than is allowed by von Neumann projection (
compare). This regards both the mathematical formalism and its interpretation.
• The Heisenberg cut
In the amplification process, in which the information is amplified from the microscopic dimensions of the `microscopic object' to the macroscopic dimensions of `directly observable phenomena',
or even of a human observer's mind, there may exist some point after which the process can be considered to be macroscopic and describable classically. This point is called the Heisenberg cut.
This cut is positioned somewhere between the microscopic object and the observer's mind. It is often assumed that its precise position is arbitrary, and can be anywhere in this range (this has
even been "proven" by von Neumann by considering a chain of consecutive measurements connecting microscopic object and observer; however, in this "proof" von Neumann employed his projection
postulate, thus relying on a model of quantum measurement having a dubious applicability).
It seems to me that, although there is a certain arbitrariness with respect to the position of the cut, this arbitrariness is strongly limited both on the side of the object as well as on the
side of the observer. On the object side it should not be placed prior to completion of the pre-measurement phase, that is, the point where a correlation is established of the type described here.
It also does not make much sense to involve the human observer in an explicit way.
Nor does it seem to make sense to extend the applicability of quantum mechanics to the phase of `registration of pointer positions of measuring instruments within the macroscopic parts of the
physical apparatuses' (unless quantum mechanics is believed to be the `theory of everything').
For all these reasons the `Heisenberg cut' has largely lost its importance as a characteristic property of quantum measurement. I shall have to say something more on the relation between
microscopic object and observer here.
Conditional preparation
• Every quantum mechanical measurement has also a preparative aspect in the sense that it not only prepares the measuring instrument in some final state, but it does so also with the microscopic
object. In the final state
|Ψ[f]> = ∑[m]c[m]|ψ[m]> |θ[m]>
of the (pre-)measurement process this preparative aspect is represented by the state vectors |ψ[m]>. Such a state vector can be legitimately interpreted as describing the `state of the object,
conditional on measurement result m having been read off as the position of the measuring instrument's pointer'. Note that the state vectors |ψ[m]> are completely determined by the interaction
between object and measuring instrument. Only in very special cases it may occur that the vectors |ψ[m]> are equal to the eigenvectors of a standard observable. In that case von Neumann's
projection postulate may be satisfied. However, in actual experimental practice this is seldom, if ever, fulfilled (compare). In general the vectors |ψ[m]> are not even orthogonal.
• In a `conditional preparation by a measurement' the state vector undergoes a transition
|ψ> → |ψ[m]>,
generalizing von Neumann's projection^22. The necessity of taking |ψ[m]> as the post-measurement state conditional on measurement result m follows from the fact that a measurement of an arbitrary standard observable B (with eigenvectors |b[n]>) performed immediately after the first measurement yields the (conditional) probabilities
p(n|m) = |<b[n]|ψ[m]>|^2.
These conditional probabilities are derivable from the relation p(m,n) = p(n|m)p(m) with the joint probabilities p(m,n) of obtaining the pair of measurement results (m,b[n]) in a joint
measurement of the POVM {M[m]} and B in the state |Ψ[f]>.
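The factorization p(m,n) = p(n|m)p(m) is easily verified numerically (a sketch reusing the kind of hypothetical two-outcome model employed above; the observable B is chosen arbitrarily):

```python
import numpy as np

c = np.array([0.6, 0.8])
psi = [np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2)]   # conditional states |psi_m>
theta = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]              # pointer states
Psi_f = sum(c[m] * np.kron(psi[m], theta[m]) for m in range(2))

b = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]  # eigenvectors of B

for m in range(2):
    for n in range(2):
        joint = abs(np.kron(b[n], theta[m]) @ Psi_f) ** 2   # p(m,n) in |Psi_f>
        cond = abs(b[n] @ psi[m]) ** 2                      # p(n|m) = |<b[n]|psi[m]>|^2
        print(m, n, np.isclose(joint, c[m] ** 2 * cond))    # True: p(m,n) = p(n|m)p(m)
```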
• `Measurement' as a `preparation procedure'
In an individual-particle interpretation the discontinuous change |ψ> → |ψ[m]> of the state vector may cause a Schrödinger cat problem that can be tentatively solved by switching to a realist
ensemble interpretation in which the transition from |ψ> to |ψ[m]> is interpreted as a selection of a subensemble from the `total ensemble of microscopic objects prepared by the measurement, the
selection being carried out on the basis of an observation of measurement result (pointer position m)'.
Alternatively, and to be preferred from the point of view of an empiricist interpretation, it is possible to interpret the transition to |ψ[m]> as a transition to a different preparation
procedure, `observation of measurement result m' and `selection on the basis of this observation' being parts of this preparation procedure.
• figure 3
Although the transition |ψ> → |ψ[m]> fails as a universal measurement principle, in the generalized form given above it is a highly useful and practically applied `preparation principle' (sometimes referred to as a `preparative measurement' or a `filter'). That it is, nevertheless, so often presented as a feature of a measurement is a consequence of a very restrictive view of `measurement', in which the states |ψ[m]> correspond to outgoing beams that do not spatially overlap (see figure 3). In that case the measurement result m is determined by the beam the particle is found in after leaving the apparatus. In such experiments the final position of the microscopic object is taken as the pointer observable. Due to this very special choice of the pointer observable the preparation of the final state of the object is essential for the procedure to function as a measurement of a property of the initial state.
In general, however, `pointer observables' are quite different: they correspond to `properties of the measuring instrument' rather than to `properties of the microscopic object'.
Unfortunately, in foundational discussions this very restricted procedure (the Stern-Gerlach measurement of spin is approximately of this type) has become more or less paradigmatic of quantum
mechanical measurement. In particular, Heisenberg's interpretation of his `uncertainty relation' hinges on it, this relation being taken to be valid in the final state of the microscopic object,
and, allegedly, expressing the measure of disturbance of the microscopic object by the measurement. The Copenhagen confusion of `preparation' and `measurement' is largely a consequence of this
restricted view on quantum mechanical measurement. It has caused considerable confusion in the discussion on the foundations of quantum mechanics. This can only be resolved by means of a
careful description of quantum mechanical measurement processes.
• The quantum Zeno effect
One example of the above-mentioned confusion is the application of von Neumann projection to the so-called quantum Zeno effect, which allegedly has as a result that the object by continuous
observation is frozen in its initial state (``a watched pot never boils''). This can only happen if the interaction with the `measuring instrument that performs the observation' is such that it
has this freezing effect. This means that the interaction of object and measuring instrument must be sufficiently energetic. Thus, it will certainly not be possible to prevent a radioactive
nucleus from decaying by merely continuously looking at a 4π counter surrounding the nucleus.
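For reference, the idealized textbook calculation behind the ``watched pot'' claim can be sketched as follows; note that it assumes precisely what is criticized above, viz. effective von Neumann projection at every observation (the Hamiltonian σ[x] and the duration are arbitrary choices):

```python
import numpy as np

T = np.pi / 2        # free evolution under H = sigma_x would fully deplete the initial state
for N in (1, 10, 100, 1000):
    # survival probability after N projective measurements at intervals T/N:
    # (|<0|exp(-i sx T/N)|0>|^2)^N = cos(T/N)^(2N)
    print(N, np.cos(T / N) ** (2 * N))   # ~0, 0.78, 0.976, 0.998, ... -> 1: "frozen"
```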
Interpretations of quantum mechanics
• `Interpretation' as a `mapping'
Since `quantum mechanics' is a physical theory it is different from the `reality it purports to describe'. Therefore we need an interpretation establishing a relation between the two. I will use
the notion of `interpretation' in the following sense:
An interpretation of a physical theory is a mapping from its mathematical formalism into the physical world.
There are different possibilities for interpreting the quantum mechanical formalism. In the following two of these will play an important role, viz. (cf. figure)
a: the realist interpretation in which the quantum mechanical formalism is assumed to be mapped into the `microscopic reality of the atomic and subatomic world';
b: the empiricist interpretation in which the quantum mechanical formalism is thought to be mapped into the `macroscopic world of phenomena caused by microscopic objects (like flashes on a
fluorescent screen, or clicks of a Geiger counter)'.
Like the mathematical formalism itself, its interpretation cannot be derived. An interpretation is neither true nor false. It is either useful or less useful for establishing a correspondence
between the mathematics of the theoretical formalism and the physics of the world we are dealing with. For several reasons, to be discussed in the following, the realist interpretation (which is
the usual textbook interpretation) turns out to be less useful than the empiricist one.
• Prejudice
In my view quantum mechanics is an ordinary physical theory. I do not think that its domain of application encompasses notions like the `human mind', `human consciousness', `free will', or the
`universe as a whole' (although, of course, quantum mechanics certainly is important for cosmology as far as the behaviour of microscopic objects is important to it). Attempts to apply quantum
mechanics to such notions are largely ignored in the following. I also do not think that quantum mechanics can be understood on the basis of any influence exerted by a human observer on a
microscopic (let alone macroscopic) object by just looking at it (compare Schrödinger's cat).
Realist versus empiricist interpretation of quantum mechanics
• Realist interpretations of quantum mechanics
figure 1 In a realist interpretation of quantum mechanics the interpretational mapping is assumed to be from the mathematical formalism into microscopic reality^35. In this interpretation the
mathematical entities of the theory (state vector |ψ>, density operator ρ, standard observable A, and generalized observable {M[m]}) are assumed to represent `properties of the microscopic
object'. Thus, the quantum mechanical momentum operator is assumed to describe the `physical momentum of an electron' much in the same way as the classical quantity mv is assumed to describe the
`physical momentum of a billiard ball'. `Instruments of preparation, used to prepare the microscopic object in an initial state', as well as `measuring instruments', are not represented within
the quantum mechanical description, even if they are physically present. This is symbolized in figure 1 by dashing the preparing and measuring apparatuses.
□ Objectivistic-realist versus contextualistic-realist interpretations of quantum mechanics
We should distinguish `objectivistic and contextualistic versions of a realist interpretation of quantum mechanics'. In an `objectivistic-realist interpretation' the quantum mechanical
description is assumed to refer to `objective reality', that is, a `reality independent of any observer, including his measuring instruments'. In a `contextualistic-realist interpretation'
quantum mechanical concepts are assumed to have a meaning only within a certain physical context (like the object's environment, or a measurement arrangement).
□ Objectivistic-realist versus contextualistic-realist interpretations in classical physics
Classical theories are usually interpreted in an objectivistic-realist sense. In general, however, it is not possible to do so in a rigorous sense. For instance, due to its atomic
constitution, and the possible vibrations of its atoms, a billiard ball is not a rigid body (as assumed in the classical theory of rigid bodies). At most a billiard ball satisfies
`classical rigid body theory' in an approximate sense, and only within certain contexts (e.g. of experiments allowing the ball to `behave as if it were a rigid body'). Hence, as far as
`classical rigid body theory' can be interpreted realistically, it at most allows a contextualistic-realist interpretation: if the ball is hit so hard that the atoms are set in violent
vibrational motion, then `classical rigid body theory' is not applicable any more.
Nevertheless, it would be pedantic to deny a billiard ball its property of `rigidity' within experimental contexts in which atomic vibrations, although existing, are unobservable.
□ Different implementations of a `realist interpretation of quantum mechanics'
In a `realist interpretation of quantum mechanics' the probabilities p[m] are thought to refer to `properties of the microscopic object' (represented by the measurement results a[m]), being
possessed by the object either before the measurement (Einstein's objectivistic realism), during the measurement (Bohr's contextualistic realism) or after the measurement (Heisenberg's
"empiricism"). It turns out that Einstein's proposal is impossible, that Bohr's one is too vague to prevent inconsistencies and unjustified applications, and that Heisenberg's one is
confounding preparative and determinative aspects of measurement.
• Empiricist interpretation of quantum mechanics
□ figure 2 In the empiricist interpretation the interpretational mapping of |ψ>, ρ, A, and {M[m]} is assumed to be from the mathematical formalism into the `macroscopic reality of sources for
preparing microscopic objects and instruments for performing measurements': `phenomena of preparation and of measurement' are assumed to correspond to `knob settings of preparing
instruments', and to `pointer positions of measuring instruments', respectively^91.
Thus, wave functions and density operators are assumed to be `symbolic representations (labels) of preparation procedures' (for instance, referring to a cyclotron with specified knob settings
as a preparing instrument of a beam of particles; or to less specified natural processes like the preparation of an electromagnetic field by the sun).
A quantum mechanical observable is assumed to be `a label of a measuring instrument' (for instance, a photosensitive device for detecting photons) or measurement procedure.
Even though the microscopic object is present, in the empiricist interpretation it is assumed not to be represented by the quantum mechanical formalism (like atomic vibrations are not
represented within the classical theory of rigid bodies). This is symbolized in figure 2 by dashing the object. In the `empiricist interpretation' the quantum mechanical formalism is assumed
to describe `just the phenomena', phenomena being located within the macroscopic sources for preparing microscopic objects and within the instruments for measuring their properties.
In the `empiricist interpretation' the probabilities p[m] are thought to refer to `properties of the measuring instrument (pointer positions)' rather than to `properties of the microscopic object'.
□ The `empiricist interpretation of quantum mechanics' should be distinguished from Heisenberg's empiricism, since with Heisenberg a measurement result is thought to correspond to a `property
of the microscopic object', made observable in its post-measurement state. Thus, with Heisenberg the meaning of a flash on a fluorescent screen is the `manifestation of a microscopic particle
at a certain position on the screen' rather than a `reaction of that screen to the impact of a microscopic particle'. Although in `such position measurements' Heisenberg's interpretation may be acceptable because both alternatives may be correct, its scope is too restricted: in general the relation between a microscopic object and the final pointer position of the measuring
instrument is much more remote. It even is possible that the microscopic object is annihilated in the process of measurement, and, hence, does not have a final value of any observable at all.
Heisenberg's "empiricism" is closely related to von Neumann's projection postulate, and meets similar objections. Like the latter it cannot be extended to more general measurement procedures.
□ The empiricist interpretation considers quantum mechanics to have executed, although often inadvertently, Mach's methodological imperative of `just describing the phenomena'. Note, however,
that the `phenomena' considered here are different from the `sensations in human perception' dealt with by Mach^0.
In a sense the `empiricist interpretation' is a realist one too, since it is a mapping of the mathematical formalism into physical reality, be it into the `macroscopic (directly observable)
part of that reality' rather than the microscopic one. `Pointer positions of measuring instruments' can be interpreted as `properties of the pointer' analogously to the sense in which
`rigidity' can be attributed to a billiard ball. It is important to note here, that such a realist view of the empiricist interpretation of quantum mechanics requires the measuring instrument
to be explicitly represented within the quantum mechanical description (compare).
□ Notwithstanding similarities, the `empiricist interpretation of quantum mechanics' should be carefully distinguished from the empiricist views as fostered by the philosophical doctrine of
logical positivism/empiricism, the latter considering metaphysical anything that is unobservable. An empiricist interpretation of quantum mechanics is perfectly consistent with a belief in
the "real" existence of microscopic objects like atoms and electrons (`reality behind the phenomena'), even though these are not directly observed. The relevant point is that, according to
the empiricist interpretation, quantum mechanics does not describe these objects, but it merely describes the `relations between the preparing and measuring procedures mediated by them'. Far
from embracing an anti-realist philosophy denying any `reality behind the phenomena', the empiricist interpretation of quantum mechanics accepts the possibility of such a reality analogously
to the way it accepts the reality of atoms within a billiard ball.
□ In order to describe the `microscopic objects themselves', new (subquantum) theories have to be developed, quite analogously to the necessity of developing atomic solid state theories as subtheories to `classical rigid body theory'. It should be noted that by interpreting quantum mechanics as a description of the `macroscopic reality of the phenomena' the `empiricist
interpretation of quantum mechanics' leaves considerably more room for subquantum theories than is left by a `realist interpretation of quantum mechanics'. Indeed, the `empiricist
interpretation of quantum mechanics' is consistent with incompleteness in the wider sense. This holds true in an analogous way for `classical rigid body theory', which, if interpreted in a
realist sense (rather than an empiricist one), would yield a rather absurd model of a billiard ball as a `closely packed configuration of rigid atoms', but in the empiricist interpretation of
that theory allowing sub-rigid body theories to yield more appropriate descriptions of the constituting atoms.
□ The `empiricist interpretation' should also be distinguished from the Copenhagen interpretation, which, although having an empiricist reputation, actually contains many realist elements
stemming from an inclination to view quantum mechanics as a description of `microscopic reality itself' (for instance, von Neumann's projection postulate). Nevertheless, the `empiricist
interpretation' is indebted to the Copenhagen one by taking seriously the importance attributed to the role of the measuring instrument in assessing the meaning of the quantum mechanical
formalism (compare), be it without neglecting the `distinction between the realities of microscopic object and measuring instrument' while being engaged with quantum mechanical measurement.
□ The `empiricist interpretation' in a natural way takes into account the distinction between `preparation' and `measurement', too often ignored within quantum mechanics. On the basis of this
distinction it is possible to account in a trivial way for the equivalence of the Schrödinger and Heisenberg pictures. Thus,
Schrödinger picture: Prepare ρ(t) by preparing ρ and then waiting time t before applying measurement procedure A;
Heisenberg picture: Prepare ρ, and simultaneously apply measurement procedure A(t), where A(t) is defined as `wait after the preparation ρ during a time t before applying measurement
procedure A'.
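A numerical check of this equivalence (a sketch with arbitrarily generated Hermitian H and A and an arbitrary initial state; it verifies Tr ρ(t)A = Tr ρA(t)):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(3, 3)); H = H + H.T            # an arbitrary Hermitian Hamiltonian
A = rng.normal(size=(3, 3)); A = A + A.T            # an arbitrary standard observable

w, V = np.linalg.eigh(H)
t = 1.3
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # U(t) = exp(-iHt), hbar = 1

psi = np.array([1.0, 0.0, 0.0])
rho = np.outer(psi, psi)
schroedinger = np.trace(U @ rho @ U.conj().T @ A)   # prepare rho, wait t, measure A
heisenberg = np.trace(rho @ U.conj().T @ A @ U)     # prepare rho, measure A(t) = U+ A U
print(np.isclose(schroedinger, heisenberg))         # True: both pictures agree
```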
• Reasons to prefer a realist interpretation of quantum mechanics
□ Ontological reason:
☆ Quantum reality
According to our observations there seems to be a fundamental difference between macroscopic and microscopic reality, the latter being experienced as "strange and paradoxical". Such
"strangeness" is incorporated into the quantum mechanical formalism (entanglement, incompatibility), which, for this reason, might be thought to yield a description of a `microscopic
quantum reality'.
Such strangeness of microscopic reality has an appeal of its own, lacking in the empiricist interpretation. It is widely felt that such a strange quantum reality may open up new vistas
for understanding, and for future applications of quantum mechanics (quantum computing, quantum information).
It also is attractive to dispose of all problems related to `quantum mechanical measurement' by adopting a `no nonsense attitude' in which these problems are simply ignored, and attributed to the strangeness of an `objective reality'.
□ Epistemological reasons:
☆ The classical paradigm
Classical mechanics is generally thought to be successfully interpretable in a realist sense (however, compare). From this point of view it is not unreasonable to try to analogously
interpret quantum mechanics in a realist way (just changing the notions of state, physical quantity, as well as the equation governing time evolution, but maintaining as much as possible
the classical way of thinking). Hence, the classical paradigm favours a `realist interpretation of quantum mechanics'. Indeed, in their fundamental discussion on the meaning of quantum
mechanics both Bohr and Einstein entertained a `realist interpretation of quantum mechanical observables' (be it that Einstein's interpretation was objectivistic-realist, whereas Bohr's
can best be characterized as a contextualistic-realist one).
☆ Platonism
In Plato's allegory of the cave it is realized that our knowledge about reality may be compared with the knowledge of cave dwellers who are able to look only into the inward direction of
the cave, and who obtain knowledge about what is happening outside the cave only by means of the shadows cast by the objects of the outside world on the cave's inside wall. The upshot is
that what we see is just an imperfect image of reality. The difference between a realist and an empiricist interpretation of quantum mechanics can be characterized in terms of this
allegory by asking whether the theory is describing either the real (outside) quantum world, or just its shadows, the "phenomena". Plato would consider unscientific any theory aspiring to
`just describe the shadows'. It would be in agreement with Plato to assume that quantum mechanics could only derive its scientific status from the deep insights it gives with respect to
the constitution of microscopic reality. Platonism favours a `realist interpretation of physical theories'.
□ Methodological reason:
☆ Inference to the best explanation
The hypothesis that `reality is like quantum mechanics says it is' is often assumed to provide the best explanation of the success of quantum mechanics. Here `reality' means the `reality
of the microscopic object itself' (rather than `just the phenomena'). It is felt that an empiricist interpretation is overestimating the role of measurement and is renouncing too much the
possibilities of quantum mechanics as an explanatory device (for instance, atoms are felt to be stable objects because they satisfy quantum mechanics; it would be absurd to assume that
atoms are stable or unstable as a consequence of measurements that are performed on them: ``To restrict quantum mechanics to be exclusively about piddling laboratory operations is to
betray the great enterprise^85.'') Einstein too criticized the Copenhagen interpretation for resigning itself to contextualism, and for abandoning the requirement that a physical theory describe
`objective reality' rather than a `reality that is interacting with a measuring instrument'.
• Reasons to prefer an empiricist interpretation of quantum mechanics
□ Ontological reason:
☆ Experimental data are exclusively `pointer positions of measuring instruments'
The main reason to prefer an `empiricist interpretation of quantum mechanics' is that any experimental test of quantum mechanics compares the probability distributions of quantum
mechanics with experimental data (relative frequencies) obtained by letting a microscopic object interact with a measuring instrument and subsequently observing a phenomenon corresponding
to the `state the measuring instrument is in' (so-called `pointer readings'). We expect the pointer position to tell us something about the microscopic object, but we should be aware that
a `pointer reading of a macroscopic measuring instrument' is not a direct observation of a `property of the microscopic object'.
Distinguishing between `pointer readings' and `properties of the microscopic object' solves one of the ontological riddles of quantum mechanics, namely `that it would be impossible to
attribute the measurement result to the microscopic object' as a `property possessed already before the measurement'. For instance, in a position measurement it allegedly would be
impossible to infer from a click of the particle detector that this is a consequence of the particle being there immediately before the measurement event (Jordan: ``By measuring we force
the electron to take up a particular position; previously it was neither here nor there; it had not yet decided about taking up any particular position.''). Within the `realist
interpretation of quantum mechanics' this is corroborated by the inapplicability of the `possessed values principle': if it is a property of the microscopic object at all, a quantum
mechanical measurement result is an `emergent property'. Nevertheless, the Copenhagen idea that `microscopic objects do not have properties (like position) unless such a property is
measured', is rather counterintuitive. Not many experimenters will be prepared to deny the causal relationship between a `click of a Geiger counter' and the `objective presence of a
microscopic particle at the position of that counter'.
In the `empiricist interpretation' the problem of possessed values of quantum mechanical observables simply does not arise because it is completely natural that the measurement result
(the post-measurement pointer position) comes into being only during the measurement, and, hence, cannot be attributed to the microscopic object as a `property possessed prior to
measurement'. According to the empiricist interpretation such a property is not even described by quantum mechanics; it requires a subquantum theory for its description. From this
perspective there is no reason to doubt that an electron had a well-defined position prior to measurement, be it that quantum mechanics does not tell us all about it: quantum mechanics
only contains certain (statistical) information about the reaction of a measuring instrument that is brought into interaction with the microscopic object.
□ Epistemological reason:
☆ Analogous cases
Consider e.g. `classical rigid body theory', which, as argued here, does not allow an `objectivistic-realist interpretation', although a contextualistic-realist one might be feasible
(though improbable). However, on closer scrutiny it turns out that `classical rigid body theory' is better interpreted in an empiricist than in a contextualistic-realist way.
Indeed, due to its atomic constitution there does not exist any physical context in which a billiard ball is really a rigid body, even if no deviation from rigidity is observed.
`Classical rigid body theory' just describes the `phenomena within its domain of application' even if there are atomic vibrations, provided these are imperceptible on the macroscopic
scale. Within the domain of application of `classical rigid body theory' the atomic constitution may become important if e.g. optical measurements are performed.
Another analogy is provided by thermodynamics, which theory describes `thermal phenomena within its domain of application' (determined by the requirement that a condition of molecular
chaos^0 be satisfied, which here defines the context). Here, too, the common usage of considering `temperature' as a property of an object (rather than as a pointer reading on a
thermometer scale) is questionable (thus, if temperature were objectively defined by kinetic energy, a way to get one's tea water boiling would be to drop one's teapot from a sufficient
height). Evidently, even a contextualistic-realist interpretation of thermodynamics has its problems, which can be solved by an empiricist one (doubtless, a thermometer dropped together
with the object will not show the increase of temperature referred to above).
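The absurdity of the teapot example can be made quantitative by a back-of-the-envelope estimate. The following sketch is my own illustration (the constants are standard textbook values, not taken from the original text): it computes the drop height required if the kinetic energy acquired in free fall were fully converted into heat of the water.

```python
# Hedged back-of-the-envelope estimate: from what height would a teapot
# have to be dropped so that its kinetic energy on impact, if converted
# entirely into heat, raised the water temperature from 20 C to 100 C?
c_water = 4186.0  # specific heat of water in J/(kg*K)
g = 9.81          # gravitational acceleration in m/s^2
delta_T = 80.0    # required temperature rise in K

# Energy balance per unit mass of water: g * h = c_water * delta_T
h = c_water * delta_T / g
print(f"required drop height: {h / 1000:.1f} km")  # about 34 km
```

The estimate underlines how far the identification of `temperature' with kinetic energy of the centre of mass is removed from the way `temperature' actually functions as a pointer reading on a thermometer scale.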
In the following it will be seen that similar problems exist for the interpretation of quantum mechanics^84: also here a contextualistic-realist interpretation, as involved in the
Copenhagen ideas of correspondence and complementarity, is not sufficient to solve all interpretational problems; an `empiricist interpretation' is better suited for that purpose.
□ Methodological reasons:
☆ Evading the paradoxes of a realist interpretation
The failure of the possessed values principle is one of the many problems and paradoxes that plague the realist interpretation (for instance, particle-wave duality/complementarity, the
"measurement problem", EPR nonlocality). Evading these paradoxes is an important reason to prefer the `empiricist interpretation', in which they do not arise.
`Realist interpretations' are inexhaustible sources of speculative ideas about microscopic reality. It is questionable whether all these ideas, although often very exciting, are equally
fruitful. Some of these are rather reminiscent of ideas about the world aether, made obsolete by the (empiricist!) development of relativity theory. Note that EPR nonlocality is as nonempirical today as the world aether was 100 years ago.
As another possibility to evade paradox it has been attempted to simply assume the nonexistence of certain troublesome elements. For instance, in Mermin's `correlation without correlata' it is assumed that quantum correlations can be conceived without assuming the existence of `entities that are correlated'. However, in quantum mechanical measurements the correlata are evident (viz. the pointer positions of measuring instruments); this could be overlooked only if the distinction between EPR and EPR-Bell experiments is ignored. In the `billiard ball analogy' Mermin's assumption would be analogous to the idea that it would be impossible to explain the `correlation observed to exist between the positions of different points of the ball when it is moving as a rigid body' by the `tight binding of the positions of the different individual atoms caused by the interatomic forces between neighbouring atoms'.
Sticking to an `empiricist interpretation of quantum mechanics' is a way to evade being distracted from our `experimentally relevant observations of pointer positions' toward an imagined
world of properties attributed to microscopic reality on the basis of the `peculiarities of the mathematical formalism of quantum mechanics'.
☆ Escaping from suggestions implied by the standard formalism
The applicability of the generalized formalism to quantum mechanical measurements is an important indication of the usefulness of an empiricist interpretation. As a matter of fact,
experimental data are never crucially dependent on the (eigen)values a[m] of observables because the data refer to `pointer positions of a measuring instrument', the labeling of which is
man-made, and, for this reason, rather arbitrary^3 (in the generalized formalism this is symbolized by denoting labels by m rather than by a[m]). Generalized observables (represented by
POVMs) are independent of the precise way measurement results are labeled. For generalized observables the probability distributions are only dependent on m, not on a[m] (there is no
empirically relevant operator ∑[m] a[m] M[m] to provide eigenvalues).
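This label independence can be made explicit in a few lines. The following is a minimal numerical sketch of my own (the Born rule p[m] = Tr(ρM[m]) is standard; the particular qubit state, POVM, and labels are arbitrary choices for illustration):

```python
import numpy as np

# A qubit state and its density operator rho = |psi><psi|
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])
rho = np.outer(psi, psi.conj())

# A two-outcome POVM (here simply the projectors of a sigma_z measurement)
M = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# Born rule: the probabilities depend only on the label m, via M[m]
p = [np.trace(rho @ M_m).real for M_m in M]
print("p[m] =", p)  # [0.7, 0.3], whatever names we give the outcomes

# Relabeling the outcomes changes the operator sum_m a[m] M[m] and hence
# the 'expectation value', but leaves the probabilities p[m] untouched
for a in ([+1.0, -1.0], [8.0, 3.0]):
    A = sum(a_m * M_m for a_m, M_m in zip(a, M))
    print("labels", a, "-> mean", np.trace(rho @ A).real)
```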
For relations that do depend on the (eigen)values (like the Heisenberg-Kennard-Robertson inequality or the Bell inequality) there exist alternatives which are independent of these values,
and which yet have essentially the same physical meaning (e.g. entropic uncertainty relation and BCHS inequality, respectively).
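As an illustration of such a value-independent alternative, the entropic uncertainty relation may be written in the Maassen-Uffink form (stated here from general knowledge as a sketch, not quoted from the original text; A and B are non-degenerate standard observables with eigenvectors |a_i> and |b_j>):

```latex
H(A) + H(B) \;\geq\; -2\,\ln\Bigl(\max_{i,j}\,|\langle a_i|b_j\rangle|\Bigr),
\qquad
H(A) = -\sum_m p_m \ln p_m .
```

Since the Shannon entropies H(A) and H(B) depend only on the probabilities p[m], any relabeling of the (eigen)values a[m] leaves the relation unchanged, in contrast to the standard deviations entering the Heisenberg-Kennard-Robertson inequality.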
☆ Acknowledging the essential role of measurement
In a `realist interpretation of quantum mechanics' there is a tendency to ignore the role played by `measurement' in assessing the meaning of the quantum mechanical formalism. This role
was stressed by the founding fathers of quantum mechanics (compare the Copenhagen interpretation), though without evoking much response in textbooks of quantum mechanics, in which the
theory is generally presented in an objectivistic-realist way, and the measuring instrument is completely left out of consideration.
One reason for this negligence may be the fact that initially `measurement' was treated in a halfhearted way, the measuring instrument not being treated as a quantum mechanical object
dynamically interacting with the microscopic object, but rather as just providing a `context for the microscopic object to contract certain properties'. By restricting themselves to a `
contextualistic-realist interpretation of the quantum mechanical formalism' the founding fathers only went part of the way towards acknowledging the `crucial role played by measurement in
quantum mechanics'. Restriction to the standard formalism (including von Neumann projection) contributed to the possibility of `avoiding measurements that could experimentally demonstrate
the crucial influence of the interaction with the measuring instrument' (compare).
In an empiricist interpretation the role of measurement is given full credit. By exploiting the generalized formalism of quantum mechanics it can be seen that the Copenhagen emphasis on
`measurement' was not without a physical basis. Presumably it is not very wise to ignore the role of measurement (as well as preparation) in accounting for the strangeness of quantum
mechanics. By attributing `properties of the quantum mechanical formalism' to `reality behind the phenomena' rather than to `preparations and measurements' we run the risk of fooling
ourselves by mistaking, as in Plato's allegory of the cave, the shadows for reality. Contrary to Platonist contention, according to the `empiricist interpretation' quantum mechanics is
dealing with the shadows only.
Realist versus instrumentalist interpretation of quantum mechanics
• In the literature on the philosophy of quantum mechanics the choice is often between a realist or an instrumentalist interpretation rather than between a realist or an empiricist one.
• In an instrumentalist interpretation^0 the mathematical formalism of quantum mechanics is considered as "just an instrument for calculating measurement results". No physical meaning is attributed
to either state vector or observable. An `instrumentalist interpretation' is in a pragmatic way directed towards the `mathematical representation of phenomena', not asking too many questions that
might be induced by the strangeness of the new quantum phenomena (as compared to those of classical physics).
• Remarks on the `instrumentalist interpretation'
□ In the past instrumentalism has been an acceptable way to circumvent the problems and paradoxes raised by a `realist interpretation'. Thus, Bohr entertained an `instrumentalist view with
respect to the quantum mechanical state vector or wave function' (``There is no quantum world.'')^8. In certain versions of the Copenhagen interpretation this instrumentalism is even extended
to the whole mathematical formalism.
Within the `context of discovery' this may have been a fruitful attitude because it prevented asking `questions with respect to microscopic reality' that at that moment could not (yet) be
answered ("Shut up and calculate").
On the other hand, within the `context of justification' we are interested in here, `instrumentalism' may become a drawback. Indeed, an `instrumentalist interpretation of quantum mechanics'
suffers from too large a vagueness, thus causing confusion. For instance, in the `instrumentalist interpretation' it is left undecided whether the quantum mechanical term `measurement result'
is referring to a `property of the microscopic object' or to a `pointer position of a measuring instrument' (compare). This ambiguity has played an important role in the EPR discussion by failing to prevent the confounding of EPR and EPR-Bell experiments.
□ At least in Bohr's view, the Copenhagen interpretation is instrumentalist with respect to the state vector. Unfortunately, in many textbooks of quantum mechanics the way the state vector is
dealt with can hardly be distinguished from a realist one, even when adherence to the Copenhagen (orthodox) interpretation is acknowledged. The paradoxes of quantum mechanics are mainly
caused by these realist tendencies; they might be avoided by a more strict instrumentalism. Thus, a `strictly instrumentalist version of the Copenhagen interpretation' would not be liable to
the paradoxes stemming from von Neumann's projection postulate, which would not have the undesirable features it may have in a `realist interpretation' (like, for instance, EPR nonlocality).
□ Empiricist versus instrumentalist interpretation
The empiricist interpretation is sometimes confounded with the `instrumentalist interpretation' because both interpretations are shunning assertions with respect to `reality behind the
phenomena'. However, there are two differences between these interpretations:
i) whereas the `instrumentalist interpretation' does not attribute a physical meaning to the state vector at all, the `empiricist interpretation' looks upon it as referring to a preparation procedure;
ii) whereas in the `instrumentalist interpretation' it is left unspecified whether a `quantum mechanical measurement result a[m]' is referring to a `property of the microscopic object (suitably amplified)' or to a `pointer of a measuring instrument', the `empiricist interpretation' makes an unambiguous choice.
Both the empiricist and realist interpretations remedy the vagueness of the `instrumentalist interpretation' by specifying in an unequivocal way a physical reference for each term of the
`mathematical formalism of quantum mechanics'; this is done in different ways, however, the `empiricist interpretation' strengthening the empiricist tendencies taken up in a rather
half-hearted way by instrumentalism, whereas realist tendencies probably stem from a classical paradigm.
□ By not attributing `quantum mechanical concepts' as properties to the microscopic object but rather to the measuring instrument an `empiricist interpretation of quantum mechanics' is gaining
the same advantage as an `instrumentalist interpretation' in dealing with the problems and paradoxes of quantum mechanics. This is achieved without introducing any of the confusing vagueness
characterizing the `instrumentalist interpretation'. Therefore in my opinion the `instrumentalist interpretation of quantum mechanics' is obsolete by now.
It is possible to upgrade the `Copenhagen interpretation' in a consistent way from its `instrumentalist/realist make-up' to an `empiricist interpretation' liable to be called a neo-Copenhagen
interpretation (cf. Publ. 53).
`Objectivity versus subjectivity' in quantum mechanics
• I restrict myself here to the situation where the choice has been made to accept quantum mechanics as the theory describing the `pre-measurement phase of measurement within the microscopic
domain'. Hence, I ignore a possible subjectivity inherent in theory choice, being particularly important because of theory-ladenness of measurement/observation. The possibility to do so stems
from a structuralist view of physical theories (cf. footnote 30), in which each physical theory turns out to have a restricted `domain of application'.
What I cannot ignore is the difference with respect to the issue of `objectivity versus subjectivity' stemming from different interpretations of the mathematical formalism of quantum mechanics.
We shall have to deal with two different dichotomies with respect to choices of interpretations, viz. the `individual-particle interpretation' versus `ensemble interpretation' and `realist
interpretation' versus `empiricist interpretation' dichotomies.
• Epistemological versus ontological aspects of `objectivity versus subjectivity'
□ The issue of `objectivity versus subjectivity' is an epistemological one in the first place, since it refers to our knowledge: either our knowledge about some object may be `objective
knowledge', describing the object `as it exists independently of the knower', or knowledge may be coloured by the knower's subjective beliefs (possibly based on having more information than
another knower).
□ Objectivistic versus subjectivistic ontologies
However, as far as our knowledge is about physical reality, the dichotomy is liable to obtain an ontological significance as well. Restricting ourselves to the `knowledge encompassed by
quantum mechanics', it is here that `interpretations' come in because they specify the object the knowledge is referring to: either an `individual object' or an `ensemble' (compare the
dichotomy of `individual-particle interpretation' versus `ensemble interpretation'); either the `microscopic object' or the `preparing and measuring instruments/procedures' (compare the
dichotomy of `realist interpretation' versus `empiricist interpretation').
The corresponding reality may either be thought to be independent of our knowledge (thus corresponding to an `objectivistic ontology assuming existence of an objective quantum reality'); or
the reality may be thought to depend on our knowledge (either in the interactional way implicit in von Neumann's projection postulate according to which an observation by an observer is
causing the state of the microscopic object to change, or in a relational way, compare). The latter views may even entertain a subjectivistic ontology. Proponents of such a subjectivistic
ontology are, e.g., London and Bauer, Wigner, but some traces can also be found with Bohr and Heisenberg. An early adversary was Schrödinger, who preferred an `objectivistic ontology' not
referring to a human observer^90.
□ Reluctance to accept a subjectivistic ontology has caused physicists like Einstein to stress the epistemological nature of this subjectivity (implemented by an ensemble interpretation of the
quantum mechanical state vector). We, indeed, do not have any reason to believe, as suggested by von Neumann's projection postulate, that a human observer is able to influence physical
reality by merely looking.
The idea of a `subjectivistic ontology' is made obsolete by realizing that the human observer is as dispensable within quantum mechanics as he is within classical mechanics, and that
`observation within the quantum domain' -currently no longer being based on human perception- should meet a requirement of intersubjectivity quite analogous to the way it is required
within the macroscopic domain. In particular, Schrödinger's cat paradox, allegedly demonstrating the influence of an `individual observation' on an `individual cat's state', should preferably
be discussed in an intersubjectivist vein, viz. as a `problem of quantum mechanical measurement' rather than as a `means to kill an individual cat by just observing it'. Unfortunately,
`intersubjectivity' is often referred to as `objectivity', thus blurring the distinction between the ontological and epistemological meanings the concept of `objectivity' may have.
□ From `ontological objectivity' to `non-contextuality'
There is still another way in which the ontological notion of `objectivity' may arouse ambiguity, viz. if not the `relation of the microscopic object to the observer' is considered, but rather its `relation to other objects in its environment', in particular to the measurement arrangement. It might be recommendable within this context to refer to `ontological objectivity' as non-contextuality^82, thus stressing that the important point is `whether or not the microscopic object can be regarded as an isolated object, independent of
preparing and measuring instruments that are present in its environment', more particularly, whether or not `measurement results obtained during the measurement phase' can be attributed to
the microscopic object as `objective/non-contextual properties possessed prior to and independent of the measurement'.^67 The crucial distinction between classical and quantum physics is the
`fundamental ontological dependence of the microscopic object on the measurement context whenever a measurement on it is carried out', such a dependence on the measurement context being
assumed to be irrelevant in a measurement on a macroscopic object.
□ Objectivity, (inter)subjectivity, and contextuality in `realist interpretations of quantum mechanics'
The roles of objectivity, (inter)subjectivity, and contextuality in quantum measurement as encountered in `realist interpretations of quantum mechanics' are symbolized in the following table
illustrating the difference between `observation (macroscopic interface)' and `measurement (microscopic interface)' (compare).
In this picture the Heisenberg cut is split into two interfaces, made necessary in order to explicitly take into account the role of `quantum measurement' as opposed to `human observation'.
As a result of the classical paradigm this difference is underestimated in `realist interpretations of quantum mechanics'. In these interpretations the measuring instrument is completely
ignored in the quantum mechanical description, and `measurement results' (referred to as `quantum phenomena' or `measurement phenomena') are treated as properties of the microscopic object,
possessed either before (Einstein), during (Bohr), or after (Heisenberg) the measurement. As a result features that are easily unraveled in the `empiricist interpretation' (compare) are hard
to distinguish.
Apart from the identification of `measurement results' and `properties of the microscopic object' the issue is obscured by the possibility of maintaining either an `individual-particle
interpretation' or an `ensemble interpretation' of the quantum mechanical formalism.
In the present discussion I have taken into account the `impossibility of non-contextuality of the microscopic interface' as argued here.
□ Remarks on `Objectivity, (inter)subjectivity, and contextuality in realist interpretations'
☆ Macroscopic interface
In `realist interpretations of quantum mechanics' a quantum mechanical measurement result corresponds to a `macroscopic measurement phenomenon (e.g. a track in a Wilson chamber)', its
interface with the human observer being in the macroscopic domain. The `measurement phenomenon' is subjected to the same objectivity/intersubjectivity as is any other macroscopic
phenomenon. We therefore may assume for the `measurement phenomenon' the same `ontological independence from the observer (non-contextuality)' as is usual in classical physics.
The macroscopic interface may be `subjective in an epistemological sense' because different observers may have different additional knowledge (as, for instance, may be the case in an EPR
experiment where Alice may or may not send to Bob information about her measurement results). This classical feature has not been taken into account in the above table because after both
Alice and Bob have set up their measurement arrangements, their measurement results are as objective/intersubjective as macroscopic phenomena in general are.
☆ Microscopic interface
Within quantum measurement the crucial interface is rather that between the `microscopic object' and `that part of the measuring instrument that is sensitive to the microscopic
information' (for instance, in the Stern-Gerlach measurement this interface is situated where the interaction between atom and magnetic field is). Since this interaction is a microscopic
process it is liable to be described by quantum mechanics (pre-measurement).
Since the direct influence of the observer does not seem to reach that far into the interior of the measurement process, the notion of `subjectivity' does not apply here. However, the
quantum mechanical character of the interaction between object and measuring instrument entails an `ontological contextuality' that marks a fundamental difference with `classical
measurement'. For a given measurement arrangement (encompassing possible classical information transfer protocols) this contextuality does not imply any subjectivity of the measurement results.
The idea of `ontological contextuality' is at the basis of the contextualistic-realist interpretation of quantum mechanics.
☆ Is an objectivistic ontology possible?
Einstein combined his subjectivistic epistemology with a non-contextual (objectivistic) ontology. In particular, due to his objectivistic-realist interpretation of quantum mechanical
observables he assumed the individual quantum mechanical measurement result a[m] to be an `objective property possessed by the microscopic object prior to measurement' (compare the `
possessed values principle'; see also Einstein's view on `determinism', and determinism versus causality). As a consequence of the Kochen-Specker theorem^0 it is by now well known that Einstein's approach is impossible in general.
However, this does not imply that an objectivistic/non-contextual ontology would be impossible; it just means that an `objectivistic-realist interpretation of quantum mechanical
observables' is not possible: a quantum mechanical measurement result may refer to microscopic reality, but probably not in the sense of describing that reality in a rigorous manner (
compare). A denial of such a reference would amount to confounding ontological and epistemological issues: a `contextualistic-realist interpretation of quantum mechanics' may be
consistent with a `non-contextual (objectivistic) ontology', be it that our knowledge of that reality (represented by our physical theories) may be co-determined by the measurement
arrangement and, hence, may be contextual only (compare, also).
□ Objectivity, (inter)subjectivity, and contextuality in the `empiricist interpretation of quantum mechanics'
There are two reasons why the issue of `objectivity, (inter)subjectivity, and contextuality' is more perspicuous in the `empiricist interpretation of quantum mechanics' than it is in `realist interpretations':
i) in the empiricist interpretation there is a clear distinction between `measurement results (being properties of the measuring instrument)' and `properties of the microscopic object';
`measurement phenomena' are identified with `pointer positions of the measuring instrument';
ii) the `empiricist interpretation' is an ensemble interpretation, thus allowing one to evade confusion like the one alluded to in footnote 83.
□ Remarks on `Objectivity, (inter)subjectivity, and contextuality in the empiricist interpretation'
☆ Dispensability of the human observer
In contrast to the realist interpretations (where the measuring and preparing instruments are not explicitly dealt with), in the empiricist interpretation it is evident that the human
observer is screened off from the microscopic object at the `macroscopic interfaces of the measuring and preparing instruments'. As a result it is even more evident than in the realist
interpretations that the human observer/experimenter does not deal directly with the microscopic object; he is just dealing with the knobs of his preparing apparata and with the pointer
positions of his measuring instruments (or even with the data collected from the measurements by a computerized data retrieval system), thus relegating to the macroscopic domain any
direct influence exerted by him on the measurement (Bohr).
☆ Ontological and epistemological meanings of quantum mechanical measurement results
In the empiricist interpretation measurement result a[m] has both an ontological meaning (in the sense of referring to an `objective/non-contextual property of the measuring instrument', viz. its pointer position) and an epistemological one (in the sense of representing `knowledge on the microscopic object').
Of course, individual observations may be subjective (for instance, due to misreading on a pointer scale an `8' for a `3', or because different individual observers have different information, as Alice and Bob may have when performing an EPR-Bell experiment). But the requirement of intersubjectivity can be met in principle by employing sufficiently many
trustworthy assistant observers: quantum mechanics does not need to be interpreted as `describing the observations of an individual observer' any more than is classical mechanics. In the
empiricist interpretation `von Neumann projection in an EPR experiment' is interpreted as an `application of conditional preparation' (compare), the conditional preparation being realized
by selecting a subensemble of particles 2 conditional on Bob's knowledge about the measurement results Alice got from the correlated particles 1.
☆ Information transfer from microscopic object to measuring instrument as a physical process
The empiricist interpretation enables us to renounce the question of the `subjectivity of knowledge' in favour of a `physical investigation of the manner in which, starting from the initial state of the microscopic object, the final (post-measurement) state of the pointer is established by the measurement' (compare). `Psycho-physical consideration of the information transfer from the microscopic object to the human observer' can be replaced by a `physical investigation of the transfer of information from the microscopic object to the measuring instrument'. As a result, by using the empiricist interpretation we seem to be better equipped to treat quantum mechanics as the ordinary physical theory conjectured here, even though we do not presently have at our disposal a theory rigorously describing the whole measurement process ranging from the microscopic to the macroscopic domain.
☆ In view of my prejudice with respect to quantum mechanics as an ordinary physical theory, describing measurement results not relying on influences exerted by human observation or by the
human mind, `subjectivistic-ontological views' will not be discussed here any further. This holds for von Neumann projection as far as it is thought to be caused by an `individual
observer merely looking at the object', as well as for relational views based on a `relation between a microscopic object and an individual human subject'. It should be remembered,
however, that von Neumann projection may play an objective role as a preparation procedure.
`Irreducibility versus reducibility' of quantum probability
• The subject discussed under this heading refers to the question of whether the `quantum mechanical probability p[m] of the Born rule' is a `fundamental property of an individual microscopic
object', or whether it is just a `relative frequency in an ensemble of measurement results', each measurement result being reducible to a `property the microscopic object possessed initially
(i.e. immediately before the measurement)'.
The `irreducibility versus reducibility' dichotomy is an ontological issue since it is referring to the `physical existence or nonexistence of such an initial property, independently of the
theory by which that property is described' (in particular, it is not required that the property be described by quantum mechanics; it might be describable by some subquantum theory). A possible
`characterization on the basis of a subquantum theory' of the `difference between reducible and irreducible indeterminism' is given here.
• The Copenhagen preference of `irreducibility' is codified in the quantum postulate. Historically, important sources of the `assumption of irreducibility' are:
i) (experimental source): observation of `exponential decay processes', implying that the decay rate is dependent only on the `number of decaying particles' present at a particular moment, and, hence, is independent of the history of the individual particle (`a decaying particle has no age'; for an exponential lifetime distribution the conditional survival probability indeed satisfies P(T > s+t | T > s) = P(T > t));
ii) (theoretical source): the superposition principle. It is felt that the state described by the state vector |ψ> = ∑[m]c[m]|a[m]> is different from the state described by the density operator ρ = ∑[m]|c[m]|^2 |a[m]><a[m]| (even though the states cannot be distinguished by a measurement of observable A), the latter state allegedly describing an `ensemble of particles each of which has a well-defined value a[m] of A (von Neumann ensemble)'; a numerical illustration is sketched after this list. Within the `Copenhagen interpretation' the distinction between the two states has been interpreted as signifying a distinction between `irreducibility' (the superposition) and `reducibility' (the ensemble);
iii) (philosophical source): the logical positivist/empiricist abhorrence of metaphysics, `reducibility of quantum probabilities to the existence of hidden variables' being considered metaphysical.
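The distinction invoked in ii) can be checked numerically. The following is a minimal sketch of my own (an equal-weight qubit superposition versus the corresponding von Neumann ensemble; the choice of state and observables is arbitrary):

```python
import numpy as np

# Superposition |psi> = sum_m c_m |a_m> versus the von Neumann ensemble
# rho = sum_m |c_m|^2 |a_m><a_m| for a qubit with c_m = 1/sqrt(2)
c = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(c, c.conj())      # |psi><psi|
rho_mixed = np.diag(np.abs(c) ** 2)   # diagonal mixture

# A measurement of A (eigenbasis = the |a_m> basis) cannot distinguish them:
print(np.diag(rho_pure).real)   # [0.5 0.5]
print(np.diag(rho_mixed).real)  # [0.5 0.5]

# A measurement in a rotated basis (e.g. sigma_x) does distinguish them:
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # the |+> eigenvector of sigma_x
print((plus.conj() @ rho_pure @ plus).real)   # 1.0
print((plus.conj() @ rho_mixed @ plus).real)  # 0.5
```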
• Remarks on `irreducibility versus reducibility'
□ `Irreducibility' is at the basis of the Copenhagen thesis of the `completeness of quantum mechanics'. It is associated with `indeterminism'. Einstein was a strong proponent of `reducibility'
(compare the discussion of the EPR experiment (1935), which is actually meant to be a `disproof of irreducibility'). Notwithstanding that the experimental falsification of the Bohr-Kramers-Slater (BKS) theory^0 (featuring `irreducibility of quantum probability') demonstrated that a certain determinism is present in certain quantum mechanical measurement processes (viz. conservation of energy and momentum in the Compton-Simon experiment), the `Copenhagen interpretation' has for a long time been the dominant one.
□ The issue of `reducibility versus irreducibility' is closely related to the question of whether quantum mechanical measurement processes are either deterministic or indeterministic in an
ontological sense, viz. whether or not a measurement result is uniquely determined by a `property the microscopic object possessed immediately preceding the measurement'.
Note that this question is not answered (in the negative) by the Kochen-Specker theorem^0, which is a `theorem of (standard) quantum mechanics', and therefore might have only a restricted
applicability. Moreover, the answer may depend on which interpretation of the mathematical formalism is adopted (compare).
□ Unfortunately, the `irreducibility versus reducibility' dichotomy -being a purely ontological issue- is often equated with the ontic versus epistemic^7 dichotomy, `irreducibility' being
assumed to have an ontological meaning, but `reducibility' being associated with (epistemological) `lack of knowledge' (implemented by Einstein's allegation of incompleteness of quantum mechanics).
However, if probabilities p[m] are taken as relative frequencies in an ensemble (as is done by Einstein), then they can be seen as `properties of the ensemble', and, hence, they are
ontological (be it not in the sense of `properties of an individual object'). In order not to confound ontological and epistemological issues it is advisable to avoid the `ontic versus
epistemic' dichotomy (compare).
• The issue of `irreducibility versus reducibility' can also be met in the literature under the headings:
□ i) `Probabilistic versus statistical' interpretation of the Born rule
The terminology `probabilistic versus statistical' refers to `irreducible quantum probabilities' as being probabilistic, whereas `reducible probabilities' are referred to as being statistical.
In evaluating the `probabilistic versus statistical' dichotomy it is important to answer the question ``Probabilities of what?'' Unfortunately, the Born rule does not answer this question
because it does not specify what is the physical meaning of the term `quantum mechanical measurement result'. There are three possibilities:
a) like in `classical mechanics' quantum mechanical measurement result a[m] may be equated with a `property the object possessed prior to and independent of the measurement' (and, ideally,
`posterior to that measurement' too); this classical view has been endorsed by Einstein;
b) stressing the `empirical importance of measurement' in the `Copenhagen interpretation' quantum mechanical measurement result a[m] is equated, in agreement with von Neumann's strong
projection postulate, with a `post-measurement property of the microscopic object';
c) in the empiricist interpretation quantum mechanical measurement result a[m] is equated with a `final pointer position of a measuring instrument' (note that a) and b) correspond to
different versions of the realist interpretation).
Since neither in b) nor in c) does the quantum mechanical measurement result a[m] refer to the initial state of the object, the problem of a `probabilistic interpretation of the Born rule' does not arise in these interpretations: the mathematical formalism of quantum mechanics is simply thought not to say anything about the issue of (ir)reducibility.
☆ Remarks on the `probabilistic versus statistical' dichotomy:
○ The terminology `probabilistic versus statistical' purports to reflect the difference between `quantum probability' and `classical statistics', the latter satisfying the Kolmogorovian
axioms^0, whereas quantum probability does not satisfy these (as follows, for instance, from violation of the Bell inequality by certain quantum probabilities).
○ The crucial idea behind the `statistical interpretation of quantum mechanics' is the notion of `objective property', to the effect that the microscopic object is thought to `have as a
property the measurement result a[m] already before the measurement' (compare). In this interpretation the probabilities p[m] are thought just to reflect our `lack of knowledge' about
the well-defined value a[m] an individual object is thought to have in an `ensemble prepared by a quantum mechanical preparation procedure'.
Note that this reference to `knowledge' is the origin of the custom to denote the `statistical interpretation' as an epistemic interpretation.
○ In the `probabilistic interpretation' a reduction to an `initial value of the observable, possessed by the object', is assumed to be impossible because the microscopic object is
thought prior to measurement not to have such a property (e.g. Jordan). In the probabilistic interpretation of the Born rule a probability p[m] is thought to be an `intrinsic property
of an individual microscopic object'. Quantum statistics is thought to reflect an `irreducible indeterminism of quantum measurement processes', not reducible to any `lack of
knowledge' about specific circumstances that could cause a `well-defined individual measurement result' to be obtained in a deterministic way.
○ If the terminology is used consistently, then the `probabilistic versus statistical' dichotomy is equivalent to the `irreducibility versus reducibility' one. However, such consistency
cannot always be observed. Thus, in the empiricist interpretation probabilities p[m] refer to `relative frequencies in an ensemble of pointer positions of a measuring instrument';
here the `statistical interpretation' is not even applicable because `pointer positions' are different from `properties of the microscopic object'. Therefore the `irreducibility
versus reducibility' terminology seems preferable. Nevertheless for historical reasons I occasionally refer to the `probabilistic versus statistical' dichotomy, taking into account
that it may be possible to generalize the `statistical interpretation' so as to refer to `subquantum properties' as `properties possessed by the microscopic object prior to
measurement' rather than to `quantum mechanical measurement results' (compare).
□ ii) `Objective versus subjective probability' in quantum mechanics
☆ The `objectivity versus subjectivity' dichotomy is often used as referring to the `irreducibility versus reducibility' dichotomy, `objective probability' being equated with `irreducible
probability', and `subjective probability' with `reducible probability'.
○ Remarks on `objective versus subjective probability':
■ In my opinion the `objectivity versus subjectivity' dichotomy should better not be applied to quantum mechanical probabilities, because, as with the ontic versus epistemic
dichotomy, there is a danger of confounding ontological and epistemological issues. Thus, whereas an "ontic" interpretation allegedly is `objective', an "epistemic" interpretation
is thought to be `subjective' in the sense that `reducible probability' is referring to the `subjective knowledge an observer may have about some property of a microscopic object,
even though, as a member of an ensemble, the object may possess that property with certainty'. However, in that case that probability is an `objective property of that ensemble',
the `ensemble' being a physical object as real as an `individual object' (compare). Whereas the `irreducibility versus reducibility' dichotomy has an ontological meaning only, the `objectivity versus subjectivity' dichotomy is seen to have both an ontological and an epistemological one. This is particularly evident in the empiricist interpretation in which
a `quantum mechanical measurement result' is thought both to refer `ontologically to a pointer position of a measuring instrument' as well as `epistemologically to a property of a
microscopic object'.
■ Referring to `irreducible probability' as `objective probability' has as an additional disadvantage the confusion caused by the fact that according to the Copenhagen
interpretation `irreducibility of probability' is a consequence of disturbance by measurement, and, therefore, cannot be `objective' in the sense defined here.
`Determinism versus indeterminism' in quantum mechanics
• Ambiguous use of the notion of `(in)determinism'
There are at least three ways in which the notion of `(in)determinism' may be used within quantum mechanics:
i) the notions of `(in)determinism' and `(a)causality' may be used in an indiscriminate way;
ii) the notion of `(in)determinism' may refer either to the free evolution or to the measurement process, or to both of these;
iii) the notion of `(in)determinism' may be used either in an epistemological sense (as a property of a physical theory) or in an ontological one (as a property of physical reality).
□ i) (In)determinism versus (a)causality
☆ `Determinism' is understood to signify that `the state of a system is uniquely determined by an initial one' (as implemented by the uniqueness of the solutions of the system's evolution equation).
By `causality' I will understand `liability to causal explanation'.
In order to avoid confusion I will stick to these characterizations, even though in the quantum mechanical literature `determinism' and `causality' are often identified.
☆ The confusing identification of `determinism' and `causality' is a consequence of the idea that measurement result a[m], obtained in a measurement of observable A, could be explained by
the assumption that `the observable had that value immediately before the measurement'. Although such `explanation by determinism' (if suitably generalized) may be a possibility it is a
`causal explanation' all the same. Hence, the issue is `causality' in the first place.
☆ Remarks on `(in)determinism versus (a)causality':
○ The possibility or impossibility of `explanation by determinism' marks the distinction between the `statistical' and `probabilistic' interpretations of the Born rule, the former
-contrary to the latter- assuming such an explanation to be possible.
○ The issue of `explanation by determinism of the quantum mechanical measurement result a[m]' is at the basis of the possessed values principle as well as the notion of faithful
measurement. It hinges on the existence of an element of physical reality, introduced by Einstein in the famous EPR paper to act as the (deterministic) cause of measurement result a[m], thus implementing Einstein's view that `the good Lord does not throw dice'. Einstein's idea of an `element of physical reality as a property of the microscopic object, possessed
independent of measurement', was rejected by the Copenhagen interpretation on the basis of empiricist reservations (Hume^0, Mach^0) with respect to the "metaphysical" notion of
`causality' as well as by the `Copenhagen quantum postulate'.
○ Inapplicability of the possessed values principle (as demonstrated by the Kochen-Specker theorem^0) makes obsolete Einstein's proposal to consider the measurement result a[m] itself
as an `element of physical reality'. This circumstance, based on the `mathematical formalism of quantum mechanics', might seem to endorse `Copenhagen indeterminism' as against
`Einstein's determinism'.
Nevertheless, from a methodological point of view abandoning `causality' or `explanation by determinism' is not very attractive. Few experimental physicists, if pressed to critically
compare `the way they are used to discussing their experiments' with `what they may have learned during their quantum mechanics courses', will be ready to take seriously Jordan's assertion to
the effect that `a Geiger counter may have been triggered by a particle that immediately before the measurement need not have been at the counter's position'.
○ `Explanation by determinism' seems to be particularly useful for explaining in EPR-Bell experiments the possible occurrence of strict correlations between values of certain standard spin observables measured in causally disjoint regions of space-time (e.g. in the singlet state of two spin-1/2 particles a joint measurement of equal spin components exclusively yields opposite values, which could not be so easily explained if that correlation were not present beforehand as an objective property; compare the numerical sketch following these remarks).
○ If, as a consequence of the Kochen-Specker theorem, quantum mechanics is not able to implement `explanation by determinism', it seems natural for this purpose to rely on a different (
subquantum) theory encompassing a more general subquantum element of physical reality as a `deterministic cause of quantum measurement result a[m]'. In agreement with the empiricist
interpretation of the quantum mechanical formalism (in which a[m] does not refer to the microscopic object but to the measuring instrument) the possibility of such subquantum
properties of microscopic objects may be acknowledged to be as natural as is the assumption that a billiard ball consists of atoms (rather than being a rigid body), properties of the
atoms being described by a `theory different from the classical theory of rigid bodies'.
○ The Copenhagen idea that quantum mechanics is incompatible with `causality' (in the sense of `explanation by determinism') has been rather generally accepted following the discussions
on the EPR paper. However, rejection of Einstein's `reliance on causality' was not based on the mathematical formalism (the Kochen-Specker theorem only came 30 years later), but it
was a consequence of a logical positivist/empiricist abhorrence of metaphysics, shunning `hidden variables' even if they were disguised as `quantum mechanical measurement results' (as
was the case with the EPR proposal).
○ It seems to me that, because the discussions were generally confined to the realist interpretation of quantum mechanics, the possibility of `causes having a subquantum nature' (so
plausible in an empiricist interpretation) was brushed aside too easily, and the Copenhagen idea of `indeterminism' or `acausality' accepted too readily (compare).
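The strict anticorrelation referred to in the remarks above can be checked directly from the formalism. The following is a minimal numerical sketch of my own (measurement directions restricted to the x-z plane for simplicity; the chosen angles are arbitrary):

```python
import numpy as np

# The singlet state of two spin-1/2 particles: (|01> - |10>)/sqrt(2)
singlet = (np.kron([1.0, 0.0], [0.0, 1.0])
           - np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)

def projectors(theta):
    """Projectors onto spin-up/down along a direction in the x-z plane."""
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    down = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return [np.outer(v, v) for v in (up, down)]

# Joint probabilities p(i,j) = <singlet| P_i (x) P_j |singlet> for a joint
# measurement of the SAME spin component on both particles
for theta in (0.0, 0.7, 2.1):
    P = projectors(theta)
    probs = [[singlet @ np.kron(P[i], P[j]) @ singlet for j in (0, 1)]
             for i in (0, 1)]
    print(np.round(probs, 3))  # diagonal entries 0: strictly opposite values
```

Whatever common direction is chosen, the diagonal joint probabilities vanish: the two particles never yield equal values of the same spin component.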
□ ii) (In)determinism during free evolution and/or measurement
☆ (In)determinism may refer either to the situation in which the system is isolated during free evolution^16, or to its behaviour when it is interacting with a measuring instrument.
☆ In quantum mechanics free evolution of the state vector is described by the solutions of the Schrödinger equation. By this an even more deterministic behaviour is suggested than is usual in classical mechanics (thus, for a `three-body system' solutions of the quantum mechanical evolution equation may be less chaotic than the solutions of the corresponding classical equations of motion).
☆ Nevertheless, a deterministic behaviour of the quantum mechanical state vector during free evolution is sometimes combined with an allegation of indeterministic behaviour of quantum
mechanical observables, to the effect that the values of these observables are thought to perform `quantum jumps'. For instance, a free quantum particle may be thought to move in a
chaotic manner, jumping to and fro within the region where the wave function is nonvanishing, thus within the quantum domain allegedly causing well-defined trajectories (like those of
classical mechanics) to be nonexistent during free evolution.
☆ `Measurement indeterminism' in the first place refers to the behaviour of the state vector during measurement. It has been stipulated by von Neumann -as well as by Dirac- to be described by von Neumann's projection postulate. Hence, unless the initial state is an eigenvector of the measured observable, the state vector allegedly behaves indeterministically during measurement.
☆ `Deterministic behaviour of quantum mechanical observables during measurement' is at issue in explanation by determinism of quantum mechanical measurement result a[m]. It played a key
role in Einstein's challenge of the Copenhagen idea of `completeness of quantum mechanics', culminating in the EPR paper.
☆ Remarks on `(in)determinism during free evolution and/or measurement':
○ `(In)determinism during free evolution' was not at issue in the discussions on the `completeness of quantum mechanics as endorsed by the Copenhagen interpretation'. Indeed, Heisenberg
acknowledges the possibility of the existence of deterministic trajectories during the free evolution of an electron, be it that these are considered by him to be metaphysical as a
consequence of their unobservability (such unobservability being attributed to the impossibility of a `simultaneous determination of position and momentum' as a consequence of `
irreducible indeterminism of such a measurement').
○ Indeterminism of an observable's value a[m] during free evolution is permitted by the standard mathematical formalism of quantum mechanics, since in general only probabilities p[m](t)
= |<a[m]|ψ(t)>|^2 are completely determined by applying the Schrödinger equation to the initial state
|ψ(0)> = ∑[m] c[m]|a[m]> (in which c[m] = <a[m]|ψ(0)>).
This even holds true if the observable is a constant of the motion.
○ On the other hand, with respect to a constant of the motion a case could be made for constancy of measurement result a[m] in the deterministic sense that its value would be independent of the time of the measurement. Although this cannot be verified experimentally, it would be hard to explain the time-independence of the (experimentally testable) relative frequencies p[m] = |c[m]|^2 for arbitrary quantum mechanical states if an explanation were not possible on the basis of a deterministic behaviour of a[m] (or, as the case may be, of the subquantum element of physical reality determining a[m]); the time-independence itself is illustrated in the sketch following these remarks. If such an explanation were not available it would be incomprehensible why the `mechanism causing the alleged quantum jumps of the constants of the motion' would in a conspiratory way have to prevent its activity from being observable by keeping p[m](t) constant.
○ Since it is equally metaphysical on the basis of its unobservability to exclude `determinism during free evolution' as to assume it, it does seem wise to take an agnostic position
with respect to this issue. Since quantum mechanics does not seem to provide a definite answer, a solution may have to be sought outside the domain of application of quantum
mechanics. The subject has been discussed by Bohm, whose theory is widely held to yield a description in terms of deterministic trajectories underpinning the statistical quantum
mechanical description (see, however, my criticism of this idea). Probably, in order to find indications in which direction an answer should be sought, we have to await experiments
probing the domain of application of quantum mechanics in order to see where quantum mechanics fails to describe our experiments.
○ An answer to the question of whether during measurement the value of an observable can behave deterministically notwithstanding an indeterministic behaviour of the state vector (as
described by von Neumann projection or its generalization) hinges on the interpretations of both `state vector' and `observable'.
Thus, in the `Copenhagen interpretation' the indeterministic behaviour of the state vector is translated into a similar behaviour of the measured observable (by assuming the
observable to be transformed by von Neumann projection from indeterminate into `having a well-defined value'). Alternatively, Einstein tries to save determinism of the value of an
observable during measurement by `relaxing the link between state vector and observable', interpreting the former as a `description of an ensemble' rather than `referring to an
individual particle', and assuming each individual member of the ensemble to have a well-defined value of the observable.
The dependence of the answer on interpretation (viz. the individual particle versus ensemble dichotomy) is still further complicated by taking into account the distinction between
realist and empiricist interpretations.
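The time-independence of p[m] for a constant of the motion, invoked in the remarks above, follows immediately from the Schrödinger evolution. A minimal numerical sketch of my own (Hamiltonian and observable taken diagonal in a common eigenbasis; the specific spectrum and initial state are arbitrary choices):

```python
import numpy as np

# If A commutes with H (a constant of the motion), the probabilities
# p[m](t) = |<a_m|psi(t)>|^2 do not depend on t, for ANY initial state.
E = np.array([0.0, 1.0, 2.5])       # eigenvalues of H (units with hbar = 1)
psi0 = np.array([0.6, 0.48, 0.64])  # normalized initial superposition

for t in (0.0, 1.3, 7.9):
    # Schroedinger evolution in the common eigenbasis of H and A:
    # each amplitude only picks up a phase exp(-i E_m t)
    psi_t = np.exp(-1j * E * t) * psi0
    print(t, np.round(np.abs(psi_t) ** 2, 4))  # identical rows: p[m] constant
```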
□ iii) Ontological and epistemological senses of `(in)determinism during measurement'
In view of my agnostic position with respect to `(in)determinism during free evolution' I shall restrict myself here to `(in)determinism during measurement'.
☆ In the ontological sense `determinism' is meant to be a (possible) `property of physical reality', possessed independently of any theory (compare). In the epistemological sense
`determinism' is a (possible) `property of a physical theory'.
`Ontological indeterminism' and `epistemological indeterminism' should be distinguished. We have:
`ontological indeterminism' implies `epistemological indeterminism'
(since `ontological indeterminism', if existing, should be implemented in our theories);
`epistemological indeterminism' does not imply `ontological indeterminism'
(since `indeterminism of our theories' does not necessarily imply `ontological indeterminism').
☆ Von Neumann projection epitomizes the `epistemological indeterminism' of quantum mechanical measurement theory. The big question is whether this implies the ontological indeterminism
characterizing the Copenhagen interpretation. This issue is closely related to the question of `explanation by determinism', answered in different ways by Einstein and by the `Copenhagen
interpretation'. Whereas Einstein tried to reconcile `epistemological indeterminism' (associated with the `statistical character of quantum mechanics') with `ontological determinism'
(corresponding to reducible probability), the Copenhagen interpretation (minus Bohr, who in these matters used to avoid ontological assertions as much as possible) interpreted that character (now referred to as probabilistic) as reflecting `ontological indeterminism' (corresponding to irreducible probability).
☆ The question of `ontological indeterminism' is connected with the superposition principle, to the effect that superpositions |ψ> = ∑[m]c[m]|a[m]> were at the basis of Jordan's assertion
that an observable does not have a well-defined value prior to measurement, thus assuming a measurement to be an `indeterministic process in which the observable obtains its value in an
ontological sense'.
On the other hand, Einstein assumed that superpositions just pointed to `lack of knowledge about the value observable A already had prior to measurement', thus leaving open the
possibility that measurement in a deterministic way reveals that value. In this view von Neumann projection could be considered as having an epistemological meaning, describing the
`increase of knowledge realized by the measurement' (compare the ensemble interpretation), rather than the `ontological meaning endorsed by the Copenhagen interpretation'.
☆ Remarks on `ontological and epistemological senses of (in)determinism during measurement':
○ `Ontological indeterminism' combined with `epistemological indeterminism' is at the basis of the probabilistic interpretation of the Born rule. A combination of `ontological
determinism' and `epistemological indeterminism' is at the basis of the statistical interpretation of the Born rule. This distinction can also be expressed in terms of the
individual-particle versus ensemble dichotomy.
○ Neglect, within most of the quantum mechanical literature, of the distinction between ontology and epistemology plays a confusing role in assessing the meaning of quantum
mechanics. For instance, if the Heisenberg-Kennard-Robertson inequality is considered as an expression of an uncertainty principle this fits into the idea of `epistemological
indeterminism'; on the other hand, as an indeterminacy principle the same inequality seems to be an expression of `ontological indeterminism' (compare).
○ We should be aware of Bohr's cautiousness, inducing him to avoid as much as possible ontological assertions by restricting himself to `what we can tell about a microscopic object'
rather than referring to `properties possessed by the object'. For Bohr `measurement indeterminism' was epistemological in the first place. He repeatedly warned^49 against Jordan's
assertion that by the measurement `measurement results' would be created in an ontological sense as `properties of the microscopic object' (also).
○ It is less clear to what extent Heisenberg sided either with Jordan or with Bohr. Heisenberg's empiricism induced him to take nearly as cautious a position with respect to ontology
("metaphysics") as Bohr was taking. However, his acknowledgement of the possibility of trajectories as well as his disturbance theory of measurement suggest that Heisenberg, as a
physicist, may have been thinking more ontologically with respect to microscopic reality than did the "philosopher" Bohr.
○ In the `realist interpretation of the quantum mechanical formalism' the distinction between ontological and epistemological (in)determinism is easily overlooked since here the
theoretical quantity a[m] is interpreted as a `property of the microscopic object'. Then `epistemological indeterminism' may readily be held to `necessarily imply ontological
indeterminism' (as was done by the majority of the supporters of the Copenhagen interpretation). As a result, the `empirical quantity interpreted as quantifying indeterminacy' (viz.
the standard deviation used in the Heisenberg-Kennard-Robertson inequality) was thought not to refer to the pre-measurement state of the microscopic object (since measurement results
a[m] allegedly could not be attributed to that state); instead they were taken to refer to the post-measurement state of the microscopic object, being interpreted as values of
observable A created by the measurement (as implemented by von Neumann projection). Such ideas have been developed by Heisenberg and by Dirac, and have become the standard view of the
majority of physicists during the second half of the 20^th century. Warnings against such a view by a "philosopher" like Bohr were not taken too seriously.
○ Bohr was one of the few to realize that the distinction between ontological and epistemological (in)determinism is relevant. Although Bohr blamed `quantum indeterminacy' on the
interaction between the microscopic object and a macroscopic measuring instrument (cf. the quantum postulate), he did not venture to draw ontological conclusions with respect to
microscopic reality (contrary to Jordan), but remained on the epistemological level. According to him `standard deviations' expressed the `latitude with which a quantum mechanical observable is defined within the experimental arrangement set up to measure that observable'. According to Bohr it is impossible to draw a sharp distinction between the microscopic object and
the measurement arrangement, thus preventing ontological statements with respect to the microscopic object's behaviour during a measurement. Bohr's `latitude', being a matter of
`unsharp definition', represents an `epistemological indeterminateness'.
○ It should be noted that Bohr and Einstein agreed on the epistemological meaning of `standard deviations' (be it in completely different ways). What they disagreed on was whether an (ontological) `explanation by determinism' is possible. With respect to this issue there is a certain asymmetry between the adversaries. Whereas Einstein accepted the possibility of such explanations by assuming the existence of elements of physical reality, Bohr (in agreement with his cautious position with respect to ontology) did not claim the `nonexistence of such elements of physical reality' but just pointed at the `ambiguity of Einstein's definition of these quantities' (an ambiguity Bohr attributed to their dependence, ignored by Einstein, on the measurement arrangement that is actually present).
Bohr's reference to the `measurement arrangement' can be seen as the advent of contextualism within quantum mechanics, be it that it was meant to be taken in a strictly
epistemological sense (in particular, a latitude, even if represented by a standard deviation, is not supposed to correspond to a sharp, although incompletely known, value of an
observable^68, but should be considered to be a `restriction on the definability of that observable'). The ontological implementation of `unsharpness' should be attributed to
physicists like Jordan and Heisenberg. It was adopted by the majority of physicists. It was seldom realized that the Copenhagen assumption of `ontological indeterminism' is as
metaphysical as Einstein's assumption of `ontological determinism'. In particular, the circumstance that `measurement is deterministic in case of an eigenstate of the measured
observable' might have been seen as an indication at the epistemological level of `ontological determinism'. This, however, was ignored by the Copenhagen interpretation.
☆ In the `empiricist interpretation' the distinction between ontological and epistemological issues presents itself in a natural way, reference to the pointer of a measuring instrument
evidently being `ontological with respect to that instrument' but `epistemological with respect to the microscopic object' (since it is representing `knowledge on that object'). By
restricting the attention to the microscopic object (as is done in a `realist interpretation') we lose the possibility of drawing such a clear distinction. In particular, in the
`empiricist interpretation' there is no reason to interpret the Heisenberg-Kennard-Robertson inequality as a consequence of `ontological measurement indeterminism' (even though, as a
consequence of a gross misinterpretation, the inequality has widely been interpreted as such). Note that Einstein did not think the Heisenberg-Kennard-Robertson inequality to be
inconsistent with `measurement determinism' (although he did not realize that for its implementation `subquantum elements of physical reality' should be taken into account, compare).
Seen in this light we may conclude that the Copenhagen fear of metaphysics, causing a rejection of hidden variables, has been instrumental in introducing an equally metaphysical
`ontological indeterminism' (compare).
☆ Apart from making it easier to see that the mathematical formalism of quantum mechanics does not necessarily imply `ontological indeterminism', an empiricist interpretation allows one to
implement in a natural way notions of `epistemological indeterminacy' (like Bohr's latitude) that are not related to the dynamics of the microscopic object but may be seen as `properties
of the measuring instrument or the measurement procedure applied to measure a quantum mechanical observable'. Indeed, applying the billiard ball analogy, it is not unreasonable to suppose
that quantum mechanical measurements (to be compared with macroscopic observations which, as a consequence of the `limited resolution of the observation procedure', are incapable of distinguishing between all distinct atomic configurations of the ball) may not distinguish between all distinct `subquantum elements of physical reality' of a microscopic object. Einstein's (epistemological) idea of `incompleteness of quantum mechanics' may be analogous to the idea of `incompleteness of a description of a billiard ball by the classical theory of rigid bodies'.
☆ Epistemological indeterminateness is further discussed within the context of the individual-particle versus ensemble dichotomy. It should be noted, however, that, as is seen from Bohr's
answer to EPR, the Bohr-Einstein controversy over the `(in)completeness of quantum mechanics' was not about this latter dichotomy, but about the question of whether or not Einstein's `element of physical reality' is `independent of the measurement arrangement'. Even if Bohr may have meant this in an epistemological sense, his idea `that there is a dependence on the measurement arrangement' does not seem to be physically relevant unless it also has an ontological dimension.
The controversy between Einstein and Bohr can best be understood as a controversy over the relative merits of objectivistic-realist (Einstein) and contextualistic-realist (Bohr)
interpretations of quantum mechanical observables, restriction to realist interpretations by both contestants making it more difficult to distinguish between ontological and
epistemological issues. | {"url":"http://www.phys.tue.nl/ktn/Wim/qm1.htm","timestamp":"2014-04-18T20:42:53Z","content_type":null,"content_length":"207212","record_id":"<urn:uuid:2c700c3d-87ef-4485-ac83-1dac96e491a0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rex, GA SAT Math Tutor
Find a Rex, GA SAT Math Tutor
I am a Georgia-certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT
and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
12 Subjects: including SAT math, statistics, algebra 1, algebra 2
...I look forward to working with you in the future. I am a science teacher who absolutely loves science and math! I have tutored algebra since high school to students on both the high school and
collegiate levels. I definitely tutor students at a rate that they feel comfortable with, and have found great success in tutoring these subjects.
15 Subjects: including SAT math, chemistry, geometry, biology
I recently graduated with a B.S. in Biochemistry from Houghton College where I maintained 3.98 GPA and 4.0 science GPA. I have had significant science and math course work. Throughout college and
since, I have tutored Organic Chemistry, General Chemistry, Physics, Genetics, General Biology, and Calculus, and have taught several MCAT prep courses.
11 Subjects: including SAT math, chemistry, physics, biology
...After college, I went into the workforce working as a scientist in the food industry. After working a few years, I then decided to pursue my Master's in Biology at Chatham University. My
experience in tutoring has spanned from my high school years into my college years and into my personal life.
29 Subjects: including SAT math, chemistry, reading, physics
...I have formally taught Algebra 2 in a classroom setting as well as having tutored several students in this subject area. I have formally taught Geometry in a classroom setting as well as
having tutored several students in this subject area. I have formally taught Pre-Algebra in a classroom setting as well as having tutored several students in this subject area even over the
10 Subjects: including SAT math, geometry, algebra 1, algebra 2
food truck powered by????.....
kougs8 food truck powered by????..... Wed, 07/18/12 5:01 PM
• Total Posts: 4
• Joined: 7/18/2012 permalink
• Location: BC, Canada
First off, this is a fantastic forum, a wealth of information for somebody new like me.
I just want to know if any of you have done builds on trucks that are powered by propane instead of diesel or gas.
Are there any major issues with building a kitchen on the back of a propane powered step van??? Other than the fact that you are riding around with A LOT of propane (for the appliances too!)
What are your thoughts?
chefbuba Re:food truck powered by????..... Wed, 07/18/12 9:05 PM
• Total Posts: 1951
• Joined: 6/22/2009 permalink
• Location: Near You, WA
There should be no problems. The price for propane is much more attractive than gas/diesel too. Gas here is $3.65....diesel 3.95....
I paid $1.89 gal for my propane fill today.
roadkillgrill Re:food truck powered by????..... Wed, 07/18/12 11:29 PM
• Total Posts: 174
• Joined: 8/1/2009 permalink
• Location: Helena, AR
There's a natural fear of running out of LP. I would hate the thought of running out of Propane at an event and not be able to drive for a refill at midnight. Otherwise might
be workable.
Solivares00 Re:food truck powered by????..... Thu, 07/19/12 2:28 AM
• Total Posts: 3
• Joined: 7/18/2012 permalink
• Location: Richmond, TX
Do I have to have the propane in a set place before inspection, or can I take it out and connect it later? I don't have any cages; that's how it came when I bought it.
chefbuba Re:food truck powered by????..... Thu, 07/19/12 9:54 AM
• Total Posts: 1951
• Joined: 6/22/2009 permalink
• Location: Near You, WA
Foodbme Re:food truck powered by????..... Fri, 07/20/12 12:02 AM
• Total Posts:
• Joined: 9 permalink
• Location: AZ
Propane weighs about 4.2 pounds per US gallon, at 60 degrees Fahrenheit. Propane expands 1.5% per 10 degrees.
40 gals of propane = 168#
Gasoline weighs 6.073 pounds per US Gallon.
40 Gals of Gasoline = 243#
A difference of 75#
The differential would be the weight of a gas tank verses the weight of a propane tank.
Smaller propane tanks such as a 40# tank refer to the amount of propane it will hold. A 20# tank holds 20#'s a 40# tank hold 40#'s etc.. The empty weight of the tank is located on the
collar and is marked as T.W. (tare weight). You can add 40 to that number and that is the tank's full weight. If you need to know how much propane is in the tank, take the total current weight of the tank, subtract the empty weight (T.W.) from that, and divide that number by 4.125, which is the weight per gallon of propane.
My guess would be that a filled 40 Gal Propane tank would weigh a little less than a Gasoline tank.
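(For anyone who wants to script this, here is a quick Python version of the same tank math; the 4.125 figure is the one from this post, and the example weights in the demo line are made up.)

    # Gallons of propane left, from the tank's current and stamped empty (T.W.) weights.
    LB_PER_GAL_PROPANE = 4.125  # pounds per gallon, per the figures above

    def gallons_remaining(current_weight_lb, tare_weight_lb):
        return (current_weight_lb - tare_weight_lb) / LB_PER_GAL_PROPANE

    # Example with made-up numbers: a tank stamped T.W. 32 that now weighs 52 lb.
    print(round(gallons_remaining(52, 32), 1), "gallons left")  # about 4.8 gallons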
Next factor is weight distribution. Where's the gas tank and where's the propane tank going to be located?
OOOOPS!!!! Forgot you're in Canada! Need to convert to Imperial Gallons????
<message edited by Foodbme on Fri, 07/20/12 12:06 AM>
kougs8 Re:food truck powered by????..... Fri, 07/20/12 11:39 AM
• Total Posts: 4
• Joined: 7/18/2012 permalink
• Location: BC, Canada
I think I am confusing everybody down there in the states :)
Up here in Canada we have vehicles that run off propane, exactly the same as a gas or diesel vehicle. You just pull up to the propane pump at the gas station. They converted a lot of step vans to this form of power because it's cheaper than gasoline.
The main issue with it is you lose torque. I haven't bought the step van yet, I was just curious what others thought, but now I realize that you don't have these types of step vans in the USA....my bad :)
RJ's Re:food truck powered by????..... Sun, 07/22/12 6:18 PM
• Total Posts: 7 permalink
• Joined: 4/25/2012
• Location: Vancouver Island
Hey Kougs8, I am in BC as well, building a food truck on Vancouver Island. I'm pretty sure they have propane powered vehicles in the U.S. Anyways, I understand your question. You should have no problem whatsoever having a propane powered truck. As far as the torque is concerned, if you plan on crossing the Rockies, you can expect some very slow climbs up the hills. Other than that, you should have no problem. If you have a lot of heavy equipment, you will definitely be rolling slowly. I have had many propane vehicles; the fuel savings are huge. Finding propane at gas stations can be a pain in the neck, but other than that, go for it man!!
kougs8 Re:food truck powered by????..... Mon, 07/23/12 11:28 AM
• Total Posts: 4 permalink
• Joined: 7/18/2012
• Location: BC, Canada
Hey, I am on Vancouver Island as well!!!
I am deciding what truck to buy right now, and the one I looked at had 2 huge propane tanks underneath, so I was wondering where I could put my grey water tank?
Does anyone have pictures of exactly how and where people have hooked them up under their trucks....and can you have a drainage pipe system leading to it? Or does it pretty much have
to be underneath the sinks? For example if my sinks are up near the front of the truck can i drain through pipe work to the grey water tank located between the rear axle of the truck?
Inductive Logic
1. Although enumerative inductive arguments may seem similar to what classical statisticians call estimation, they are not really the same thing. As classical statisticians are quick to point out,
estimation does not use the sample to inductively support a conclusion about the whole population. Estimation is not supposed to be a kind of inductive inference. Rather, estimation is a decision
strategy. The sample frequency will be within two standard deviations of the population frequency in about 95% of all samples. So, if one adopts the strategy of accepting as true the claim that the
population frequency is within two standard deviations of the sample frequency, and if one uses this strategy repeatedly for various samples, one should be right about 95% of the time. I will discuss
enumerative induction in much more detail later in the article.
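To make the strategy concrete, here is a small Python simulation (a rough sketch; the population frequency, sample size, and number of trials are arbitrary illustrative choices). It repeatedly draws samples and checks how often the population frequency falls within two standard deviations of the sample frequency; the success rate comes out near 95%, as the strategy predicts.

    import random

    def coverage(pop_freq=0.3, n=400, trials=10_000):
        # Fraction of samples for which the population frequency lies within
        # two standard deviations of the sample frequency.
        hits = 0
        for _ in range(trials):
            f = sum(random.random() < pop_freq for _ in range(n)) / n
            sd = (f * (1 - f) / n) ** 0.5  # standard deviation estimated from the sample
            hits += abs(f - pop_freq) <= 2 * sd
        return hits / trials

    print(coverage())  # typically about 0.95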
2. Another way of understanding axiom (5) is to view it as a generalization of the deduction theorem and its converse. The deduction theorem and converse says this: C ⊨ (B⊃A) if and only if (C·B) ⊨ A
. Given axioms (1-4), axiom (5) is equivalent to the following:
5*. (1 − P[α][(B⊃A) | C]) = (1 − P[α][A | (B·C)]) × P[α][B | C].
The conditional probability P[α][A | (B·C)] completely discounts the possibility that B is false, whereas the probability of the conditional P[α][(B⊃A) | C] depends significantly on how probable B is
(given C), and must approach 1 if P[α][B | C] is near 0. Rule (5*) captures how this difference between the conditional probability and the probability of a conditional works. It says that the
distance below 1 of the support-strength of C for (B⊃A) equals the product of the distance below 1 of the support strength of (B·C) for A and the support strength of C for B. This makes good sense:
the support of C for (B⊃A) (i.e., for (~B∨A)) is closer to 1 than the support of (B·C) for A by the multiplicative factor P[α][B | C], which reflects the degree to which C supports ~B. According to
Rule (5*), then, for any fixed value of P[α][A | (B·C)] < 1, as P[α][B | C] approaches 0, P[α][(B⊃A) | C] must approach 1.
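The identity in rule (5*) can be checked numerically on any discrete probability model. The following Python sketch uses an arbitrary joint distribution over the truth values of A and B, and takes C to be a tautology purely to keep the illustration short; both sides of (5*) come out equal.

    # Check of rule (5*): 1 − P[(B⊃A) | C] = (1 − P[A | B·C]) × P[B | C],
    # with C a tautology, so conditioning on C is trivial here.
    worlds = {  # (A, B) -> probability; any distribution summing to 1 will do
        (True, True): 0.2, (True, False): 0.1,
        (False, True): 0.4, (False, False): 0.3,
    }

    def p(event):
        return sum(pr for w, pr in worlds.items() if event(w))

    p_b = p(lambda w: w[1])
    p_a_given_b = p(lambda w: w[0] and w[1]) / p_b
    p_b_implies_a = p(lambda w: (not w[1]) or w[0])  # B⊃A is ~B∨A

    print(1 - p_b_implies_a)        # 0.4
    print((1 - p_a_given_b) * p_b)  # 0.4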
3. This is not what is commonly referred to as countable additivity. Countable additivity requires a language in which infinitely long disjunctions are defined. It would then specify that P[α][((B[1]
∨B[2])∨…) | C] = ∑[i] P[α][B[i] | C]. The present result may be derived (without appealing to countable additivity) as follows. For each distinct i and j, let C ⊨ ~(B[i]·B[j]); and suppose that P[α][
D | C] < 1 for at least one sentence D. First notice that we have, for each i from 1 through n−1, C ⊨ (~(B[1]·B[i+1])·…· ~(B[i]·B[i+1])); so C ⊨ ~(((B[1]∨B[2])∨ …∨B[i])·B[i+1]). Then, for
any finite list of the first n of the B[i] (for each value of n),
P[α][(((B[1]∨B[2])∨…∨B[n−1])∨B[n]) | C]
= P[α][((B[1]∨B[2])∨…∨B[n−1]) | C] + P[α][B[n] | C]
= …
= ∑[i=1..n] P[α][B[i] | C].
By definition,
∑[i=1..∞] P[α][B[i] | C] = lim[n] ∑[i=1..n] P[α][B[i] | C].
So, lim[n] P[α][((B[1]∨B[2])∨…∨B[n]) | C] = ∑[i=1..∞] P[α][B[i] | C].
4. Here are the usual axioms when unconditional probability is taken as basic:
P[α] is a function from statements to real numbers between 0 and 1 that satisfies the following rules:
1. if ⊨A (i.e. if A is a logical truth), then P[α][A] = 1;
2. if ⊨~(A·B) (i.e. if A and B are logically incompatible), then P[α][(A∨B)] = P[α][A] + P[α][B];
Definition: if P[α][B] > 0, then P[α][A | B] = P[α][(A·B)] / P[α][B].
5. Bayesians often refer to the probability of an evidence statement on a hypothesis, P[e | h·b·c], as the likelihood of the hypothesis. This can be a somewhat confusing convention since it is
clearly the evidence that is made likely to whatever degree by the hypothesis. So, I will disregard the usual convention here. Also, presentations of probabilistic inductive logic often suppress c
and b, and simply write ‘P[e | h]’. But c and b are important parts of the logic of the likelihoods. So I will continue to make them explicit.
6. These attempts have not been wholly satisfactory thus far, but research continues. For an illuminating discussion of the logic of direct inference and the difficulties involved in providing a
formal account, see the series of papers (Levi, 1977), (Kyburg, 1978) and (Levi, 1978). Levi (1980) develops a very sophisticated approach.
Kyburg has developed a logic of statistical inference based solely on logical direct inference probabilities (Kyburg, 1974). Kyburg's logical probabilities do not satisfy the usual axioms of
probability theory. The series of papers cited above compares Kyburg's approach to a kind of Bayesian inductive logic championed by Levi (e.g., in Levi, 1967).
7. This idea should not be confused with positivism. A version of positivism applied to likelihoods would hold that if two theories assign the same likelihood values to all possible evidence claims,
then they are essentially the same theory, though they may be couched in different words. In short: same likelihoods implies same theory. The view suggested here, however, is not positivism, but its
inverse, which should be much less controversial: different likelihoods implies different theories. That is, given that all of the relevant background and auxiliaries are made explicit (represented
in ‘b’), if two scientists disagree significantly about the likelihoods of important evidence claims on a given hypothesis, they must understand the empirical content of that hypothesis quite
differently. To that extent, though they may employ the same syntactic expressions, they use them to express empirically distinct hypotheses.
8. Call an object grue at a given time just in case either the time is earlier than the first second of the year 2030 and the object is green or the time is not earlier than the first second of 2030
and the object is blue. Now the statement ‘All emeralds are green (at all times)’ has the same syntactic structure as ‘All emeralds are grue (at all times)’. So, if syntactic structure determines
priors, then these two hypotheses should have the same prior probabilities. Indeed, both should have prior probabilities approaching 0. For, there are an infinite number of competitors of these two
hypotheses, each sharing the same syntactic structure: consider the hypotheses ‘All emeralds are grue[n] (at all times)’, where an object is grue[n] at a given time just in case either the time is
earlier than the first second of the n^th day after January 1, 2030, and the object is green or the time is not earlier than the first second of the n^th day after January 1, 2030, and the object is
blue. A purely syntactic specification of the priors should assign all of these hypotheses the same prior probability. But these are mutually exclusive hypotheses; so their prior probabilities must
sum to a value no greater than 1. The only way this can happen is for ‘All emeralds are green’ and each of its grue[n] competitors to have prior probability values either equal to 0 or
infinitesimally close to it.
9. This assumption may be substantially relaxed without affecting the analysis below; we might instead only suppose that the ratios P[α][c^n | h[j]·b]/P[α][c^n | h[i]·b] are bounded so as not to get
exceptionally far from 1. If that supposition were to fail, then the mere occurrence of the experimental conditions would count as very strong evidence for or against hypotheses — a highly
implausible effect. Our analysis could include such bounded condition-ratios, but this would only add inessential complexity to our treatment.
10. For example, when a new disease is discovered, a new hypothesis h[u+1] about that disease being a possible cause of patients’ symptoms is made explicit. The old catch-all was, “the symptoms are
caused by some unknown disease — some disease other than h[1],…, h[u]”. So the new catch-all hypothesis must now state that “the symptoms are caused by one of the remaining unknown diseases — some
disease other than h[1],…, h[u], h[u+1]”. And, clearly, P[α][h[K] | b] = P[α][~h[1]·…·~h[u] | b] = P[α][~h[1]·…·~h[u]· (h[u+1]∨~h[u+1]) | b] = P[α][~h[1]·…·~h[u]·~h[u+1] | b] + P[α][h[u+1] | b] = P
[α][h[K*] | b] + P[α][h[u+1] | b]. Thus, the new hypothesis h[u+1] is “peeled off” of the old catch-all hypothesis h[K], leaving a new catch-all hypothesis h[K*] with a prior probability value equal
to that of the old catch-all minus the prior of the new hypothesis.
11. This claim depends, of course, on h[i] being evidentially distinct from each alternative h[j]. I.e., there must be conditions c[k] with possible outcomes o[ku] on which the likelihoods differ: P[
o[ku] | h[i]·b·c[k]] ≠ P[o[ku] | h[j]·b·c[k]]. Otherwise h[i] and h[j] are empirically equivalent, and no amount of evidence can support one over the other. (Did you think a confirmation theory could
possibly do better? — could somehow employ evidence to confirm the true hypothesis over evidentially equivalent rivals?) If the true hypothesis has evidentially equivalent rivals, then convergence
result just implies that the odds against the disjunction of the true hypothesis with these rivals very probably goes to 0, so the posterior probability of this disjunction goes to 1. Among
evidentially equivalent hypotheses the ratio of their posterior probabilities equals the ratio of their priors: P[α][h[j] | b·c^n·e^n] / P[α][h[i] | b·c^n·e^n] = P[α][h[j] | b] / P[α][h[i] | b]. So
the true hypothesis will have a posterior probability near 1 (after evidence drives the posteriors of evidentially distinguishable rivals near to 0) just in case plausibility arguments and
considerations (expressed in b) make each evidentially indistinguishible rival so much less plausible by comparison that the sum of each of their comparative plausibilities (as compared to the true
hypothesis) remains very small.
One more comment about this. It is tempting to identify evidential distinguishability (via the evidential likelihoods) with empirical distinguishability. But many plausibility arguments in the
sciences, such as thought experiments, draw on broadly empirical considerations, on what we know or strongly suspect about how the world works based on our experience of the world. Although this kind
of “evidence” may not be representable via evidential likelihoods (because the hypotheses it bears on don't deductively or probabilistically imply it), it often plays an important role in scientific
assessments of hypotheses — in assessments of whether a hypothesis is so extraordinary that only really extraordinary likelihood evidence could rescue it. It is (arguably) a distinct virtue of the
Bayesian logic of evidential support that it permits such considerations to be figured into the net evaluation of support for hypotheses.
12. This is a good place to describe one reason for thinking that inductive support functions must be distinct from subjectivist or personalist degree-of-belief functions. Although likelihoods have a
high degree of objectivity in many scientific contexts, it is difficult for belief functions to properly represent objective likelihoods. This is an aspect of the problem of old evidence.
Belief functions are supposed to provide an idealized model of belief strengths for agents. They extend the notion of ideally consistent belief to a probabilistic notion of ideally coherent belief
strengths. There is no harm in this kind of idealization. It is supposed to supply a normative guide for real decision making. An agent is supposed to make decisions based on her belief-strengths
about the state of the world, her belief strengths about possible consequences of actions, and her assessment of the desirability (or utility) of these consequences. But the very role that belief
functions are supposed to play in decision making makes them ill-suited to inductive inferences where the likelihoods are often supposed to be objective, or at least possess inter-subjectively agreed
values that represent the empirical import of hypotheses. For the purposes of decision making, degree-of-belief functions should represent the agent's belief strengths based on everything she
presently knows. So, degree-of-belief likelihoods must represent how strongly the agent would believe the evidence if the hypothesis were added to everything else she presently knows. However,
support-function likelihoods are supposed to represent what the hypothesis (together with explicit background and experimental conditions) says or implies about the evidence. As a result,
degree-of-belief likelihoods are saddled with a version of the problem of old evidence – a problem not shared by support function likelihoods. Furthermore, it turns out that the old evidence problem
for likelihoods is much worse than is usually recognized.
Here is the problem. If the agent is already certain of an evidence statement e, then her belief-function likelihoods for that statement must be 1 on every hypothesis. I.e., if Q[γ] is her belief
function and Q[γ][e] = 1, then it follows from the axioms of probability theory that Q[γ][e | h[i]·b·c] = 1, regardless of what h[i] says — even if h[i] implies that e is quite unlikely (given b·c).
But the problem goes even deeper. It not only applies to evidence that the agent knows with certainty. It turns out that almost anything the agent learns that can change how strongly she believes e
will also influence the value of her belief-function likelihood for e, because Q[γ][e | h[i]·b·c] represents the agent's belief strength given everything she knows.
To see the difficulty with less-than-certain evidence, consider the following example. Let e be any statement that is statistically implied to degree r by a hypothesis h together with experimental
conditions c (e.g. e says “the coin lands heads on the next toss” and h·c says “the coin is fair and is tossed in the usual way on the next toss”). Then the correct objective likelihood value is just
P[e | h·c] = r (e.g. for r = 1/2). Let d be a statement that is intuitively not relevant in any way to how likely e should be on h·c (e.g. let d say “Jim will be really pleased with the outcome of
that next toss”). Suppose some rational agent has a degree-of-belief function Q for which the likelihood for e due to h·c agrees with the objective value: Q[e | h·c] = r (e.g. with r = 1/2).
Our analysis will show that this agent's belief-strength for d given ~e·h·c will be a relevant factor; so suppose that her degree-of-belief in that regard has any value s other than 1: Q[d | ~e·h·c]
= s < 1 (e.g., suppose s = 1/2). This is a very weak supposition. It only says that adding ~e·h·c to everything else the agent currently knows leaves her less than certain that d is true.
Now, suppose this agent learns the following bit of new information in a completely convincing way (e.g. I seriously tell her so, and she believes me completely): (d∨e) (i.e., Jim will be really
pleased with the outcome of the next toss unless it comes up heads).
Thus, on the usual Bayesian degree-of-belief account the agent is supposed to update her belief function Q to arrive at a new belief function Q[new] by the updating rule:
Q[new][S] = Q[S | (d∨e)], for each statement S.
However, this update of the agent's belief function has to screw up the objectivity of her new belief-function likelihood for e, because she now should have:

Q[new][e | h·c] = Q[new][e·h·c] / Q[new][h·c]
= Q[e·h·c | (d∨e)] / Q[h·c | (d∨e)]
= Q[(d∨e)·(e·h·c)] / Q[(d∨e)·(h·c)]
= Q[(d∨e)·e | h·c] / Q[(d∨e) | h·c]
= Q[e | h·c] / Q[((d·~e)∨e) | h·c]
= Q[e | h·c] / [Q[e | h·c] + Q[d·~e | h·c]]
= Q[e | h·c] / [Q[e | h·c] + Q[d | ~e·h·c] × Q[~e | h·c]]
= r / [r + s×(1− r)]
= 1 / [1 + s×(1− r)/r].

Thus, the updated belief function likelihood must have value Q[new][e | h·c] = 1 / [1 + s×(1− r)/r]. This factor can be equal to the correct likelihood value r just in case s = 1. For example, for r = 1/2 and s = 1/2 we get Q[new][e | h·c] = 2/3.
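A short Python sketch makes the distortion easy to explore; r and s are the free parameters from the example above, and the particular values tried are arbitrary.

    def updated_likelihood(r, s):
        # Q_new[e | h·c] = 1 / (1 + s×(1−r)/r), per the derivation above.
        return 1.0 / (1.0 + s * (1.0 - r) / r)

    for s in (1.0, 0.5, 0.1):
        print(f"r = 0.5, s = {s}: Q_new = {updated_likelihood(0.5, s):.3f}")
    # Only s = 1 recovers the objective value 0.5; s = 0.5 gives 2/3,
    # matching the example in the text, and smaller s inflates it further.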
The point is that even the most trivial knowledge of disjunctive claims involving e may completely upset the value of the likelihood for an agent's belief function. And an agent will almost always
have some such trivial knowledge. Updating on such conditionals can force the agent's belief functions to deviate widely from the evidentially relevant objective values of likelihoods on which
scientific hypotheses should be tested.
More generally, it can be shown that the incorporation into a belief function Q of almost any kind of evidence for or against the truth of a prospective evidence claim e — even uncertain evidence for
e, as may come through Jeffrey updating — completely undermines the objective or inter-subjectively agreed likelihoods that a belief function might have expressed before updating. This should be no
surprise. The agent's belief function likelihoods reflect her total degree-of-belief in e, based on a hypothesis h together with everything else she knows about e. So the agent's present belief
function may capture appropriate public likelihoods for e only if e is completely isolated from the agents other beliefs. And this will rarely be the case.
One Bayesian subjectivist response to this kind of problem is that the belief functions employed in scientific inductive inferences should often be “counterfactual” belief functions, which represent
what the agent would believe if e were subtracted (in some suitable way) from everything else she knows (see, e.g., Howson & Urbach, 1993). However, our examples show that merely subtracting e won't
do. One must also subtract any disjunctive statements containing e. And it can be shown that one must subtract any uncertain evidence for or against e as well. So the counterfactual belief function
idea needs a lot of working out if it is to rescue the idea that subjectivist Bayesian belief functions can provide a viable account of the likelihoods employed by the sciences in inductive
13. To see the point more clearly, consider an example. To keep things simple, let's suppose our background b says that the chances of heads for tosses of this coin is some whole percentage between
0% and 100%. Let c say that the coin is tossed in the usual random way; let e say that the coin comes up heads; and for each r that is a whole fraction of 100 between 0 and 1, let h[[r]] be the
simple statistical hypothesis asserting that the chance of heads on each random toss of this coin is r. Now consider the composite statistical hypothesis h[[>.65]], which asserts that the chance of
heads on each random (independent) toss is greater than .65. From the axioms of probability we derive the following relationship: P[α][e | h[[>.65]]·c·b] = P[e | h[[.66]]·c·b] × P[α][h[[.66]] | h
[[>.65]]·c·b] + P[e | h[[.67]]·c·b] × P[α][h[[.67]] | h[[>.65]]·c·b] + …+ P[e | h[[1]]·c·b] × P[α][h[[1]] | h[[>.65]]·c·b]. The issue for the likelihoodist is that the values of the terms of form P
[α][h[[r]] | h[[>.65]]·c·b] are not objectively specified by the composite hypothesis h[[>.65]] (together with c·b), but the value of the likelihood P[α][e | h[[>.65]]·c·b] depends essentially on
these non-objective factors. So, likelihoods based on composite statistical hypotheses fail to possess the kind of objectivity that likelihoodists require.
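A quick computation makes the likelihoodist's complaint vivid. In the Python sketch below the composite likelihood is evaluated as a weighted average of the simple-hypothesis likelihoods of "heads"; the uniform and skewed weightings are arbitrary stand-ins for the non-objective factors P[α][h[[r]] | h[[>.65]]·c·b].

    # Likelihood of heads (e) on the composite hypothesis "chance > .65",
    # as a weighted average over the simple hypotheses h_r for r = .66, ..., 1.00.
    rs = [k / 100 for k in range(66, 101)]

    def composite_likelihood(weights):
        return sum(r * w for r, w in zip(rs, weights)) / sum(weights)

    uniform = [1.0] * len(rs)                         # equal weight on each h_r
    skewed = [10.0 if r >= 0.9 else 1.0 for r in rs]  # weight piled near r = 1

    print(composite_likelihood(uniform))  # 0.83
    print(composite_likelihood(skewed))   # about 0.92: same hypothesis, different likelihood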
14. The Law of Likelihood and the Likelihood Principle have been formulated in slightly different ways by various logicians and statisticians. The Law of Likelihood was first identified by that name
in Hacking (1965), and has been invoked more recently by the likelihoodist statisticians A.F.W. Edwards (1972) and R. Royall (1997). R.A. Fisher (1922) argued for the Likelihood Principle early in
the 20^th century, although he didn't call it that. One of the first places it is discussed under that name is (Savage, et al., 1962).
15. What it means for a sample to be randomly selected from a population is philosophically controversial. Various analyses of the concept have been proposed, and disputed. For our purposes an
account of the following sort will suffice. To say
S is a random sample of population B with respect to attribute A
means that
the selection set S is generated by a process that has an objective chance (or propensity) r of choosing individual objects that have attribute A from among the objects in population B, where on
each selection the chance value r agrees with the value r of the frequency of As among the Bs, F[A,B].
Defined this way, randomness implies probabilistic independence among the outcomes of selections with regard to whether they exhibit attribute A, on any given hypothesis about the true value of the
frequency r of As among the Bs.
The tricky part of generating a randomly selected set from the population is to find a selection process for which the chance of selecting an A each time matches the true frequency without already
knowing what the true frequency value is — i.e. without already knowing what the value of r is. However, there clearly are ways to do this. Here is one way:
the sample S is generated by a process that on each selection gives each member of B an equal chance of being selected into S (like drawing balls from a well-shaken urn).
Here, schematically, is another way:
find a subclass of B, call it C, from which S can be generated by a process that gives every member of C an equal chance of being selected into S, where C is representative of B with respect to A
in the sense that the frequency of A in C is almost precisely the same as the frequency of A in B.
Pollsters use a process of this kind. Ideally a poll of registered voters, population B, should select a sample S in a way that gives every registered voter the same chance of getting selected into S.
But that may be impractical. However, it suffices if the sample is selected from a representative subpopulation C of B — e.g., from registered voters who answered the telephone between the hours of 7
PM and 9 PM in the middle of the week. Of course, the claim that a given subpopulation C is representative is itself a hypothesis that is open to inductive support by evidence. Professional polling
organizations do a lot of research to calibrate their sampling technique, to find out what sort of subpopulations C they may draw on as highly representative. For example, one way to see if
registered voters who answer the phone during the evening, mid-week, are likely to constitute a representative sample is to conduct a large poll of such voters immediately after an election, when the
result is known, to see how representative of the actual vote count the count from the subpopulation turns out to be.
Notice that although the selection set S is selected from B, S cannot be a subset of B, not if S can be generated by sampling with replacement. For, a specific member of B may be randomly selected
into S more than once. If S were a subset of B, any specific member of B could only occur once in S. That is, consider the case where S consists of n selections from B, but where the process happens
to select the same member b of B twice. Then, were S a subset of B, although b is selected into S twice, S can only possess b as a member once, so S has at most n−1 members after all (even fewer if
other members of B are selected more than once). So, rather than being members of B, the members of S must be representations of members of B, like names, where the same member of B may be
represented by different names. However, the representations (or names) in S technically may not be the sorts of things that can possess attribute A. So, technically, on this way of handling the
problem, when we say that a member of S exhibits A, this is shorthand for: the referent in B of that member of S possesses attribute A.
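The distinction between draws and members is easy to exhibit in code. In this Python sketch (the population and the seed are arbitrary), random.choices samples with replacement, so the same member of B can be drawn into S more than once; S is therefore a sequence of selections rather than a subset of B.

    import random

    B = ["b1", "b2", "b3", "b4", "b5"]  # a small population
    random.seed(1)                      # fixed seed, for a repeatable illustration
    S = random.choices(B, k=5)          # sampling WITH replacement
    print(S)
    print(len(S), "draws, but only", len(set(S)), "distinct members selected")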
16. This is closely analogous to the Stable-Estimation Theorem of (Edwards, Lindman, Savage, 1993). Here is a proof of Case 1, i.e. where the number of members of the reference class B is finite and
where for some integer u at least as large as the size of B there is a specific (perhaps very large) integer K such that the prior probability of a hypothesis stating a frequency outside region R is
never more than K times as large as a hypothesis stating a frequency within region R. (The proof of Case 2 is almost exactly the same, but draws on integrals wherever the present proof draws on sums
using the ‘∑’ expression.)
A few observations before proceeding to the main derivation:
1. The hypotheses under consideration consist of all expressions of form F[A,B] = k/u, where u is as described above and k is a non-negative integer between 0 and u.
2. R is some set of fractions of form k/u for a contiguous sequence of non-negative integers k that includes the sample frequency m/n.
3. In the following derivation all sums over values r in R are abbreviations for sums over integers k such that k/u is in R; similarly, all sums over values s not in R are abbreviations for sums
over integers k such that k/u is not in R. The sum over {s | s=k/u} represents the sum over all integers k from 0 through u.
4. Define L to be the smallest value of a prior probability P[α][F[A,B]=r | b] for r a fraction in R. Notice that L > 0 because, by supposition, finite K ≥ P[α][F[A,B]=s | b] / P[α][F[A,B]=r | b]
for the largest value of P[α][F[A,B]=s | b] for which s is outside of R and the smallest value of P[α][F[A,B]=r | b] for which r is within region R.
5. Thus, from the definition of L and of K, it follows that: K ≥ P[α][F[A,B]=s | b] / L for each value of P[α][F[A,B]=s | b] for which s is outside of R; and 1 ≤ P[α][F[A,B]=r | b] / L for each
value of P[α][F[A,B]=r | b] for which r is inside of R.
6. It follows that:
∑[s∉R] s^m×(1−s)^(n−m) × (P[α][F[A,B]=s | b] / L) ≤ K × ∑[s∉R] s^m×(1−s)^(n−m)
and
∑[r∈R] r^m×(1−r)^(n−m) × (P[α][F[A,B]=r | b] / L) ≥ ∑[r∈R] r^m×(1−r)^(n−m).
7. For β[R, m+1, n−m+1] defined as ∫[R] r^m (1−r)^(n−m) dr / ∫[0]^1 r^m (1−r)^(n−m) dr, when u is large, it's an established mathematical fact that
(∑[r∈R] r^m×(1−r)^(n−m)) / (∑[s∈{s | s=k/u}] s^m×(1−s)^(n−m))
is extremely close to the value of β[R, m+1, n−m+1].
We now proceed to the main part of the derivation.
From the Odds Form of Bayes' Theorem (Equation 10) we have,
Ω[α][F[A,B]∉R | F[A,S]=m/n · Rnd[S,B,A] · Size[S]=n · b]

= (∑[s∉R] P[α][F[A,B]=s | F[A,S]=m/n · Rnd[S,B,A] · Size[S]=n · b]) / (∑[r∈R] P[α][F[A,B]=r | F[A,S]=m/n · Rnd[S,B,A] · Size[S]=n · b])

= (∑[s∉R] P[F[A,S]=m/n | F[A,B]=s · Rnd[S,B,A] · Size[S]=n · b] × P[α][F[A,B]=s | b]) / (∑[r∈R] P[F[A,S]=m/n | F[A,B]=r · Rnd[S,B,A] · Size[S]=n · b] × P[α][F[A,B]=r | b])

= (∑[s∉R] s^m×(1−s)^(n−m) × P[α][F[A,B]=s | b]) / (∑[r∈R] r^m×(1−r)^(n−m) × P[α][F[A,B]=r | b])

= (∑[s∉R] s^m×(1−s)^(n−m) × (P[α][F[A,B]=s | b] / L)) / (∑[r∈R] r^m×(1−r)^(n−m) × (P[α][F[A,B]=r | b] / L))

≤ K × (∑[s∉R] s^m×(1−s)^(n−m)) / (∑[r∈R] r^m×(1−r)^(n−m))   [by observation 6]

≈ K×[(1/β[R, m+1, n−m+1]) − 1]   [by observation 7].

So,
Ω[α][F[A,B]∉R | F[A,S]=m/n · Rnd[S,B,A] · Size[S]=n · b] ≤ K×[(1/β[R, m+1, n−m+1]) − 1].

Then by equation (11), which expresses the relationship between posterior probability and posterior odds against,

P[α][F[A,B]∈R | F[A,S]=m/n · Rnd[S,B,A] · Size[S]=n · b]
= 1 / (1 + Ω[α][F[A,B]∉R | F[A,S]=m/n · Rnd[S,B,A] · Size[S]=n · b])
≥ 1 / (1 + K×[(1/β[R, m+1, n−m+1]) − 1]).
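For readers who want numbers, the final bound is easy to evaluate. The Python sketch below assumes SciPy is available and uses the fact that β[R, m+1, n−m+1] is just the probability a Beta(m+1, n−m+1) distribution assigns to R; the sample data, the region R, and the bound K are arbitrary illustrative values.

    from scipy.stats import beta

    def posterior_lower_bound(m, n, r_lo, r_hi, K):
        # 1 / (1 + K×((1/β_R) − 1)), with β_R the Beta(m+1, n−m+1)
        # probability of the region R = (r_lo, r_hi).
        dist = beta(m + 1, n - m + 1)
        beta_R = dist.cdf(r_hi) - dist.cdf(r_lo)
        return 1.0 / (1.0 + K * (1.0 / beta_R - 1.0))

    # 620 As among 1000 sampled Bs, R = (0.58, 0.66), prior bias bound K = 10:
    print(posterior_lower_bound(620, 1000, 0.58, 0.66, 10))  # roughly 0.9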
17. To get a better idea of the import of this theorem, let's consider some specific values. First notice that the factor r×(1−r) can never be larger than (1/2)×(1/2) = 1/4; and the closer r is to 1
or 0, the smaller r×(1−r) becomes. So, whatever the value of r, the factor q/(r×(1−r)/n)^½ ≥ 2×q×n^½. Thus, for any chosen value of q,
P[r−q < F[A,S] < r+q | F[A,B] = r·Rnd[S,B,A]·Size[S] = n] ≥ 1 − 2×Φ[−2×q×n^½].
For example, if q = .05 and n = 400, then we have (for any value of r),
P[r−.05 < F[A,S] < r+.05 | F[A,B] = r·Rnd[S,B,A]·Size[S] = 400]
≥ .95.
For n = 900 (and margin q = .05) this lower bound raises to .997:
P[r−.05 < F[A,S] < r+.05 | F[A,B] = r·Rnd[S,B,A]·Size[S] = 900]
≥ .997.
If we are interested in a smaller margin of error q, we can keep the same sample size and find the value of the lower bound for that value of q. For example,
P[r−.03 < F[A,S] < r+.03 | F[A,B] = r·Rnd[S,B,A]·Size[S] = 900]
≥ .928.
By increasing the sample size the bound on the likelihood can be made as close to 1 as we want, for any margin q we choose. For example:
P[r−.01<F[A,S] <r+.01 | F[A,B] = r·Rnd[S,B,A]·Size[S] = 38000]
≥ .9999.
As the sample size n becomes larger, it becomes extremely likely that the sample frequency will come to within any specified region close to the true frequency r, as close as you wish.
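These lower bounds are easy to recompute. A Python sketch (assuming SciPy is available) using the standard normal CDF for Φ:

    from scipy.stats import norm

    def likelihood_lower_bound(q, n):
        # 1 − 2×Φ[−2×q×n^(1/2)]: a lower bound, valid for every r, on the chance
        # that the sample frequency falls within q of the true frequency r.
        return 1 - 2 * norm.cdf(-2 * q * n ** 0.5)

    for q, n in [(0.05, 400), (0.05, 900), (0.03, 900), (0.01, 38000)]:
        print(f"q = {q}, n = {n}: bound = {likelihood_lower_bound(q, n):.4f}")
    # reproduces the .95, .997, .928 and .9999 figures above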
18. That is, for each inductive support function P[α], the posterior P[α][h[j] | b·c^n·e^n] must go to 0 as the ratio P[α][h[j] | b·c^n·e^n] / P[α][h[i] | b·c^n·e^n] goes to 0; and that must occur if
the likelihood ratios P[e^n | h[j]·b·c^n] / P[e^n | h[i]·b·c^n] approach 0, provided that the prior probability P[α][h[i] | b] is greater than 0. The Likelihood Ratio Convergence Theorem will
show that when h[i]·b is true, it is very likely that the evidence will indeed be such as to drive the likelihood ratios as near to 0 as you please, for a long enough (or strong enough) evidence
stream. (If the stream is strong in that the likelihood ratios of individual bits of evidence are small, then to bring about a very small cumulative likelihood ratio, the evidence stream need not be
as long.) As likelihood ratios head towards 0, the only way a Bayesian agent can avoid having her inductive support function(s) yield posterior probabilities for h[j] that approach 0 (as n gets
large) is to continually change her prior probability assessments. That means either continually finding and adding new plausibility arguments (i.e. adding to or modifying b) that on balance favor h
[j] over h[i], or continually reassessing the support strength due to plausibility arguments already available, or both.
Technically, continual reassessments of support strengths that favor h[j] over h[i] based on already extant arguments (in b) means switching to new support functions (or new vagueness sets of them)
that assign h[j] ever higher prior probabilities as compared to h[i] based on the same arguments in b. In any case, such revisions of argument strengths may avoid the convergence towards 0 of the
posterior probability of h[j] only if it proceeds at a rate that keeps ahead of the rate at which the evidence drives the likelihood ratios towards 0.
For a thorough presentation of the most prominent Bayesian convergence results and a discussion of their weaknesses see (Earman, 1992, Ch. 6). However, Earman does not discuss the convergence
theorems under consideration here (due to the fact that the convergence results discussed here first appeared in (Hawthorne, 1993), just after Earman's book came out).
19. In scientific contexts all of the most important kinds of cases where large components of the evidence fail to be result-independent of one another are cases where some part of the total evidence
helps to tie down the numerical value of a parameter that plays an important role in the likelihood values the hypothesis specifies for other large parts of the total evidence. In cases where this
only happens rather locally, where the evidence for a parameter value influences the likelihoods of only a very small part of the total evidence that bears on the hypothesis, we can treat the
conjunction of the evidence for the parameter value with the evidential outcomes whose likelihood the parameter value influences as a single chunk of evidence, which is then result-independent of the
rest of the evidence (on each alternative hypothesis). This is the sort of chunking of the evidence into result-independent parts suggested in the main text.
However, in cases where the value of a parameter left unspecified by the hypothesis has a wide-ranging influence on many of the likelihood values the hypothesis specifies, another strategy for
obtaining result-independence among these components of the evidence will do the job. A hypothesis that has an unspecified parameter value is in effect equivalent to a disjunction of more specific
hypotheses, where each disjunct consists of a more precise version of the original hypothesis, a version in which the value for the parameter has been “filled in”. Relative to each of these more
precise hypotheses, any evidence for or against the parameter value that hypothesis specifies is evidence for or against that more precise hypothesis itself. Furthermore, the evidence whose
likelihood values depend on the parameter value (and because of that, failed to be result-independent of the parameter value evidence relative to the original hypothesis) is result-independent of the
parameter value evidence relative to each of these more precise hypotheses — because each of the precise hypotheses already identifies precisely what (it claims) the value of the parameter is. Thus,
wherever the workings of the logic of evidential support is made more perspicuous by treating evidence as composed of result-independent chunks, one may treat hypotheses whose unspecified parameter
values interfere with result-independence as disjunctively composite hypotheses, and apply the evidential logic to these more specific disjuncts, and thereby regain result-independence.
20. Technically, suppose that O[k] can be further "subdivided" into more outcome-descriptions by replacing o[kv] with two "mutually exclusive parts", o[kv]^* and o[kv]^#, to produce a new outcome space O[k]^* = {o[k1],…,o[kv]^*,o[kv]^#,…,o[kw]}, where P[o[kv]^*·o[kv]^# | h[i]·b·c[k]] = 0 and P[o[kv]^* | h[i]·b·c[k]] + P[o[kv]^# | h[i]·b·c[k]] = P[o[kv] | h[i]·b·c[k]]; and suppose similar relationships hold for h[j]. Then the new EQI^* (based on O[k]^*) is greater than or equal to EQI (based on O[k]); and EQI^* > EQI just in case at least one of the new likelihood ratios, e.g., P[o[kv]^* | h[i]·b·c[k]] / P[o[kv]^* | h[j]·b·c[k]], differs in value from the "undivided" outcome's likelihood ratio, P[o[kv] | h[i]·b·c[k]] / P[o[kv] | h[j]·b·c[k]]. A supplement linked to this article proves this claim.
21. The likely rate of convergence will almost always be much faster than the worst case bound provided by Theorem 2. To see the point more clearly, let's look at a very simple example. Suppose h[i]
says that a certain bent coin has a propensity for “heads” of 2/3 and h[j] says the propensity is 1/3. Let the evidence stream consist of outcomes of tosses. In this case the average EQI equals the
EQI of each toss, which is 1/3; and the smallest possible likelihood ratio occurs for “heads”, which yields the value γ = ½. So, the value of the lower bound given by Theorem 2 for the likelihood of
getting an outcome sequences with a likelihood ratio below ε (for h[j] over h[i]) is
1 − (1/n)(log ½)^2/((1/3) + (log ε)/n)^2 = 1 − 9/(n×(1 + 3(log ε)/n)^2).
Thus, according to the theorem, the likelihood of getting an outcome sequence with a likelihood ratio less than ε = 1/16 (= .0625) when h[i] is true and the number of tosses is n = 52 is at least .70;
and for n = 204 tosses the likelihood is at least .95.
To see the amount by which the lower bound provided by the theorem is in fact overly cautious, consider what the usual binomial distribution for the coin tosses in this example implies about the
likely values of the likelihood ratios. The likelihood ratio for exactly k "heads" in n tosses is ((1/3)^k (2/3)^(n−k)) / ((2/3)^k (1/3)^(n−k)) = 2^(n−2k); and we want this likelihood ratio to have a value
less than ε. A bit of algebraic manipulation shows that to get this likelihood ratio value to be below ε, the percentage of “heads” needs to be k/n > ½ − ½(log ε)/n. Using the normal approximation to
the binomial distribution (with mean = 2/3 and variance = (2/3)(1/3)/n) the actual likelihood of obtaining an outcome sequence having more than ½ − ½(log ε)/n “heads” (which we just saw corresponds
to getting a likelihood ratio less than ε, thus disfavoring the 1/3 propensity hypothesis as compared to the 2/3 propensity hypothesis by that much) when the true propensity for “heads” is 2/3 is
given by the formula
Φ[(mean − (½ − ½(log ε)/n))/(variance)^½] = Φ[(1/8)^½n^½(1 + 3(log ε)/n)]
(where Φ[x] gives the value of the standard normal distribution from −∞ to x). Now let ε = 1/16 (= .0625), as before. So the actual likelihood of obtaining a stream of outcomes with likelihood ratio
this small when h[i] is true and the number of tosses is n = 52 is Φ[1.96] > .975, whereas the lower bound given by Theorem 2 was .70. And if the number of tosses is increased to n = 204, the
likelihood of obtaining an outcome sequence with a likelihood ratio this small (i.e., ε = 1/16) is Φ[4.75] > .999999, whereas the lower bound from Theorem 2 for this likelihood is .95. Indeed, to
actually get a likelihood of .95 that the evidence stream will produce a likelihood ratio less than ε = 1/16, the number of tosses needed is only n = 43, rather than the 204 tosses the bound given by
the theorem requires in order to get up to the value .95. (Note: These examples employ “identically distributed” trials — repeated tosses of a coin — as an illustration. But Convergence Theorem 2
applies much more generally. It applies to any evidence sequence, no matter how diverse the probability distributions for the various experiments or observations in the sequence.)
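The comparison can be reproduced with a short Python sketch (assuming SciPy is available; logarithms are base 2 throughout, matching the EQI calculation above).

    from math import log2
    from scipy.stats import binom

    def theorem_bound(n, eps):
        # Worst-case lower bound from Theorem 2 for the 2/3- vs 1/3-coin example.
        return 1 - 9 / (n * (1 + 3 * log2(eps) / n) ** 2)

    def exact_likelihood(n, eps):
        # Exact chance, when the true propensity is 2/3, that the likelihood ratio
        # 2^(n−2k) falls below eps, i.e. that more than (n − log2(eps))/2 heads occur.
        k_min = (n - log2(eps)) / 2
        return binom.sf(k_min, n, 2 / 3)  # P(K > k_min)

    for n in (52, 204):
        print(n, round(theorem_bound(n, 1 / 16), 2), round(exact_likelihood(n, 1 / 16), 4))
    # The exact values run well above the worst-case bounds of .70 and .95.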
22. It should now be clear why the boundedness of EQI above 0 is important. Convergence Theorem 2 applies only when EQI[c^n | h[i]/h[j] | b] > −(log ε)/n. But this requirement is not a strong
assumption. For, the Nonnegativity of EQI Theorem shows that the empirical distinctness of two hypotheses on a single possible outcome suffices to make the average EQI positive for the whole sequence
of experiments. So, given any small fraction ε > 0, the value of −(log ε)/n (which is always greater than 0 when ε < 1) will eventually become smaller than EQI, provided that the degree to which the hypotheses are empirically distinct for the various observations c[k] does not on average degrade too much as the length n of the evidence stream increases.
When the possible outcomes for the sequence of observations are independent and identically distributed, Theorems 1 and 2 effectively reduce to L. J. Savage's Bayesian Convergence Theorem [Savage,
pg. 52-54], although Savage's theorem doesn't supply explicit lower bounds on the probability that the likelihood ratio will be small. Independent, identically distributed outcomes most commonly
result from the repetition of identical statistical experiments (e.g., repeated tosses of a coin, or repeated measurements of quantum systems prepared in identical states). In such experiments a
hypothesis will specify the same likelihoods for the same kinds of outcomes from one observation to the next. So EQI will remain constant as the number of experiments, n, increases. However, Theorems
1 and 2 are much more general. They continue to hold when the sequence of observations encompasses completely unrelated experiments that have different distributions on outcomes — experiments that
have nothing in common but their connection to the hypotheses they test.
23. In many scientific contexts this is the best we can hope for. But it still provides a very reasonable representation of inductive support. Consider, for example, the hypothesis that the land
masses of Africa and South America separated and drifted apart over the eons, the drift hypothesis, as opposed to the hypothesis that the continents have fixed positions acquired when the earth first
formed and cooled and contracted, the contraction hypothesis. One may not be able to determine anything like precise likelihoods, on each hypothesis, for the evidence that: (1) the shape of the east
coast of South America matches the shape of the west coast of Africa as closely as it in fact does; (2) the geology of the two coasts match up so closely when they are “fitted together” in the
obvious way; (3) the plant and animal species on these distant continents should be as similar as they are, as compared to how similar species are among other distant continents. Although neither the
drift hypothesis nor the contraction hypothesis supplies anything like precise likelihoods for these evidential claims, experts readily agree that each of these observations is much more likely on
the drift hypothesis than on the contraction hypothesis. That is, the likelihood ratio for this evidence on the contraction hypothesis as compared to the drift hypothesis is very small. Thus, jointly
these observations constitute very strong evidence for drift over contraction.
Historically, the case of continental drift is more complicated. Geologists tended to largely dismiss this evidence until the 1960s. This was not because the evidence wasn't strong in its own right.
Rather, this evidence was found unconvincing because it was not sufficient to overcome prior plausibility considerations that made the drift hypothesis extremely implausible — much less plausible
than the contraction hypothesis. The problem was that there seemed to be no plausible mechanism by which drift might occur. It was argued, quite plausibly, that no known force could push or pull the
continents apart, and that the less dense continental material could not push through the denser material that makes up the ocean floor. These plausibility objections were overcome when a plausible
mechanism was articulated — i.e. the continental crust floats atop molten material and moves apart as convection currents in the molten material carry it along. The case was pretty well clinched when
evidence for this mechanism was found in the form of “spreading zones” containing alternating strips of magnetized material at regular distances from mid-ocean ridges. The magnetic alignments of
materials in these strips corresponds closely to the magnetic alignments found in magnetic materials in dateable sedimentary layers at other locations on the earth. These magnetic alignments indicate
time periods when the direction of earth's magnetic field has reversed. And this gave geologists a way of measuring the rate at which the sea floor might spread and the continents move apart.
Although geologists may not be able to determine anything like precise values for the likelihoods of any of this evidence on each of the alternative hypotheses, the evidence is universally agreed to
be much more likely on the drift hypothesis than on the alternative contraction hypothesis. The likelihood ratio for this evidence on the contraction hypothesis as compared to the drift hypothesis is
somewhat vague, but extremely small. The vagueness is only in regard to how extremely small the likelihood ratio is. Furthermore, with the emergence of a plausible mechanism, the drift hypothesis is no longer so overwhelmingly implausible prior to taking the likelihood evidence into account. Thus, even when precise values for individual likelihoods are not available, the value of a
likelihood ratio range may be objective enough to strongly refute one hypothesis as compared to another. Indeed, the drift hypothesis is itself strongly supported by the evidence; for, no alternative
hypothesis that has the slightest amount of comparative plausibility can account for the available evidence nearly so well. (That is, no plausible alternative makes the evidence anywhere near so
likely.) Given the currently available evidence, the only issues left open (for now) involve comparing various alternative versions of the drift hypothesis (involving differences of detail) against
one another.
24. To see the point of the third clause, suppose it were violated. That is, suppose there are possible outcomes for which the likelihood ratio is very near 1 for just one of the two support
functions. Then, even a very long sequence of such outcomes might leave the likelihood ratio for one support function almost equal to 1, while the likelihood ratio for the other support function goes
to an extreme value. If that can happen for support functions in a class that represent likelihoods for various scientists in the community, then the empirical contents of the hypotheses is either
too vague or too much in dispute for meaningful empirical evaluation to occur.
25. If there are a few directionally controversial likelihood ratios, where P[α] says the ratio is somewhat greater than 1 while P[β] assigns a value somewhat less than 1, these may not greatly
affect the trend of P[α] and P[β] towards agreement on the refutation and support of hypotheses, provided that the controversial ratios are not so extreme as to overwhelm the stream of other evidence
on which the likelihood ratios do directionally agree. Even so, researchers will want to get straight on what the hypothesis says or implies about such cases. While that remains in dispute, the
empirical content of the hypothesis remains unsettling vague. | {"url":"http://plato.stanford.edu/entries/logic-inductive/notes.html","timestamp":"2014-04-20T03:10:15Z","content_type":null,"content_length":"96234","record_id":"<urn:uuid:9789a23c-67ba-482c-b639-bfd9a3bea25e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
The Tower of Hanoi is a well-known mathematical problem which yields some very interesting number patterns. This version of the problem involves a significant 'final challenge' which can either be
tackled on its own or after working on a set of related 'building blocks' designed to lead students to helpful insights.
Initially working on the building blocks gives students the opportunity to then work on harder mathematical challenges than they might otherwise attempt.
The problem is structured in a way that makes it ideal for students to work on in small groups.
Possible approach
Start by explaining how the Tower of Hanoi game works, making clear the rules that only one disc can be moved at a time, and that a disc can never be placed on top of a smaller disc. An interactive version of the puzzle could be used to show how the game works.
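For teachers who want to show where the number patterns come from, here is a minimal recursive sketch in Python (the function and peg names are illustrative; they are not part of the NRICH materials). A tower of n discs always takes 2^n - 1 moves:

def hanoi(n, source, spare, target, moves):
    # Move n discs from source to target, never placing a disc on a smaller one.
    if n == 0:
        return
    hanoi(n - 1, source, target, spare, moves)  # park n-1 discs on the spare peg
    moves.append((source, target))              # move the largest remaining disc
    hanoi(n - 1, spare, source, target, moves)  # stack the n-1 discs back on top

moves = []
hanoi(3, 'A', 'B', 'C', moves)
print(len(moves))  # 7, i.e. 2**3 - 1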
Hand out a set of building block cards to groups of three or four students. (The final challenge will need to be removed to be handed out later.) Within groups, there are several ways of structuring the task, depending on how
experienced the students are at working together.
Each student, or pair of students, could be given their own building block to work on. After they have had an opportunity to make progress on their question, encourage them to share their findings
with each other and work together on each other's tasks.
Alternatively, the whole group could work together on all the building blocks, ensuring that the group doesn't move on until everyone understands.
When everyone in the group is satisfied that they have explored in detail the challenges in the building blocks, hand out the final challenge.
The teacher's role is to challenge groups to explain and justify their mathematical thinking, so that all members of the group are in a position to contribute to the solution of the challenge.
It is important to set aside some time at the end for students to share and compare their findings and explanations, whether through discussion or by providing a written record of what they did.
Key questions
What important mathematical insights does my building block give me?
How can these insights help the group tackle the final challenge?
Possible extension
Of course, students could be offered the Final Challenge without seeing any of the building blocks.
Possible support
Encourage groups not to move on until everyone in the group understands. The building blocks could be distributed within groups in a way that plays to the strengths of particular students.
Handouts for teachers are available here (word document, pdf document
), with the problem on one side and the notes on the other. | {"url":"http://nrich.maths.org/6690/note?nomenu=1","timestamp":"2014-04-16T13:39:21Z","content_type":null,"content_length":"6295","record_id":"<urn:uuid:b04200d2-cb0b-4bb3-a00a-e7b104e82dfb>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00036-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Problems with a Growing Population
Albert Bartlett might have been another obscure physics professor had he not put together a now famous lecture entitled "Arithmetic, Population and Energy" in 1969. The lecture, available broadly on
the internet, begins with the line: "The greatest shortcoming of the human race is our inability to understand the exponential function."
The logic is surprisingly simple and irrefutable. Exponential growth, which is simply consistent growth at some percentage rate each year (or other time period), cannot proceed indefinitely within a
finite system, for example, planet Earth. The fact that human populations continue to grow or that the extraction of energy and other natural resources continues to climb does not in any way refute
this statement. It simply means that the absolute limits have not yet been reached.
Bartlett, who died this month at age 90, gave his lecture all over the world 1,742 times or on average once every 8.5 days for 36 years to audiences ranging from junior high students to seasoned
professionals in many fields. His ability to stay on message for so long about something so important should make him the envy of every modern communications professional.
His favorite shortcut is the doubling time, the time it takes to get to twice the original number at a constant rate of growth. The formula is 70 divided by the percentage rate of growth per year (or
other period). Just a 2 percent growth rate doubles the rate of use of a resource or the size of world population in 35 years. Actual world population growth is about 1.2 percent per year today,
which seems benign; but, it implies the next doubling within 58 years to 14 billion. (U.N. forecasts project world population will reach 10 billion by 2070--57 years from now--and continue to grow
through 2100.)
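To check the shortcut against the exact compound-growth formula (a quick illustration of mine, not Bartlett's):

import math

def doubling_time(rate_percent):
    exact = math.log(2) / math.log(1 + rate_percent / 100)  # exact doubling time
    rule_of_70 = 70 / rate_percent                          # Bartlett's mental shortcut
    return exact, rule_of_70

print(doubling_time(1.2))  # about 58.1 years exact, 58.3 by the rule of 70
print(doubling_time(2.0))  # about 35 years either way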
In his lecture Bartlett relates that in his hometown of Boulder, Colorado, city council members once stated publicly their preferences for population growth rates ranging from 1 percent to 5 percent
per year. In the course of 70 years, roughly one lifetime, the 5 percent rate would make Boulder's population (about 100,000) some 32 times larger or about 3.2 million, which would make it the third
largest city in the country behind Los Angeles and in front of Chicago. A city that size could not possibly fit in the valley now home to Boulder, Bartlett explains. Attempting to do so would
inevitably eliminate all open space, something highly prized by Boulder residents.
Has Bartlett made a dent in our habitual ways of thinking about growth? Maybe. There were others back when Bartlett started giving his lecture who asked the same questions in a different form. One
manifestation of that questioning was the groundbreaking study The Limits to Growth which, despite what its detractors have said, made no predictions. Rather the study models resource use over time
given a large range of conditions including an endowment of resources that was twice what anyone imagined they might be at the time. The troubling conclusion of the study was that nearly all scenarios
led to the crash of industrial civilization at some point.
The key observation in that study aligns with Bartlett's, namely that exponential growth in the consumption of finite resources is unsustainable. At some point growth in the rate of extraction will
cease. And, given the dependence of the economy on continuous growth of resource inputs including energy, this leads to instability and finally decline.
Let me help you envision what exponential growth means. If you receive 10 percent interest on $100, after one year, your $100 will turn into $110. In the second year, at the same interest rate, your
money will turn into $121. At the end of year 50, the amount will be $11,739, a considerable sum. At the end of year 100, the amount will be $1,378,061. By year 200 your heirs will have almost $19 billion.
If we dial down the rate to, say, just 2 percent, the corresponding figures are $269 for year 50, $724 for year 100 and $5,248 for year 200. Clearly, rate matters a lot! But, even so, if these
numbers represented the rise in the rate of resource consumption, even at two percent after 50 years, we'd be consuming resources at 2.7 times the original rate. At 100 years it would be 7.2 times,
and at 200 years, 52 times.
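All of these figures are just 100·(1 + r)^t, so anyone can reproduce them (a one-off check of mine, not from the article):

for rate in (0.10, 0.02):
    for years in (50, 100, 200):
        print(rate, years, round(100 * (1 + rate) ** years))
# 10%: 11,739 / 1,378,061 / about 19 billion; 2%: 269 / 724 / 5,248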
Now money is a social invention which can be created by electronic keystrokes these days in any amount. Eons of geologic transformation and concentration are not required. But finite natural
resources by definition have a limit. We cannot say with precision what that limit is, but we know it is there.
The rejoinder to Bartlett and others like him is that technology will overcome any limits, and that we'll use substitutes for resources that run low. It's hard to imagine what might be a good
substitute for uncontaminated, potable water; but, in the cornucopian's mind anything is possible. It's also hard to imagine a modern technical society without metals. But, we'll think of something,
right? However, please don't say that that something is made out of materials derived from oil, natural gas or coal which are also finite.
The problems posed by exponential growth mean we'll have to think of "something" at increasingly short intervals given the ever rising rates of consumption and the broad range of finite materials we
depend on--especially fossil fuels (oil, natural gas, coal) and much of the periodic table of elements including the usual suspects such as iron, copper, aluminum, zinc, silver, platinum, and uranium
and the more exotic ones such as lithium, titanium, the so-called rare earth elements, and helium.
It's not just one substitute we'll have to find. And, we may be faced with having to find many all at once. The idea that technological innovation will always and everywhere stay ahead of an ever
increasing rate of depletion may be true or not true. But we cannot know this ahead of time.
In fact, if it were true, why hasn't technological innovation brought oil prices down to where they were in the 1990s before the run-up of the last decade? There's no commodity more central to the
functioning of our economy; and, there's been huge spending by the oil industry and deployment of revolutionary new techniques. Yet, the price remains stubbornly high. The glut that was promised year
after year has failed to materialize. The problem is not that technological innovation has ceased; it's that it may not be enough.
And so, we are assuming huge risks by taking it on faith that all hurdles to the continuance of our technical civilization as it stands can be overcome in time and forever by technological advances.
We are taking it on faith, essentially, that we will never screw up so badly that our highly-efficient, just-in-time economy will cease to grow and finally decline until it reaches a level that can
be sustained by a much simpler and less technically advanced set of practices, probably for a much smaller population.
It stands to reason that even the RATE of technological advancement must have a limit. Humans are not infinite in their powers of reason. Even with computers, we cannot innovate at infinite speeds.
It is the rate issue that Albert Bartlett spent the last half of his life trying to bring to the fore in the minds of the public and policymakers. While many in the scientific community have now come
to understand his message, the broader public and policymakers still seem largely in the dark. Rates, and particularly exponential growth, are clearly not easy to grasp; otherwise, so many more human
beings would have grasped these concepts.
But we have Albert Bartlett to thank for relentlessly reminding us that we should pay attention to the simple math that refutes our notions of endless growth. He asks in his lecture the following question:
Can you think of any problem on any scale, from microscopic to global, whose long-term solution is in any demonstrable way aided, assisted or advanced by having larger populations at the local level,
the state level, the national level, or globally?
So far, I can't think of any.
By Kurt Cobb
• Good article. Sadly most politicians just don't get it. We live in a closed system. The resources are finite. You cannot continue to increase population thru any means whether it be immigration
or natural births without it being to the detriment of those already here. The US isn't the vast untamed wilderness of the 1800s with seemingly unlimited resources. We have a population which
thru economic realities would stabilize itself were it not for immigration. Almost all population growth in the US is due to immigrants and their children. Should the Senate 744 become law, we
will see a population addition of 100 million by 2050. All should ask themselves if they want their children or grandchildren to have to compete with another 100 million for rapidly declining
• I'm curious: When this nation's population reaches 600 million people in 2100, who will have benefited the most from such growth? American workers? Immigrants? Employers? And will there still be
a reason to celebrate Earth Day?
• We have too many opportunistic "representatives" in Congress who are ignorantly (or in some cases knowingly) destroying future America for immediate gains: Gains in votes and in cheap labor.
Those of us Americans who don't have our heads in our entertainment centers need to stand up for our children and their children. This exponential growth in population will destroy our country as
we know it, and create a beehive lifestyle of natural resource shortages, food and water shortages, energy crises, and disease proliferation for the coming generations. Do not count on technology
to solve these upcoming problems.
• When more than 75% of Americans don't want Obamacare, or amnesty for 33 million illegal aliens, why won't our government listen to us? They act like we work for them now; they don't care
what we think, and that's why everything is so screwed-up now. They have forgotten about the Constitution, and if they don't like a law they simply change it. Just like their insider trading: they
just changed the law, and they all make millions while the rest of Americans are having a harder and harder time making ends meet. Financially, Obamacare is going to do a lot of people in. Between
Obamacare and immigration, no one will be able to afford energy.
• It is true that growth affects prices and consumption. In the USA, more than 80% of our growth is caused by immigration. At current rates, we will have more than 400,000,000 people by 2050. Some
analysts estimate that, were it not for our massive growth, we would not have to rely on foreign oil.
It's time for congress to enact sustainable immigration policies to manage our growth as a nation.
• The most notable thing about cornucopians is that they base their argument upon continued scientific/technical advance, and yet they themselves are scientifically illiterate. So what does a real
scientist say?
Max Born, one of the great physicists of the early 20th century, published "My Life and My Views" in 1968. Born was a physicist's physicist and deeply committed to humanity and civilization. On
page 120, Born writes:
"Science and technology will then follow their tendency to rapid expansion in an exponential fashion, until saturation sets in. But that does not necessarily imply an increase of wealth, still
less of happiness, as long as the number of people increases at the same rate, and with it their need for food and energy. At this point, the technological problems of the atom touch social
problems, such as birth control and the just distribution of goods. There will be hard fighting about these problems..."
• Population growth is global and cannot be realistically prevented from crossing borders. Capital and resources cross borders, and people will find ways to cross, so any attempt to keep one
country free from population pressures is folly and doomed to fail. Focus instead on food security as a way to enable people to thrive where they are, and not migrate in desperation.
Overconsumption is the other half of the resource equation, hiding ecological damages and driving migration. Trade that includes dumping subsidized food abroad worsens hunger, population, and
Who deserves a share of a finite resource? Even if only those who can pay should have some, many in the next generation or two will be able to pay. Their kids and grandkids, too. How much to
leave them? How to get the market to include them? Who pays for the damages caused by climate results of burning fossils? | {"url":"http://oilprice.com/Energy/Energy-General/The-Problems-with-a-Growing-Population.html","timestamp":"2014-04-16T21:59:07Z","content_type":null,"content_length":"46692","record_id":"<urn:uuid:c618fd6e-ff7a-4325-8fe9-2fe34d1a843c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem linking with Fortran NAG library
Hi there,
I'm trying to compile some code that uses the Fortran NAG library. I'm
not sure how to link to the library, however.
When I type
g95 nhmm.f forbac.f makea.f objfun.f sumpi.f variance.f viterbi.f
condsim.f beta.f cmlib.f dldp.f emalg.f
I get the following link errors. I am running Debian Sarge (3.1)
/tmp/ccwUzgR2.o(.text+0x19a): In function `emalg_':
: undefined reference to `e04uef_'
/tmp/ccwUzgR2.o(.text+0x1ae): In function `emalg_':
: undefined reference to `e04uef_'
/tmp/ccwUzgR2.o(.text+0x1c2): In function `emalg_':
: undefined reference to `e04uef_'
/tmp/ccwUzgR2.o(.text+0x1d6): In function `emalg_':
: undefined reference to `e04uef_'
/tmp/ccwUzgR2.o(.text+0x1ea): In function `emalg_':
: undefined reference to `e04uef_'
/tmp/ccwUzgR2.o(.text+0x47c): more undefined references to `e04uef_'
/tmp/ccwUzgR2.o(.text+0xbe0): In function `stepps_':
: undefined reference to `e04ucf_'
Thank you,
On May 6, 6:11 pm, prynh@gmail.com wrote:
> Hi there,
> I'm trying to compile some code that uses the Fortran NAG library. I'm
> not sure how to link to the library, however.
> When I type
> g95 nhmm.f forbac.f makea.f objfun.f sumpi.f variance.f viterbi.f
> condsim.f beta.f cmlib.f dldp.f emalg.f
> I get the following link errors. I am running Debian Sarge (3.1)
To use the NAG library, one must PURCHASE object code for the NAG
library specific to a compiler and operating system. Have you done so?
I don't think the NAG library is distributed for g95 or gfortran. The
routine E04UCF
"is designed to minimize an arbitrary smooth function subject to
constraints (which may
include simple bounds on the variables, linear constraints and smooth
nonlinear constraints) using a
sequential quadratic programming (SQP) method."
There exist free Fortran codes with similar functionality, and you
could try substituting one of them.
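(For reference, if you did obtain a NAG build matching your compiler, linking would normally just mean adding the library's search path and name to the same command line. The path and library name below are placeholders, not known values for any particular installation:
g95 nhmm.f forbac.f makea.f objfun.f sumpi.f variance.f viterbi.f condsim.f beta.f cmlib.f dldp.f emalg.f -L/path/to/nag/lib -lnag
Here -L adds a directory to the linker's search path and -l names the library to link against.)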
> To use the NAG library, one must PURCHASE object code for the NAG
> library specific to a compiler and operating system. Have you done so?
Thank you for this. No I haven't purchased any code - I just tried
compiling some code that I inherited "out of the box".
> I don't think the NAG library is distributed for g95 or gfortran. The
> routine E04UCF
> "is designed to minimize an arbitrary smooth function subject to
> constraints (which may
> include simple bounds on the variables, linear constraints and smooth
> nonlinear constraints) using a
> sequential quadratic programming (SQP) method."
> There exist free Fortran codes with similar functionality, and you
> could try substituting one of them.
Do you know where I would find such codes ? I'm new to Fortran;
basically I'm trying to get some code that I've been given to run...
Thanks again, | {"url":"http://www.megasolutions.net/fortran/Problem-linking-with-Fortran-NAG-library-64098.aspx","timestamp":"2014-04-23T22:05:49Z","content_type":null,"content_length":"22046","record_id":"<urn:uuid:38d5ecd9-c34e-4e2e-b274-e6a06cb0bd38>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00372-ip-10-147-4-33.ec2.internal.warc.gz"} |
Axiom of choice on finite sets
January 2nd 2012, 01:15 AM
Axiom of choice on finite sets
Hello all,
First: Have a wonderful new year ahead!! (Happy)
I have a fundamental question about axiom of choice. Let me quote wiki over a variant of axiom of choice
Given any set X of pairwise disjoint non-empty sets, there exists at least one set C that contains exactly one element in common with each of the sets in X
Is this not a variation or a special case of the axiom schema of specification, which states that
Given any set A, there is a set B such that, given any set x, x is a member of B if and only if x is a member of A and φ holds for x.
where φ is a predicate.
My question is that it appears the axiom of choice is a special case of the axiom schema of specification, where the predicate is a choice function. But the axiom of choice is considered a
basic axiom, and it is said that it cannot be proved in ZF. Can someone clarify this contradiction?
Best Regards,
January 2nd 2012, 02:56 AM
Re: Axiom of choice on finite sets
Thanks, and the same to you!
But the existence of a choice function is what is guaranteed by the axiom of choice (AC). If you can construct a choice function without AC, then AC is indeed unnecessary and you can use the
axiom schema of specification, or restricted comprehension, to construct the set that contains exactly one element in common with each of the sets in X. A choice function can be constructed, in
particular, when X is finite.
This page about AC is useful.
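(To spell out the finite case in a bit more detail; this is the standard argument, with notation of my own choosing. If X = {A_1, ..., A_n} is finite, no choice axiom is needed: each A_i is non-empty, so there exists some a_1 in A_1; fix it, then fix some a_2 in A_2, and so on. After n such existential instantiations, a finite process carried out within the proof itself, the set C = {a_1, ..., a_n} exists by the pairing and union axioms. The axiom of choice only becomes necessary when X may be infinite, where no finite list of instantiations can do the job.)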
January 2nd 2012, 03:36 AM
Re: Axiom of choice on finite sets
Ah! I get it, your answer can also be paraphrased something like this
To assert that a mathematical object "exists," even when you cannot give an example of it, is a little bit like this: Suppose that one day you go to a football game by yourself. There are
thousands of other people in the stadium, but you don't know the names of any of them. (And let's suppose you're shy, so you're not about to ask anyone their name.) Then you know those people
have names, but you cannot give any of those names. (Admittedly, this is only a metaphor, and not a perfect one; don't make too much of it.)
The above quote was from your link. Thanks for the link and the answer. (Rock)
PS: Sorry for the repeated definition of axiom of choice in quotes, that was unintentional | {"url":"http://mathhelpforum.com/discrete-math/194841-axiom-choice-finite-sets-print.html","timestamp":"2014-04-20T06:07:50Z","content_type":null,"content_length":"7680","record_id":"<urn:uuid:4b8e30b3-1f10-47c0-981f-456639afaf92>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00293-ip-10-147-4-33.ec2.internal.warc.gz"} |
A very simple question about addition/subtraction
July 23rd 2012, 01:36 AM #1
Jul 2012
Colsterworth, UK
A very simple question about addition/subtraction
Hi as you will see my maths is rubbish but I am trying to learn and would be most grateful for some help.
This is a basic algebra problem that I am trying to solve: x + 3 + 3k = 6 + k
the answer is given as x = 3 - 2k
I got as far as x = 6 - 3 - 3k + k
so to get x = 3 - 2k,
3-3 must equal 3, this I don't understand, I would have thought 3-3=0?
Re: A very simple question about addition/subtraction
On the left hand side you have 3 - 3 = 0, on the right hand side you have 6 - 3 = 3.
Last edited by Prove It; July 23rd 2012 at 02:17 AM.
Re: A very simple question about addition/subtraction
You seem to be thinking of 3 - 3k + k as (3 - 3) - (k + k), but that is not what is meant at all. "3k" means "3 times k", so you cannot combine that 3 with the other. Instead it is 3 + (-3k + k) = 3 + (-2k).
Re: A very simple question about addition/subtraction
This is how to solve this equation.
x = 6-3-3k+k
x = 3-2k
| {"url":"http://mathhelpforum.com/new-users/201271-very-simple-question-about-addition-subtraction.html","timestamp":"2014-04-17T05:11:46Z","content_type":null,"content_length":"41176","record_id":"<urn:uuid:f466555f-e081-44ab-85c7-c2ada678b59e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Need for Basic Math & Science Skills in College Students | The Classroom | Synonym
For students pursuing degrees in the humanities, math and science classes might seem like tedious distractions, and basic math and science skills can seem wholly unnecessary. College students may
begin school unprepared for math and science classes. A 2011 Microsoft survey found that, even among students pursuing math and science-related degrees, only one in five felt that they were
well-prepared for college math and science. This lack of preparation can limit students' degree courses and make basic college classes more challenging than they should be.
Core Classes
Even students who never want to touch a math or science book again will need basic math and science skills to complete their degree. In addition to the requirements for majors, most colleges and
universities require students to complete core classes, including classes in math and science. Students who lack basic math and science skills will struggle in these classes, and this can lower their
grades and even delay graduation.
Thinking Skills
Math and science teach new ways of thinking. Both place a strong emphasis on logic and clearly demonstrating assumptions and basic principles. Logical thinking is key in almost every field, and
students who master basic mathematical and scientific thinking may do better in other classes. For example, a philosophy student who has mastered the logic of algebra and the importance of the
scientific method can apply that knowledge to clearly and succinctly argue a philosophical point without adding in assumptions, opinions or emotions.
It's impossible to completely disentangle math and science from other classes. Students studying literature will use math to dissect and write poetry. In history and social studies classes, math
skills can help students read graphs and charts. Scientific reasoning can help students think critically, questioning claims made in government, philosophy and sociology classes. In many fields,
including psychology, sociology, philosophy and other social sciences, students must read studies and understand their results -- a skill that requires a basic background in both science and math.
Graduate School Admissions
Students who plan to go to graduate school can improve their chances of admission if they master basic math and science skills. The GRE General Test has a math section, and doing well on this test
can improve a student's chances of acceptance to grad school. Students interested in law school must take the LSAT, a test that focuses heavily on logical reasoning -- a skill students master in math
and science classes.
| {"url":"http://classroom.synonym.com/need-basic-math-science-skills-college-students-1954.html","timestamp":"2014-04-16T07:22:13Z","content_type":null,"content_length":"33796","record_id":"<urn:uuid:75aa60fc-6f52-4aa2-b83c-2b9d4d4b2c1e>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00641-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometric interpretation of singular values
The singular values of a matrix A can be viewed as describing the geometry of AB, where AB is the image of the euclidean ball under the linear transformation A. In particular, AB is an ellipsoid, and
the singular values of A describe the lengths of its major axes.
More generally, what do the singular values of a matrix say about the geometry of the image of other objects? How about the unit L1 ball? This will be some polytope: is there some natural way to
describe this shape in terms of singular values, or other properties of matrix A?
linear-algebra convex-geometry
From the SVD of a matrix, we may get a better understanding of its singular values. This paper may be of some help: www1.math.american.edu/People/kalman/pdffiles/svd.pdf – Sunni Mar 7 '10
1 Answer
It's all about how the object of interest looks after you choose the orthogonal basis corresponding to the singular value decomposition of A. Then it is only a matter of stretching, just as with
the euclidean ball. In this special case it's so simple, because the ball looks the same in all orthogonal bases. But since an orthogonal transformation is only about rotation and reflection,
the singular values then again describe the stretching of your object after the appropriate transformation.
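As a quick numerical illustration of this picture (the snippet is mine, not the answerer's): the image of the unit circle under A is an ellipse whose semi-axis lengths are exactly the singular values, pointing along the left singular vectors.

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
U, s, Vt = np.linalg.svd(A)

theta = np.linspace(0, 2 * np.pi, 1000)
circle = np.vstack([np.cos(theta), np.sin(theta)])  # points on the unit circle
image = A @ circle                                  # their images under A

print(s)                              # semi-axis lengths of the image ellipse
print(np.abs(U[:, 0] @ image).max())  # extent along the major axis: matches s[0]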
| {"url":"http://mathoverflow.net/questions/17384/geometric-interpretation-of-singular-values?sort=votes","timestamp":"2014-04-19T17:23:52Z","content_type":null,"content_length":"50579","record_id":"<urn:uuid:6d322f89-876a-4e99-883c-e5ec138dd31c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00443-ip-10-147-4-33.ec2.internal.warc.gz"} |
Once more, Never again
Empty. Empty. Empty. Empty.
Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty.
Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty.
Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty.
Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty.
Empty. Empty. Empty. Empty. Empty. Empty. Empty. Empty.
Pursuing this at such a time? I have a final tomorrow. Should I have waited one more day?
Or should I worry about my prospective future beyond my academics? | {"url":"http://crocoducklingthing.tumblr.com/","timestamp":"2014-04-19T11:56:10Z","content_type":null,"content_length":"34625","record_id":"<urn:uuid:be239217-99e6-4899-8180-5edfcff3ffdd>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00126-ip-10-147-4-33.ec2.internal.warc.gz"} |
Abstracts for Satish Rao
The EECS Research Summary for 2003
Bounding Congestion in Networks
Chris Harrelson and Kirsten Hildrum
(Professor Satish Rao)
A recent result by Harald Räcke [1] gives an oblivious algorithm for routing a flow in a general graph G=(V,E) with polylogarithmic (in V) congestion over the optimal set of routes. He does this by
showing there exists an embedding of any graph into a tree, T(G), and that routing any flow on T(G) is no worse than routing the same flow on G. In addition, he also embeds T(G) on to G and shows
that it is possible to route on G with only polylogarithmically more congestion than on T(G). Räcke, however, does not give a polynomial-time algorithm for constructing the embedding.
We have several goals related to this. First, we seek to give a polynomial-time construction of an embedding that can be used this way, and hope that such a construction will lead to a simpler
proof of his result.
Second, we wish to find, in general, the cost of an oblivious algorithm. While Räcke's result shows that the gap between the best oblivious algorithm and the best non-oblivious algorithm is polylog
(n) for the general case, this may not be tight, and the gap may even be constant when all the messages have a single source or single destination. The gap may be larger if the graph is allowed to
have directed edges.
[1] H. Räcke, "Minimizing Congestion in General Networks," Proc. Symp. Foundations of Computer Science, Vancouver, Canada, November 2002.
Finding the Nearest Neighbor Using Queries to a Distance Oracle
Kirsten Hildrum
(Professor Satish Rao)
Given a set of points, S, in a metric space and query point q, the goal is to find the point in S closest to q. Ideally, the data structure to do this only relies on an oracle that gives the distance
between any two points, and not on any other detail of the metric space.
In some metric spaces, this seems difficult. Consider, for example, a metric space and a query point such that all points in S are at a distance one from each other, and all but one are at distance q
from S. Finding the one point closest to q will probably require querying the distance from q to every point in S. In other metric spaces, it is possible to do this with a logarithmic number of
queries (see [1] or [2]).
Following up on [1] and [2], we have a result showing that this can be done in constant space and logarithmic time for unchanging sets S and hope to argue that a similar data structure with expected
constant space works for dynamic sets S.
A more general goal is to characterize the metric spaces for which finding the nearest point requires a sublinear number of queries and develop algorithms for those metric spaces.
[1] K. Hildrum, J. D. Kubiatowicz, S. Rao, and B. Y. Zhao, "Distributed Object Location in a Dynamic Network," Proc. ACM Symp. Parallel Algorithms and Architectures, Winnepeg, Canada, August 2002.
[2] D. Karger and M. Ruhl, "Finding Nearest Neighbors in Growth-Restricted Metrics," Proc. ACM Symp. Theory of Computing, Montreal, Canada, May 2002.
More information: http://www.cs.berkeley.edu/~hildrum/neighbors.html
Root simplification
August 5th 2012, 10:03 AM
Root simplification
I have a really simple and idiotic question. I have no clue as to how to simplify root expressions. For example (sqrt(x) = the square root of x): (sqrt(n) + 1)/sqrt(n + 1)
how can I deal with this expression and represent it as a multiple of n for example?
August 5th 2012, 10:15 AM
Re: Root simplification
That cannot really be done. BUT
$\frac{\sqrt{n}+1}{\sqrt{n+1}}=\frac{(\sqrt{n}+1)\sqrt{n+1}}{n+1}$
Not really what you seem to want.? | {"url":"http://mathhelpforum.com/algebra/201768-root-simplification-print.html","timestamp":"2014-04-17T02:53:46Z","content_type":null,"content_length":"4559","record_id":"<urn:uuid:b3f23ab4-024a-4fd6-ae74-0c12aa45f379>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00126-ip-10-147-4-33.ec2.internal.warc.gz"} |
This Quantum World
As you will remember, every particle is either a boson or a fermion. Let us use the symbol |a> for the state of a boson, and let us keep firmly in mind that a “state” in quantum mechanics is a
probability algorithm; it serves to assign probabilities to possible outcomes of measurements — in this case any measurement to which the boson may be subjected.
As said, quantum states (and thus the possible outcomes of measurements that may be made) are determined by the actual outcomes of measurements that have been made. The symbol |a>
accordingly plays a double role: it represents (i) the outcome of a measurement and (ii) the algorithm to be used for assigning probabilities to the possible outcomes of whichever
measurement is made next.
If we want to know the probability with which a boson initially described by |a> is later found in the state |b>, we need to know the amplitude associated with this possibility. For this we
shall use the symbol <b|a>.
Now suppose that a boson has been found “in” the state |a>: it is correctly and completely described in terms of the probabilities that |a> serves to assign. Also suppose that another boson
has been found “in” the state |b>: it is correctly and completely described in terms of the probabilities that |b> serves to assign. (We shall further assume that |a> and |b> are possible
outcomes of the same measurement so that, technically speaking, they are orthogonal.) What symbol shall we use for the state of the composite system made up of the two bosons?
If we use something like |a,b>, we introduce into our notation a distinction that does not correspond to anything in the actual world. In addition to the physically warranted
distinction between the boson described by |a> and the boson described by |b>, there is then the physically unwarranted difference between the “left” boson and the “right” boson. To
eliminate this physically unwarranted difference, we use the symmetric symbol
(|a,b> + |b,a>)/√2.
The division by √2 ensures that this two-boson state is normalized: the probabilities it assigns to the possible outcomes of any measurement to which the two-boson system may be
subjected add up to 1. Observe that this expression remains unchanged if we exchange |a> and |b>.
If, on the other hand, the distinction between the “left” boson and the “right” boson corresponds to something in the actual world — if, for instance, the “left” and the “right” boson are of
different types so that “left” and “right” represent the respective types to which they belong — then the symbol |a,b> may be used, and we can write
<c,d|a,b> = <c|a> <d|b>
for the amplitude associated with the transition from |a,b> to |c,d>. In this case, however, the bosons are not completely described by the individual states |a> and |b>, inasmuch as these
contain no information about the particle species to which each boson belongs.
In case the bosons are completely described by |a> and |b>, the corresponding transition amplitude is
(1/√2)(<c,d| + <d,c|) × (1/√2)(|a,b> + |b,a>).
The manipulation of this expression is straightforward:
(<c,d|a,b> + <c,d|b,a> + <d,c|a,b> + <d,c|b,a>)/2
= (<c|a> <d|b> + <c|b> <d|a> + <d|a> <c|b> + <d|b> <c|a>)/2
= <c|a> <d|b> + <d|a> <c|b>.
The transition probability is thus given by
p = |<c|a> <d|b> + <d|a> <c|b>|^2.
This should remind you of the probability p = |A(N→E,S→W) + A(N→W,S→E)|^2 we previously obtained. Just put <E|N><W|S> and <W|N><E|S> in place of A(N→E,S→W) and A(N→W,S→E), respectively.
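A quick numerical check of the two formulas may help; the snippet below is mine, not part of the text, and the four-dimensional states are arbitrary choices. For generic normalized states the symmetrized rule and the "which is which" rule give different numbers, and the difference is exactly the interference cross-term:

import numpy as np

rng = np.random.default_rng(0)

def ket(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

a, b, c, d = (ket(4) for _ in range(4))
ca, db = np.vdot(c, a), np.vdot(d, b)  # <c|a>, <d|b>
da, cb = np.vdot(d, a), np.vdot(c, b)  # <d|a>, <c|b>

p_bosons = abs(ca * db + da * cb) ** 2             # indistinguishable bosons
p_labeled = abs(ca * db) ** 2 + abs(da * cb) ** 2  # if "which is which" had an answer

print(p_bosons, p_labeled)  # generically unequal: the cross-term matters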
Suppose, then, that an initial measurement indicated the presence of two indistinguishable bosons described, respectively, by |a> and |b>, and that the next (relevant) thing that can be
deduced from an actual event or state of affairs is the presence of two bosons described, respectively, by |c> and |d>. Is the boson that was described by |a> the same as the boson that is now
described by |c> (in which case the boson that was described by |b> is the same as the boson that is now described by |d>)? Or is the boson initially described by |a> the same as the boson now
described by |d> (in which case the boson initially described by |b> is the same as the boson now described by |c>)? Which boson in the initial state of the two-boson system is identical with
which boson in the final state?
Do these questions have answers?
Once again they don’t, but this time the reason is not simply that, if they had, the transition probability would be |<c|a><d|b>|^2 + |<d|a><c|b>|^2 rather than |<c|a><d|b> + <d|a><c|b>|^2.
Instead, we have put our finger on the reason both why these questions have no answers and why the transition probabilities come out the way they do. | {"url":"http://thisquantumworld.com/wp/the-mystique-of-quantum-mechanics/identical-bosons/","timestamp":"2014-04-19T22:10:51Z","content_type":null,"content_length":"26386","record_id":"<urn:uuid:2db0d495-4190-4c19-bcc7-4afcdf6e6993>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
What’s the difference between DFS and BFS?
DFS (Depth First Search) and BFS (Breadth First Search) are search algorithms used for graphs and trees. When you have an ordered tree or graph, like a BST, it’s quite easy to search the data
structure to find the node that you want. But, when given an unordered tree or graph, the BFS and DFS search algorithms can come in handy to find what you’re looking for. The decision to choose one
over the other should be based on the type of data that one is dealing with.
In a breadth first search, you start at the root node, and then scan each node in the first level starting from the leftmost node, moving towards the right. Then you continue scanning the second
level (starting from the left) and the third level, and so on until you’ve scanned all the nodes, or until you find the actual node that you were searching for. In a BFS, when traversing one level,
we need some way of knowing which nodes to traverse once we get to the next level. The way this is done is by storing the pointers to a level’s child nodes while searching that level. The pointers
are stored in a FIFO (First-In-First-Out) queue. This, in turn, means that BFS uses a large amount of memory, because we have to store the pointers.
An example of BFS
Here’s an example of what a BFS would look like. The numbers represent the order in which the nodes are accessed in a BFS:
In a depth first search, you start at the root, and follow one of the branches of the tree as far as possible until either the node you are looking for is found or you hit a leaf node (a node with
no children). If you hit a leaf node, then you continue the search at the nearest ancestor with unexplored children.
An example of DFS
Here’s an example of what a DFS would look like. The numbers represent the order in which the nodes are accessed in a DFS:
Differences between DFS and BFS
Comparing BFS and DFS, the big advantage of DFS is that it has much lower memory requirements than BFS, because it’s not necessary to store all of the child pointers at each level. Depending on the
data and what you are looking for, either DFS or BFS could be advantageous.
For example, given a family tree if one were looking for someone on the tree who’s still alive, then it would be safe to assume that person would be on the bottom of the tree. This means that a BFS
would take a very long time to reach that last level. A DFS, however, would find the goal faster. But, if one were looking for a family member who died a very long time ago, then that person would be
closer to the top of the tree. Then, a BFS would usually be faster than a DFS. So, the advantages of either vary depending on the data and what you’re looking for.
Nice tutorial. Thanks buddy.
Thank you
Nice tutorial with good examples. Thanks for putting all the topics together | {"url":"http://www.programmerinterview.com/index.php/data-structures/dfs-vs-bfs","timestamp":"2014-04-20T15:51:56Z","content_type":null,"content_length":"43583","record_id":"<urn:uuid:e017cd9e-45bf-42a1-93a2-bf96d225f828>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to reflect a graph through the x-axis, y-axis or Origin?
[30 Jun 2011]
This mail came in from reader Stuart recently:
Can you explain the principles of a graph involving y = −f(x) being a reflection of the graph y = f(x) in the x-axis and the graph of y = f(−x) a reflection of the graph y = f(x) in the y-axis?
My reply
Hello Stuart
Let’s see what this means via an example.
Let f(x) = 3x + 2
If you are not sure what it looks like, you can graph it using this graphing facility.
You’ll see it is a straight line, slope 3 (which is positive, i.e. going uphill as we go left to right) and y-intercept 2.
Now let’s consider −f(x).
This gives us
−f(x) = −3x − 2
Our new line has negative slope (it goes down as you scan from left to right) and goes through −2 on the y-axis.
When you graph the 2 lines on the same axes, it looks like this:
Note that if you reflect the blue graph (y = 3x + 2) in the x-axis, you get the green graph (y = −3x − 2) (as shown by the red arrows).
What we’ve done is to take every y-value and turn them upside down (this is the effect of the minus out the front).
Now for f(−x)
Similarly, let’s do f(−x).
Since f(x) = 3x + 2, then
f(−x) = −3x + 2 (replace every "x" with a "−x").
Now, graphing those on the same axes, we have:
Note that the effect of the "minus" in f(−x) is to reflect the blue original line (y = 3x + 2) in the y-axis, and we get the green line, which is (y = −3x + 2). The green line also goes through 2 on
the y-axis.
Further Example
Here’s an example using a cubic graph.
Blue graph: f(x) = x^3 − 3x^2 + x − 2
Reflection in x-axis (green): −f(x) = −x^3 + 3x^2 − x + 2
Now to reflect in the y-axis.
Blue graph: f(x) = x^3 − 3x^2 + x − 2
Reflection in y-axis (green): f(−x) = −x^3 − 3x^2 − x − 2
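If you'd like to experiment, here is a short matplotlib sketch (my own, not part of the original article) that draws a curve together with both reflections:

import numpy as np
import matplotlib.pyplot as plt

f = lambda x: x**3 - 3*x**2 + x - 2
x = np.linspace(-3, 4, 400)

plt.plot(x, f(x), 'b', label='f(x)')
plt.plot(x, -f(x), 'g', label='-f(x): reflection in x-axis')
plt.plot(x, f(-x), 'r', label='f(-x): reflection in y-axis')
plt.axhline(0, color='k', linewidth=0.5)  # draw the axes for reference
plt.axvline(0, color='k', linewidth=0.5)
plt.legend()
plt.show()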
Even and Odd Functions
We really should mention even and odd functions before leaving this topic.
For each of my examples above, the reflections in either the x- or y-axis produced a graph that was different. But sometimes, the reflection is the same as the original graph. We say the reflection
"maps on to" the original.
Even Functions
An even function has the property f(−x) = f(x). That is, if we reflect an even function in the y-axis, it will look exactly like the original.
An example of an even function is f(x) = x^4 − 29x^2 + 100
The above even function is equivalent to:
f(x) = (x + 5)(x + 2)(x − 2)(x − 5)
Note if we reflect the graph in the y-axis, we get the same graph (or we could say it “maps onto” itself).
Odd Functions
An odd function has the property f(−x) = −f(x).
This time, if we reflect our function in both the x-axis and y-axis, and if it looks exactly like the original, then we have an odd function.
This kind of symmetry is called origin symmetry. An odd function either passes through the origin (0, 0) or is reflected through the origin.
An example of an odd function is f(x) = x^3 − 9x
The above odd function is equivalent to:
f(x) = x(x + 3)(x − 3)
Note if we reflect the graph in the x-axis, then the y-axis, we get the same graph.
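A quick numerical way to test the symmetry definitions (again just a sketch of mine):

import numpy as np

x = np.linspace(-5, 5, 101)
even = lambda t: t**4 - 29*t**2 + 100
odd = lambda t: t**3 - 9*t

print(np.allclose(even(-x), even(x)))  # True: f(-x) = f(x)
print(np.allclose(odd(-x), -odd(x)))   # True: f(-x) = -f(x)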
More examples of Even and Odd functions
There some more examples on this page: Even and Odd Functions
Knowing about even and odd functions is very helpful when studying Fourier Series.
I hope that all makes sense, Stuart.
John Chase says:
1 Jul 2011 at 12:23 am [Comment permalink]
Small correction: an odd function passes through the origin if and only if zero is in the domain of the function!
In general though, this is a great discussion to bring up. Students and teachers alike have problems navigating this subject and the intuitions aren’t usually taught. There’s some simple reasoning
involved, but students just want to follow a rule more often than not. Thanks for the great post!
Murray says:
2 Jul 2011 at 6:29 am [Comment permalink]
@John: Thanks for the correction! I have amended the post.
mahesh rander says:
28 Jul 2011 at 12:17 am [Comment permalink]
A really good presentation and great help to students.
Sal Sargent says:
22 Jan 2014 at 9:44 am [Comment permalink]
Like can u reflect from quadrant 1 into quadrant 3?!?!
Murray says:
22 Jan 2014 at 11:19 am [Comment permalink]
@Sal: The last example in the article does just that – the portion that was in Quadrant 1 is now in Quadrant 3. | {"url":"http://www.intmath.com/blog/how-to-reflect-a-graph-through-the-x-axis-y-axis-or-origin/6255","timestamp":"2014-04-17T21:34:23Z","content_type":null,"content_length":"31184","record_id":"<urn:uuid:60d8dbb0-d06e-4199-9bba-2d10d1001a4a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00144-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lie’s Theorem
The lemma leading to Engel’s theorem boils down to the assertion that there is some common eigenvector for all the endomorphisms in a nilpotent linear Lie algebra $L\subseteq\mathfrak{gl}(V)$ on a
finite-dimensional nonzero vector space $V$. Lie’s theorem says that the same is true of solvable linear Lie algebras. Of course, in the nilpotent case the only possible eigenvalue was zero, so we
may find things a little more complicated now. We will, however, have to assume that $\mathbb{F}$ is algebraically closed and that no multiple of the unit in $\mathbb{F}$ is zero.
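Stated precisely, in the terms the post is already using: if $L\subseteq\mathfrak{gl}(V)$ is a solvable linear Lie algebra on a finite-dimensional nonzero vector space $V$ over such a field, then $V$ contains a nonzero vector $v$ with $l(v)=\lambda(l)v$ for every $l\in L$, for some linear functional $\lambda\in L^*$.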
We will proceed by induction on the dimension of $L$ using the same four basic steps as in the lemma: find an ideal $K\subseteq L$ of codimension one, so we can write $L=K+\mathbb{F}z$ for some $z\in
L\setminus K$; find common eigenvectors for $K$; find a subspace of such common eigenvectors stabilized by $L$; find in that space an eigenvector for $z$.
First, solvability says that $L$ properly includes $[L,L]$, or else the derived series wouldn’t be able to even start heading towards $0$. The quotient $L/[L,L]$ must be abelian, with all brackets
zero, so we can pick any subspace of this quotient with codimension one and it will be an ideal. The preimage of this subspace under the quotient projection will then be an ideal $K\subseteq L$ of
codimension one.
Now, $K$ is a subalgebra of $L$, so we know it’s also solvable, so induction tells us that there’s a common eigenvector $v\in V$ for the action of $K$. If $K$ is zero, then $L$ must be
one-dimensional abelian, in which case the proof is obvious. Otherwise there is some linear functional $\lambda\in K^*$ defined by
$\displaystyle k(v)=\lambda(k)v$
Of course, $v$ is not the only such eigenvector; we define the (nonzero) subspace $W$ by
$\displaystyle W=\{w\in V\vert\forall k\in K, k(w)=\lambda(k)w\}$
Next we must show that $L$ sends $W$ back into itself. To see this, pick $l\in L$ and $k\in K$ and check that

$\displaystyle k(l(w))=l(k(w))-[l,k](w)=\lambda(k)l(w)-\lambda([l,k])w$

But if $l(w)\in W$, then we'd have $k(l(w))=\lambda(k)l(w)$; we need to verify that $\lambda([l,k])=0$. In the nilpotent case — Engel's theorem — the functional $\lambda$ was constantly zero, so this
was easy, but it’s a bit harder here.
Fixing $w\in W$ and $l\in L$ we pick $n$ to be the first index where the collection $\{l^i(w)\}_{i=0}^n$ is linearly dependent — the first one where we can express $l^n(w)$ as the linear
combination of all the previous $l^i(w)$. If we write $W_i$ for the subspace spanned by the first $i$ of these vectors, then the dimension of $W_i$ grows one-by-one until we get to $\dim(W_n)=n$, and
$W_{n+i}=W_n$ from then on.
I say that each of the $W_i$ are invariant under each $k\in K$. Indeed, we can prove the congruence
$\displaystyle k(l^i(w))\equiv\lambda(k)l^i(w)\quad\mod W_i$
that is, $k$ acts on $l^i(w)$ by multiplication by $\lambda(k)$, plus some "lower-order terms". For $i=0$ this is the definition of $\lambda$; in general we have

$\displaystyle k(l^i(w))=l(k(l^{i-1}(w)))-[l,k](l^{i-1}(w))=\lambda(k)l^i(w)+l(w')-\lambda([l,k])l^{i-1}(w)-w''$

for some $w',w''\in W_{i-1}$.
And so we conclude that, using the obvious basis of $W_n$ the action of $k$ on this subspace is in the form of an upper-triangular matrix with $\lambda(k)$ down the diagonal. The trace of this matrix
is $n\lambda(k)$. And in particular, the trace of the action of $[l,k]$ on $W_n$ is $n\lambda([l,k])$. But $l$ and $k$ both act as endomorphisms of $W_n$ — the one by design and the other by the
above proof — and the trace of any commutator is zero! Since $n$ must have an inverse we conclude that $\lambda([l,k])=0$.
Okay so that checks out that the action of $L$ sends $W$ back into itself. We finish up by picking some eigenvector $v_0\in W$ of $z$, which we know must exist because we’re working over an
algebraically closed field. Incidentally, we can then extend $\lambda$ to all of $L$ by using $z(v_0)=\lambda(z)v_0$.
1 Comment »
1. [...] like to have matrix-oriented versions of Engel’s theorem and Lie’s theorem, and to do that we’ll need flags. I’ve actually referred to flags long, long ago, but [...]
Pingback by Flags « The Unapologetic Mathematician | August 25, 2012 | Reply
| {"url":"https://unapologetic.wordpress.com/2012/08/25/lies-theorem/?like=1&source=post_flair&_wpnonce=9916d21231","timestamp":"2014-04-21T05:05:13Z","content_type":null,"content_length":"84911","record_id":"<urn:uuid:11072f9f-ffe0-4c8c-ada6-499f80ec4b5e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
Adjust label positioning in Axes3D of matplotlib
I am having trouble with axis labels overlapping tick labels in matplotlib. I've tried to reposition the labels "manually" by applying transforms or by calling set_y(), but to no avail.
import matplotlib
import matplotlib.pyplot as pyplot
import mpl_toolkits.mplot3d
figure = pyplot.figure()
figure.subplots_adjust(bottom=0.25, top=0.75)
axes = figure.gca(projection='3d')
xLabel = axes.set_xlabel('XXX xxxxxx xxxx x xx x')
yLabel = axes.set_ylabel('YY (y) yyyyyy')
zLabel = axes.set_zlabel('Z zzzz zzz (z)')
plot = axes.plot([1,2,3],[1,2,3])
Note how the x and y labels clash with the ticks. Can I solve this elegantly ?
3d matplotlib
3 Answers
I share your frustration. I worked on it for a good half hour and got nowhere. The docs say set_xlabel takes an arg labelpad but I get an error (AttributeError: Unknown property
labelpad)! Setting it after the fact doesn't do anything, on xaxis or w_xaxis.
Here's a crude workaround:
import matplotlib
import matplotlib.pyplot as pyplot
import mpl_toolkits.mplot3d
figure = pyplot.figure(figsize=(8,4), facecolor='w')
ax = figure.gca(projection='3d')
xLabel = ax.set_xlabel('\nXXX xxxxxx xxxx x xx x', linespacing=3.2)
yLabel = ax.set_ylabel('\nYY (y) yyyyyy', linespacing=3.1)
zLabel = ax.set_zlabel('\nZ zzzz zzz (z)', linespacing=3.4)
plot = ax.plot([1,2,3],[1,2,3])
ax.dist = 10
I swear, I tried tens of workarounds, but it didn't occur to me to simply add a newline at the start of the labels! Clever! Thank you! – AbsentmindedProfessor Apr 5 '11
As a design practice, transformed text is not very legible. I would suggest you use labels for your axes, maybe color encoded. This is how you do it in matplotlib:
import matplotlib
import matplotlib.pyplot as pyplot
import mpl_toolkits.mplot3d
figure = pyplot.figure()
figure.subplots_adjust(bottom=0.25, top=0.75)
axes = figure.gca(projection='3d')
xLabel = axes.set_xlabel('X', fontsize=14, fontweight='bold', color='b')
yLabel = axes.set_ylabel('Y',fontsize=14, fontweight='bold', color='r')
zLabel = axes.set_zlabel('Z',fontsize=14, fontweight='bold', color='g')
x = pyplot.Rectangle((0, 0), 0.1, 0.1, fc='b')
y = pyplot.Rectangle((0, 0), 0.1, 0.1, fc='r')
z = pyplot.Rectangle((0, 0), 0.1, 0.1, fc='g')
# proxy artists for a color-coded legend; the legend call below is added to
# complete the snippet (presumably the original intent)
axes.legend((x, y, z), ('X', 'Y', 'Z'))
plot = axes.plot([1,2,3],[1,2,3])
Color encoding is not a bad idea, but this is for a scholarly article, which must be legible in B&W print. Thanks for sharing! – AbsentmindedProfessor Apr 5 '11 at 0:47
I really need to follow StackOverflow more often. I am the current maintainer of mplot3d. The reason why the various tricks that typically work in regular 2d plots don't work for 3d plots
is because mplot3d was originally written up with hard-coded defaults. There were also bugs in how mplot3d calculated the angle to render the labels.
v1.1.0 contains several fixes to improve the state of things. I fixed the miscalculation of axes label angles, and I made some adjustments to the spacing. For the next release, I would like
to have 3d axes to take up more than the default axes spacing, since the default was designed to take into account that tick labels and axes labels would be outside the axes, which is not
the case for mplot3d. Because the spacings are determined by relative proportions in mplot3d, having a smaller space to work within forces the labels closer together.
As for other possible avenues for work-arounds, please see the note here. A fair warning: this private dictionary is not intended to be a permanent solution, but rather a necessary evil
until the refactor of mplot3d is complete.
Also, v1.1.0 contains many updates to the api of mplot3d. Please check out the revised documentation here.
Is there an example of how one would access and change self._axinfo? Especially the padding? – mab Aug 7 '13 at 14:58
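A closing note for readers on a more recent matplotlib: the labelpad route attempted in the accepted answer does work on 3D axes in later releases. A minimal sketch, assuming a post-1.1.0 matplotlib; the padding values are arbitrary and may need tuning:

import matplotlib.pyplot as pyplot
import mpl_toolkits.mplot3d  # registers the '3d' projection

figure = pyplot.figure()
ax = figure.add_subplot(111, projection='3d')
# labelpad is measured in points; pick values large enough to clear the ticks
ax.set_xlabel('XXX xxxxxx xxxx x xx x', labelpad=20)
ax.set_ylabel('YY (y) yyyyyy', labelpad=20)
ax.set_zlabel('Z zzzz zzz (z)', labelpad=20)
pyplot.show()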
Braintree Algebra Tutor
Find a Braintree Algebra Tutor
...History is one of my passions. I studied history in college and graduated from Wash. U., St.
21 Subjects: including algebra 1, algebra 2, English, writing
...Over this time period, I have also worked as a private tutor in Newton. I truly believe that math does not need to be difficult or overwhelming. It isn't just for the handful of the "gifted" to
enjoy and succeed at.
6 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I have done cutting-edge research on location in Tahiti and Catalina Island, and over the past six years have taught 7th grade Unified Science, 10th and 12th grade Chemistry, Environmental
Science, as well as 8th grade Unified Science and 11th grade Chemistry honors. My SAT scores were exemplary...
14 Subjects: including algebra 1, chemistry, Spanish, English
...I've worked primarily with middle school, high school and college age students but I have also tutored adults interested in brushing up on their French to prepare for a trip. I look forward to
getting in touch with you.After studying French in college, I spent three years in three different Fren...
8 Subjects: including algebra 1, Spanish, French, geometry
...Although I am not a fully certified teacher in Massachusetts, I have passed the MTEL Math and literacy exams for teaching 9-12th grade math subject content. Whether a student is trying to avoid
dropping down a level, or if a student aspires to move up a level next year, students have been able t...
13 Subjects: including algebra 2, algebra 1, calculus, geometry | {"url":"http://www.purplemath.com/Braintree_Algebra_tutors.php","timestamp":"2014-04-21T02:03:36Z","content_type":null,"content_length":"23613","record_id":"<urn:uuid:de0b9c68-d9e5-4580-8d5c-64fe53bacbff>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
find a recurrence relation...
October 27th 2008, 03:12 AM #1
Find a recurrence relation for the number of bit sequences of length $n$ with an even number of 0s.
Thank you so much
here is the solution:
a(n) = 2^n - 2^n
a(n) = a(n-1) - a(n-2)
is it true?
October 27th 2008, 07:01 AM #2
Of course it is not true. Did you just make up that answer?
Do you realize that zero is an even number?
So $a_1 =1$; the bit-string “1” contains an even number of zeros.
And $a_2 =2$; the bit-strings “11” & “00” contain an even number of zeros.
Now suppose we think about bit-strings of length 9.
The 9-bit-strings we want to count contain 0, 2, 4, 6, or 8 zeros.
We have nine places to put the zeros because the other places will contain ones.
That is $a_9 = \sum\limits_{k = 0}^4 {9 \choose {2k}} = 256$.
In general $a_n = \sum\limits_{k = 0}^{\left\lfloor {\frac{n}{2}} \right\rfloor } {n \choose {2k}} = 2^{n - 1}$
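Note that the closed form immediately yields a recurrence: since $a_n = 2^{n-1}$, we have $a_n = 2a_{n-1}$ with $a_1 = 1$. A quick brute-force check in Python, just to verify the formula (not needed for the proof):

from itertools import product

def a(n):
    # count bit strings of length n containing an even number of 0s
    return sum(1 for bits in product('01', repeat=n)
               if bits.count('0') % 2 == 0)

for n in range(1, 11):
    assert a(n) == 2 ** (n - 1)      # the closed form above
    if n > 1:
        assert a(n) == 2 * a(n - 1)  # the recurrence a_n = 2*a_(n-1)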
Mplus Discussion >> Reporting negative binomial LGM results
Cindy Schaeffer posted on Wednesday, December 12, 2012 - 9:48 pm
Sorry if this is a foolish question, I'm still fairly new to analyzing count data. I know that the intercept and slope values obtained in negative binomial LGMs must be exponentiated in order to be
interpretable as means. However, I'm less clear about whether or not ALL of the other parameters in the model also need to be exponentiated when reporting results in manuscript tables (when one
chooses to report exponentiated I and S values). I assume that I need to also exponentiate (please confirm):
1. the SEs for the growth factors (obviously, so that they are reported on the same scale)
2. Residual variance estimates for the intercept and slope and their SEs (Correct?)
3. estimates and SEs for the regressions of intercept and slope on covariates (Correct? This is the one that's least clear to me)
Bengt O. Muthen posted on Thursday, December 13, 2012 - 8:37 am
I am not a fan of exponentiating for this model. True, we model the log-mean and therefore the mean is obtained by exponentiating. There is, however, an advantage to staying in the log-mean metric
because those coefficients are much closer to being normally distributed so that the symmetric CIs we usually compute as +-1.96*SE are relevant. When you exponentiate, you get a non-normally
distributed estimate and you have to adjust to get the right CI. I would just look for sign and significance in the output that you get and report that.
To get a more tangible interpretation of the results, I don't think a count mean says very much anyway. I don't have a good feel for how a negbin mean influences the counts. I would instead want to know the estimated probability distribution for the count outcomes 0, 1, 2, ..., say at the first time point and the last, in order to gauge how much growth occurs.
But maybe there are books that show ways to report count growth models (none come to mind immediately) that would contradict me.
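For anyone who does end up reporting exponentiated coefficients, the standard fix for the CI issue raised above is to form the interval on the log scale and exponentiate its endpoints, rather than exponentiating the SE itself. A minimal sketch in Python (illustrative only; the estimate and SE below are hypothetical, not Mplus output):

import math

def exp_with_ci(b, se, z=1.96):
    # b and se live on the log scale, where the estimate is close to normal;
    # the CI is built there first and only then exponentiated
    return math.exp(b), (math.exp(b - z * se), math.exp(b + z * se))

est, (lower, upper) = exp_with_ci(0.35, 0.12)  # hypothetical slope and SE
print(est, lower, upper)  # the interval is correctly asymmetric around est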
Cindy Schaeffer posted on Thursday, December 13, 2012 - 10:00 am
Thank you Bengt, very helpful. I see your point about not exponentiating, but unfortunately in applied journals editors and readers tend to want to see "real" numbers: how effects translate into reductions or increases in the number of occurrences. I will consider your viewpoint (and perhaps cite "personal communication") as I put my results tables together.
Would love to read other viewpoints on this issue.
Bengt O. Muthen posted on Thursday, December 13, 2012 - 10:51 am
A covariate effect on a growth factor can also be translated into an effect on the probabilities of the outcome categories. What I hear is that it would be good if Mplus could add a "calculator"
similar to our new LTA calculator to compute covariate effects for count outcomes.
Yes, it would be good to hear about others' experiences and publications with count outcomes.
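Until such a calculator exists, the translation can be sketched by hand. Assuming the usual NB2 parameterization (mean mu and dispersion alpha, so the variance is mu + alpha*mu^2), the implied probabilities for the counts 0, 1, 2, ... follow from scipy's negative binomial; the means and dispersion below are hypothetical:

from scipy import stats

def count_probs(mu, alpha, k_max=5):
    # NB2 with mean mu and dispersion alpha maps to scipy's nbinom(n, p)
    # via n = 1/alpha and p = n / (n + mu)
    n = 1.0 / alpha
    p = n / (n + mu)
    return [stats.nbinom.pmf(k, n, p) for k in range(k_max + 1)]

print(count_probs(mu=0.8, alpha=1.5))  # e.g., implied at the first time point
print(count_probs(mu=2.4, alpha=1.5))  # e.g., implied at the last time point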
Brianna H posted on Thursday, December 13, 2012 - 4:19 pm
Hello Dr. Muthen,
I have a related question about reporting the effects of covariates on the intercepts and slopes of a zero-inflated Poisson model for a count outcome. My model has a continuous covariate and a
categorical (binary) covariate. In the Mplus Users Guide and in other areas of the Discussion Board, Dr. Muthen has recommended reporting STDYX for continuous covariates and STDY for binary
covariates. When I examine the Model Results, STDYX, and STDY output, however, the significance levels of the intercept and growth factors vary. For example, in the STDY output, the effect of the
binary covariate on the slope of the inflation factor is significant, but in the Model Results, this is not significant. In contrast, the effect of the binary covariate on the intercept of the count
growth curve is significant both in the Model Results and in STDY. Is it appropriate to report STDY for the effects of the binary covariate; STDYX for the continuous covariate; and to report overall
results from the Intercepts section of the Model Results? Thank you.
Bengt O. Muthen posted on Thursday, December 13, 2012 - 6:54 pm
Significance can be different for un-standardized and standardized estimates, in which case I would decide significance by the un-standardized ones.
I would report all unstandardized coefficients and their significance (SEs) and then also add the standardized - for which I would use STDY for binary and STDYX for cont's covariates. I don't think
SEs and significance are really needed for standardized values.