Casselman's Basis of Iwahori Vectors and the Bruhat Order
Canad. J. Math. 63(2011), 1238-1253
Printed: Dec 2011
• Daniel Bump,
• Maki Nakasuji,
W. Casselman defined a basis $f_u$ of Iwahori fixed vectors of a spherical representation $(\pi, V)$ of a split semisimple $p$-adic group $G$ over a nonarchimedean local field $F$ by the condition
that it be dual to the intertwining operators, indexed by elements $u$ of the Weyl group $W$. On the other hand, there is a natural basis $\psi_u$, and one seeks to find the transition matrices
between the two bases. Thus, let $f_u = \sum_v \tilde{m} (u, v) \psi_v$ and $\psi_u = \sum_v m (u, v) f_v$. Using the Iwahori-Hecke algebra we prove that if a combinatorial condition is satisfied,
then $m (u, v) = \prod_{\alpha} \frac{1 - q^{- 1} \mathbf{z}^{\alpha}}{1 -\mathbf{z}^{\alpha}}$, where $\mathbf z$ are the Langlands parameters for the representation and $\alpha$ runs through the
set $S (u, v)$ of positive coroots $\alpha \in \hat{\Phi}$ (the dual root system of $G$) such that $u \leqslant v r_{\alpha} < v$ with $r_{\alpha}$ the reflection corresponding to $\alpha$. The
condition is conjecturally always satisfied if $G$ is simply-laced and the Kazhdan-Lusztig polynomial $P_{w_0 v, w_0 u} = 1$ with $w_0$ the long Weyl group element. There is a similar formula for $\tilde{m}$ conjecturally satisfied if $P_{u, v} = 1$. This leads to various combinatorial conjectures.
Keywords: Iwahori fixed vector, Iwahori Hecke algebra, Bruhat order, intertwining integrals
MSC Classifications: 20C08 - Hecke algebras and their representations
20F55 - Reflection and Coxeter groups [See also 22E40, 51F15]
22E50 - Representations of Lie and linear algebraic groups over local fields [See also 20G05] | {"url":"http://cms.math.ca/10.4153/CJM-2011-042-3","timestamp":"2014-04-16T16:31:27Z","content_type":null,"content_length":"34686","record_id":"<urn:uuid:448fa3e3-c286-4d2d-95dd-b2374b9234e4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
Welcome to OnlineConversion.com
What is Linear Feet? How do I convert it to feet?
Linear feet (often called Lineal feet) are the same as regular feet. No conversion is necessary. If something is 6 linear feet tall, it is 6 feet tall.
It should be noted that the correct term is Linear, since Lineal refers to a line of ancestry, not to length.
There are times when the term Linear is used, and times when it is not. I'll give some examples of them.
First a definition. Linear means "a straight line" so a straight line from point A to point B is the linear distance.
One example would be this: it is 2200 linear miles from Seattle to Washington, DC, but if you were to drive from Seattle to Washington, DC, you would have to drive 2700 miles. The linear distance is a straight line from point A to point B, and freeways are rarely straight.
Another good example would be boards, wire fencing, and rolls of cloth, all of which are often sold in linear feet. That just means they are not taking the width into account. If you bought 100
linear feet of lumber, laying them down end to end would stretch for 100 feet, it wouldn't matter how wide the boards were. If you were to multiply the width of the board, or the width of the roll of
cloth, times the linear length, you would get the area.
The same applies for linear yards, linear meters, etc.
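The width-times-length rule described above can be written as a short calculation; the board and cloth numbers here are made up for illustration:

```python
# Area = linear length * width. "Linear feet" ignores width entirely,
# so multiplying by the width recovers the area in square feet.
def area_from_linear(linear_feet, width_feet):
    """Return area in square feet given a linear length and a width."""
    return linear_feet * width_feet

# 100 linear feet of cloth on a 5-foot-wide roll:
print(area_from_linear(100, 5))  # 500 square feet
```

The same calculation works unchanged for linear yards or linear meters, since only the unit label differs.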
» Return to the FAQ | {"url":"http://www.onlineconversion.com/faq_04.htm","timestamp":"2014-04-20T10:46:50Z","content_type":null,"content_length":"8890","record_id":"<urn:uuid:4c5a5a11-1883-4675-95a8-8101c7c42a0f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00243-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. Ph.D. University of Western Ontario 1989
Curriculum Vitae (pdf)
Research Interests: nonparametric estimation and inference, cross-validatory model selection, nonparametric instrumental methods, and entropy-based measures of dependence and their statistical
underpinnings. I am also interested in parallel distributed computing paradigms and their application to computationally intensive nonparametric estimators.
Li, Q. and J.S. Racine (2007), Nonparametric Econometrics: Theory and Practice, Princeton University Press, ISBN: 9780691121611, 768 Pages.
Here is the table of contents (pdf), Chapter 1 (pdf), the Errata (pdf), the solution manual containing code and answers to odd numbered questions (pdf), and R code for answers to all applied
questions (zip). A solution manual containing code and answers to all questions (odd and even) is available to instructors upon request. To receive a copy kindly email me your course syllabus along
with your surface mailing address. A hard copy will then be sent via surface mail.
You can order the book directly from Princeton University Press (press.princeton.edu/titles/8355.html) or from your favourite online retailer.
Racine, J.S. (2008), Nonparametric Econometrics: A Primer, Foundations and Trends in Econometrics: Vol. 3: No 1, pp 1-88. http://dx.doi.org/10.1561/0800000009.
An edited version of this monograph is reprinted in Russian and appears as Racine, J.S. (2008) "Nonparametric Econometrics: A Primer", Quantile, Number 4, pp 7-56.
Here is the R code to replicate examples in this primer (zip).
Edited Volumes
Oxford Handbook of Semiparametric and Nonparametric Econometric Methods, ISBN 978–0–19–985794–4, Edited By Jeffrey S. Racine, Liangjun Su, and Aman Ullah, Published: 2014.
Advances In Econometrics: Nonparametric Econometric Methods, Volume 25, ISBN: 978-1-84950-623-6, Edited by: Qi Li, Jeffrey S. Racine, Published: 2009.
Gallery of Code and Applications for the np, npRmpi, and crs R Packages
The following link (link to gallery) will take you to a gallery where you can find some commented examples of working code for a range of estimators contained in the np, npRmpi, and crs packages
outlined below. Feel free to email me with suggestions. I welcome code/examples that can be showcased and shared with other users, so please feel free to send me code that you would like to share and
I will host it in the gallery along with your contact information.
The R np and npRmpi Packages
Consult the np FAQ for responses to commonly asked questions and the user manual for functions, descriptions, and examples.
The R (www.r-project.org) np and npRmpi packages (current version 0.50-1) implement a variety of nonparametric and semiparametric kernel-based methods in R, an open source platform for statistical
computing and graphics. Methods include kernel regression, kernel density estimation, kernel conditional density estimation, and a range of inferential procedures. See the links to the vignettes
below for an overview of both packages (I would advise starting with the np vignette).
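The np package itself is written in R. Purely as an illustration of the idea behind kernel regression (one of the methods listed above), here is a minimal Nadaraya-Watson sketch in Python; the data, bandwidth, and function names are made up for the example and are not part of the np API:

```python
import math

def gaussian_kernel(u):
    # Standard Gaussian kernel weight.
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def nadaraya_watson(x0, xs, ys, h):
    """Locally weighted average of ys, weighted by kernel distance from x0."""
    weights = [gaussian_kernel((x0 - x) / h) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.1, 2.9, 4.2]
print(nadaraya_watson(2.0, xs, ys, h=0.5))  # close to 2.1
```

The bandwidth h controls the smoothing tradeoff; data-driven bandwidth selection (for example by cross-validation) is a central concern of the package.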
The np package is the standard package you would use under R, while the npRmpi package is a version that uses the message passing interface for parallel computing. The npRmpi package is designed for
executing batch programs on compute clusters and multi-core computing environments to reduce the run time for computationally intensive jobs. See the example files in the demo directory of the npRmpi
package for illustrative npRmpi code, and see the examples in the help files and the link for replicating examples for the primer above for code to generate a range of illustrative examples.
Here is a direct link to the np package on the Comprehensive R Archive Network (CRAN), a direct link to the npRmpi package on CRAN, a direct link to the CHANGELOG file on CRAN (documents differences
between all versions), an npRmpi test file `test.R' (text), the npRmpi .Rprofile file (text), install instructions for npRmpi under Windows (text), and instructions for compiling the npRmpi binary
from scratch under Windows (text), and instructions for compiling the npRmpi source from scratch for Mac OS X Mountain Lion (text). See the npRmpi github repository (link below) for a recent npRmpi
MS Windows binary (available as a binary zip file from the github Downloads menu) and a recent npRmpi Mac OS X binary (available as a binary tgz file from the github Downloads menu).
See the October 2007 Rnews article (pdf) that describes the np package, the np vignette (pdf) for an overview of the np package, the npRmpi vignette (pdf) for an overview of the installation and use
of the npRmpi package, and the entropy-based inference vignette for an overview of computing entropy measures (pdf) (R code).
See also the review of the np package that appeared in 2008 in the Journal of Applied Econometrics (link to article in the Wiley Online Library) and the review of the npRmpi package that appeared in
2011 in the Journal of Applied Econometrics (link to article in the Wiley Online Library).
These packages are hosted on github (link)
The R crs Package
Consult the crs FAQ for responses to commonly asked questions and the user manual for functions, descriptions, and examples.
The R (www.r-project.org) crs package (current version 0.15-22) implements multivariate regression splines (and quantile regression splines as of version 0.15-8) with both continuous and categorical
predictors in R, an open source platform for statistical computing and graphics. See the links to the vignettes below for an overview of the package.
Here is a direct link to the crs package on the Comprehensive R Archive Network (CRAN), and a direct link to the CHANGELOG file on CRAN (documents differences between all versions).
See the R Journal article (pdf) that describes the crs package, the crs vignette (pdf) for an overview of the crs package and the spline primer vignette for an overview of regression splines (pdf).
This package is hosted on github (link).
Recent Research Papers
Gao, Q. and L. Liu and J.S. Racine (forthcoming), "A Partially Linear Kernel Estimator for Categorical Data," Econometric Reviews.
Racine, J.S. (forthcoming), "Mixed Data Kernel Copulas," Empirical Economics.
Racine, J.S. and C. Parmeter (2014), "Data-Driven Model Evaluation: A Test for Revealed Performance," in `Handbook of Applied Nonparametric and Semiparametric Econometrics and Statistics', Oxford
University Press, (A. Ullah, J.S. Racine and L. Su Eds), 308-345.
Du, P. and C. Parmeter and J.S. Racine (2013), "Nonparametric Kernel Regression with Multiple Predictors and Multiple Shape Constraints," Statistica Sinica, Volume 23, 1343-1372.
Li, C. and J.S. Racine (2013), "A Smooth Nonparametric Conditional Density Test for Categorical Responses," Econometric Theory, Volume 29, 629-641.
Li, Q. and D. Ouyang and J.S. Racine (2013), "Categorical Semiparametric Varying-Coefficient Models," Journal of Applied Econometrics, Volume 28, 551-579.
Ma, S. and J.S. Racine (2013), "Additive Regression Splines with Irrelevant Categorical and Continuous Regressors," Statistica Sinica, Volume 23, 515-541.
Li, Q. and J. Lin and J.S. Racine (2013), "Optimal Bandwidth Selection for Nonparametric Conditional Distribution and Quantile Functions," Journal of Business and Economic Statistics, Volume 31,
57-65 (19 pages of supplementary material [proofs] available online at http://tandfonline.com/r/JBES).
Nie, Z. and J.S. Racine (2012), "The crs Package: Nonparametric Regression Splines for Continuous and Categorical Predictors," The R Journal, Volume 4, 48-56.
Parmeter, C. and J.S. Racine (2012), "Smooth Constrained Frontier Analysis," in `Recent Advances and Future Directions in Causality, Prediction, and Specification Analysis: Essays in Honor of Halbert
L. White, Jr.', Springer Verlag, (X. Chen and N.R. Swanson Eds), 463-488.
Racine, J.S. (2012), "RStudio: A Platform Independent IDE for R and Sweave," Journal of Applied Econometrics, Volume 27, 167-172.
Hansen, B. and J.S. Racine (2012), "Jackknife Model Averaging," Journal of Econometrics, Volume 167, 38-46.
Gyimah-Brempong, K. and J.S. Racine and A. Gyapong (2012), "Aid and Economic Growth: Sensitivity Analysis," Journal of International Development, Volume 24, 17-33.
Zhang, Z., D. Chen, W. Liu, J.S. Racine, S. Ong, Y. Chen, G. Zhao and Q. Ziang (2011), “Nonparametric Evaluation of Dynamic Disease Risk: A Spatio-Temporal Kernel Approach,” PLoS ONE, Volume 6,
Number 3, e17381, pages 1–8.
Racine, J.S. (2011), "Nonparametric Kernel Methods for Qualitative and Quantitative Data," The Handbook of Empirical Economics and Finance, CRC Press, 183-204.
Li, Q. and J.S. Racine (2010), "Smooth Varying-Coefficient Estimation and Inference for Qualitative and Quantitative Data," Econometric Theory, Volume 26, 1607-1637.
Gyimah-Brempong, K. and J.S. Racine (2010), "Aid and Economic Development: A Robust Approach," Journal of International Trade & Economic Development, Volume 19, 319-349.
Racine, J.S., (2009), "Nonparametric and Semiparametric Methods in R," Advances in Econometrics: Nonparametric Econometric Methods, Elsevier Science, Volume 25, 335-375.
Li, C., D. Ouyang and J.S. Racine (2009), "Nonparametric Regression with Weakly Dependent Data: The Discrete and Continuous Regressor Case," Journal of Nonparametric Statistics. Volume 21, Number 6,
pp. 697-711.
Kiefer, N.M. and J.S. Racine (2009), "The Smooth Colonel Meets the Reverend," Journal of Nonparametric Statistics, Volume 21, Issue 5, pp 521-533.
Li, Q., J.S. Racine and J. M. Wooldridge (2009), "Efficient Estimation of Average Treatment Effects With Mixed Categorical and Continuous Data," Journal of Business and Economic Statistics, Volume
27, Number 2, pp 206-223.
Meredith, E. and J.S. Racine (2009), “Towards Reproducible Econometric Research: The Sweave Framework,” Journal of Applied Econometrics, Volume 24, pp 366-374.
Li, Q., E. Maasoumi and J.S. Racine (2009), "A Nonparametric Test for Equality of Distributions with Mixed Categorical and Continuous Data," Journal of Econometrics, Volume 148, Issue 2, pp. 186-200.
Ouyang, D., Q. Li and J.S. Racine (2009), "Nonparametric Estimation of Regression Functions with Discrete Regressors," Econometric Theory, Volume 25, Issue 01, pp 1-42.
Maasoumi, E. and J.S. Racine (2009), "A Robust Entropy-Based Test of Asymmetry for Discrete and Continuous Processes," Econometric Reviews, Volume 28, pp 246 - 261.
Li, Q. and J.S. Racine (2008), "Nonparametric Estimation of Conditional CDF and Quantile Functions with Mixed Categorical and Continuous Data," Journal of Business and Economic Statistics, Volume 26,
Number 4, pp. 423-434.
Hayfield, T. and J.S. Racine (2008), "Nonparametric Econometrics: The np Package," Journal of Statistical Software, Volume 27, Number 5, pp 1-32, http://www.jstatsoft.org/v27/i05/ .
Li, J. and J.S. Racine (2008), "Maxima: An Open Source Computer Algebra System," Journal of Applied Econometrics, Volume 23, Issue 4, pp 515-523.
Racine, J.S. (2008) "Nonparametric Econometrics: A Primer," Foundations and Trends in Econometrics, Volume 3, Number 1, pp 1-88.
Li, Q., J.S. Racine and J. Wooldridge (2008), "Estimating Average Treatment Effects with Continuous and Discrete Covariates: The Case of Swan-Ganz Catheterization," American Economic Review, Volume 98,
Number 2, pp. 357-62.
Working Papers, Citation Summary, and Miscellany
Working Papers and Articles (RePEc Author Service)
Citation Counts and Related Indices (Google Scholar Citations)
Top 10% Institutions and Economists in the Field of Econometrics (RePEc)
| {"url":"http://www.economics.mcmaster.ca/faculty/racinej","timestamp":"2014-04-16T10:25:35Z","content_type":null,"content_length":"60268","record_id":"<urn:uuid:1cc935a3-db6f-4e76-988e-97ed5b2ea2a4>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
pigeon-hole principle
February 11th 2011, 07:50 AM
pigeon-hole principle
Statement 1 - If there are 'm' holes and 'n' pigeon's AND n>m then there is at least one hole with >1 pigeon
Statement 2 - If there are 'm' holes and 'n' pigeon's AND n<m then there is at least one hole with 0 pigeon
Are the above two statements equivalent?
Sorry if this question is trivial but I'm just not able to put the two concepts together.
February 11th 2011, 09:05 AM
February 14th 2011, 03:05 AM
First, it makes more sense to talk about equivalent predicates, or properties, not propositions. These two statements are equivalent because both are true, so they are equivalent to any true statement.
A more interesting way to formulate the problem is this. Statement 1 says that |A| > |B| iff every function from A to B is not an injection. Statement 2 says that |A| < |B| iff every function from A to B is not a surjection. We can combine them to say that every function from A to B is not an injection iff every function from B to A is not a surjection.
This is easy to prove if we note that a function is an injection iff it has a left inverse, and a function is a surjection iff it has a right inverse.
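The injection/surjection formulation in this thread can be checked by brute force on small sets; this is a toy verification (with made-up sets), not a proof:

```python
from itertools import product

def all_functions(domain, codomain):
    """Yield every function domain -> codomain as a dict."""
    for images in product(codomain, repeat=len(domain)):
        yield dict(zip(domain, images))

A, B = [1, 2, 3, 4], ['a', 'b', 'c']   # |A| > |B|

# Statement 1: no function from A to B is an injection.
no_injection = all(len(set(f.values())) < len(A) for f in all_functions(A, B))
# Statement 2: no function from B to A is a surjection.
no_surjection = all(set(f.values()) != set(A) for f in all_functions(B, A))
print(no_injection, no_surjection)  # True True
```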
February 15th 2011, 08:59 PM
Thanks emakarov.
Maybe I didn't formulate it properly but what I meant was
Does Statement 1 imply Statement 2, and does Statement 2 imply Statement 1?
February 15th 2011, 11:37 PM
February 16th 2011, 12:01 AM
Sorry but I'm not following your argument.
Let us say
Statement 1 = 2 is a prime
Statement 2 = 3 is a prime
Now we know both are true. But I will not call them equivalent, as one doesn't follow from the other and vice versa. For me they are more like two independent true statements.
Am I missing something?
February 16th 2011, 12:15 AM
To be sure, you have a reasonable intuition. However, you still need to define "equivalent" or, rather, what it means for one statement to imply another. The standard definition says that the implication is true iff the premise is false or the conclusion is true. It does not require that the premise is used essentially in deriving the conclusion.
There was work in logic and philosophy of mathematics to define other implications. See, for example, relevance logic. | {"url":"http://mathhelpforum.com/discrete-math/170911-pigeon-hole-principle-print.html","timestamp":"2014-04-18T22:28:35Z","content_type":null,"content_length":"7454","record_id":"<urn:uuid:ee7d55df-5238-43a6-9aaf-46371bda4dd8>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
LBM: Approximate Invariant Manifolds and Stability
Seminar Room 1, Newton Institute
We study the lattice Boltzmann models in the framework of the geometric singular perturbation theory. We begin with the lattice Boltzmann system, discrete in both velocity space and time, with the alternating steps of advection and relaxation common to all lattice Boltzmann schemes. When the time step is small, this system has an approximate invariant manifold close to the local equilibrium distributions. We found a time-step expansion for the approximate invariant manifold and proved its conditional stability to any order of accuracy, under the condition that the space derivatives of the corresponding order remain bounded. On this invariant manifold a macroscopic dynamics arises, and we found the time-step expansion of the equation of the macroscopic dynamics.
| {"url":"http://www.newton.ac.uk/programmes/KIT/seminars/2010090716501.html","timestamp":"2014-04-20T21:41:09Z","content_type":null,"content_length":"6562","record_id":"<urn:uuid:c9f175d9-1c73-43fb-af0b-6a9e5dc6909d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topic: minimal set of Hilbert axioms for plane geometry; Moore, Greenberg
& Jahren
Replies: 5 Last Post: Dec 2, 2011 11:15 PM
Posted: Nov 26, 2011 12:00 PM
Is there a good treatment of a minimal version of Hilbert's axioms for
plane geometry, with proofs that this minimal version implies the
stronger set of axioms in Hilbert's book Foundations of Geometry and
in Greenberg's book Euclidean and Non-Euclidean Geometries?
I wrote such a paper myself
http://www.math.northwestern.edu/~richter/hilbert.pdf based on notes
by Bjorn Jahren http://folk.uio.no/bjoernj/kurs/4510/gs.pdf and
helpful conversations with him. I imagine Jahren would be a coauthor
if my paper was worth submitting.
I found my minimal version in Venema's book Foundations of geometry.
The Wiki link http://en.wikipedia.org/wiki/Hilbert%27s_axioms points
out that R. L. Moore showed that Hilbert's axiom II.4 was redundant,
but I know of no proof of this other than mine. Greenberg proves that
Hilbert's axiom II.2 is too strong in an exercise. Greenberg does not
list Hilbert's redundant axiom II.4, but he strengthens Hilbert's
axiom II.5, which says that if a line intersects an edge of a triangle, it
must intersect another edge as well. Greenberg, however, strengthens
this axiom to say that a line has exactly two sides, and shows this
easily implies that a line cannot intersect all three edges of a
triangle. Jahren explained how Hilbert's unstrengthened axiom II.5
implies that a line cannot intersect all three edges of a triangle,
but this doesn't quite prove that a line only has two sides: we need
to handle the case of 3 collinear points. I did this, and this
proves Hilbert's redundant axiom II.4.
Let me explain my thinking about high school Geometry, as I wrote my
paper in order to teach to my son, who read it, and is working through
Greenberg's book. I learned that
1) Euclid wasn't too rigorous, as he superposed and missed
2) Birkhoff came up with a much shorter rigorous list of axioms than
Hilbert's by starting with the real line to measure lengths & angles.
3) High school Geometry textbooks more or less follow Birkhoff.
4) Kodaira wrote a very nice textbook on Hilbert's axioms that top
high school students could read, but the book was not translated from
Japanese and is now out of print.
The textbook my son is using seems particularly bad to me. They don't
even formally state Birkhoff's two real line axioms, and only mention
the axioms in remarks in the text. Their first theorem is that any
two right angles are congruent. Their proof is very simple:
90 degrees = 90 degrees!
The point is that Euclid took this result as an axiom, but Hilbert
gave a serious proof of it using his axioms.
| {"url":"http://mathforum.org/kb/thread.jspa?threadID=2319291","timestamp":"2014-04-17T04:08:52Z","content_type":null,"content_length":"25148","record_id":"<urn:uuid:deb067df-695e-4566-8da1-16e614beb59c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00545-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 1,940
The fourth standing wave frequency is 4*f1, where f1 is the fundamental frequency. The seventh standing wave frequency is 7*f1. 7*f1 - 4*f1 = 3*f1 = 144 Hz, so f1 = 144/3 = 48 Hz.
(2.5 cm)sin[(3.0 m^-1)x - (27 s^-1)t] + (2.5 cm)sin[(3.0 m^-1)x + (27 s^-1)t] = (5.0 cm)sin[(3.0 m^-1)x]cos[(27 s^-1)t], using the identity sin(a - b) + sin(a + b) = 2 sin(a)cos(b); the sum is a standing wave.
A) 12 choose 3, calculated as 12! / (3! * 9!) = 12*11*10 / (3*2)
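The count above works out as follows, checked against the standard library:

```python
import math

# 12 choose 3 = 12! / (3! * 9!), which cancels to 12*11*10 / (3*2).
manual = (12 * 11 * 10) // (3 * 2)
print(manual, math.comb(12, 3))  # 220 220
```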
Energy = Power*time. Power = I^2*r = I*V = 1.5*1.2. So 5 * 10^3 = 1.5*1.2*time; solve for time. Your answer will be in the SI unit of seconds; divide this answer by 60 sec/min to convert the answer to minutes.
computer science
You're going to have to look up the functions in the particular language you're working with to learn how to store arrays, select elements of the array, and print statements.
Calcium propanoate or calcium propionate has the formula Ca(C2H5COO)2 . . . try using your textbook to look up the chemical formulas for the other reactants involved in these equations. . . such as
hydrogen chloride and anhydrous ethanoic acid. Then research how these compound...
I imagine you're supposed to solve for the missing variables. I'll do a couple so you get the idea. 1) 2a - a/5 = 2/3 adding like terms: 1.8 a = 2/3 9/5 a = 2/3 a = 10/27 2) 3(a+4)+1 = a/2 3*a + 12 +
1 = a/2 3*a + 13 = a/2 13 = -2.5a a = 13/-2.5 . . .
Algebra 1- Math
2x - y = -4 -3x + y = -9 From the first equation: y = 2x + 4 Substitute this into the second equation: -3x + 2x + 4 = -9 -x + 4 = -9 -x = -13 x = 13 Then plug this answer back into the first equation
2*13 - y = -4 26 - y = -4 -y = -30 y = 30
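The substitution steps above can be checked with exact arithmetic; the small solver here (name and Cramer's-rule approach are my own choice, not part of the original answer) handles any 2x2 system:

```python
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

# 2x - y = -4  and  -3x + y = -9
x, y = solve_2x2(2, -1, -4, -3, 1, -9)
print(x, y)  # 13 30
```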
You need to calculate the equilibrium constant K, which for C would be ([H2BO3-]*[CO3 2-]) / ([HCO3-]*[HBO3 2-]) Look up the equilibrium concentrations [] of the various compounds, and solve. If K >
1, products are favored. If K is less than 1, then reactants are favored.
First, plot the parallelogram on graphing paper. Next, calculate the length of each side using the formula for the distance between 2 points (x1, y1), (x2, y2): ((x1-x2)^2 + (y1-y2)^2)^0.5 Then
calculate the slope of each line passing through these two points: slope = (y2-y1)/(x2-x1).
why don't you try getting out your compass, following each of the student's steps, and see which one yields a parallelogram PQRS?
Let P be the principal invested. The amount at the end of four years will be for each scenario 1: P*(1+0.0225/2)^(4*2) 2: P*(1+0.0225/4)^(4*4) 3: P*(1.0195^4)
I. 71. The mode, by definition is the answer that appears most frequently. II. The midrange is 64, which is by definition the arithmetic mean of the largest and the smallest values in a sample or
other group. The range is 55. Let H = highest score, and L = lowest score. Then,...
suppose that the times taken to complete an online test are normally distributed with a mean of 45 minutes and a standard deviation of 12 minutes. find the probability a. that for a randomly selected
test, the time taken was more than 1 hour. b. that the mean number of minutes...
250 mL
The length of the telescope is 0.2 + 1.6
concepts to population health
They should use this data to identify where resources should be most efficiently concentrated/directed in order to best affect the health outcomes of the population as a whole.
y = y0 + vy0*t -1/2*g*t^2 x = x0 + vx0*t where y is the y position as a function of time, y0 is the initial y position, vy0 is the initial y velocity, g is the acceleration due to gravity (9.8 m/s^
2), t is time, x is the x position as a function of time, vx0 is the initial x v...
You can take any two currency exchange rates and divide them by each other to get 1: for example 1 CDN = $0.829 Dividing both sides by 1 CDN 1 CDN / 1 CDN = 1 = $0.829/1 CDN Since you can multiply
anything by 1 and still get the same result (1*x = x); You can multiply by the c...
What is the smallest of 3 consecutive positive integers if the product of the smaller two integers is 5 less than 5 times the largest integer? I can't remember how to start this.
Precalculus/Trig 5
sin(26) = (120+15)/guywire where guy wire is the length of the guy wire
make a plot rounding to the nearest ten: (1, 630); (2, 670); (3, 740); (4,650); . . . then a plot rounding to the nearest 100: (1, 600); (2, 700); (3, 700); (4, 700) . . . c. What is the average El
Nino rain (628 + 669 + 740 + . . . + 872) / 23 Is this less or greater than the...
Q = m*c*(130- -22) where m is the mass of the water, c is the specific heat of water, which you will have to look up; Make sure it matches with your units of mass and temperature
If she had worked those two days, she would make +$200 Instead, she spends $180 + $198 = $378 The total opportunity cost is 378 + 200 = 578. This is how much money she'll lose by going on the trip
instead of working
centripetal acceleration is v^2/r The tension in the chain is m*v^2/r where v is the speed = 2*PI*r*f where f is the frequency; r is the radius; m is the mass v = 2*PI*1.2*0.432 solve for the
acceleration and tension using the above equations
v = 2*PI*r/T where T is the period 2010 = 2*PI*0.15/T centripetal acceleration = v^2/r Force = m*v^2/r
The greatest coin is a quarter; If you choose three quarters, you only have $0.75, so $5.00 is greater.
Take components of the force: F1x = 364*cos(33) F1y = 364*sin(33) F2x = 522*cos(11) F2y = 522*sin(11) Fnetx = F1x + F2x Fnety = F1y + F2y Fnet = ((Fnetx)^2 + (Fnety)^2)^0.5 at an angle theta given by
tan(theta) = Fnety/Fnetx where Fnet = m*a = 3050*a Solve for a, the acceleration
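As a numerical check on the component method above (the force values come from the problem; the helper name is my own):

```python
import math

def net_force(forces):
    """Sum (magnitude, angle_in_degrees) pairs by components;
    return net magnitude and direction in degrees."""
    fx = sum(m * math.cos(math.radians(a)) for m, a in forces)
    fy = sum(m * math.sin(math.radians(a)) for m, a in forces)
    return math.hypot(fx, fy), math.degrees(math.atan2(fy, fx))

mag, ang = net_force([(364, 33), (522, 11)])
print(mag, ang)       # net force magnitude (N) and direction (degrees)
print(mag / 3050)     # acceleration of the 3050 kg object (m/s^2)
```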
lambda * f = c where c is the speed of light = 3*10^8 m/s; f is the frequency; lambda is the wavelength.
vf - vi = 5*10^6 - 2*10^6 = 3*10^6, where vf is the final speed and vi is the initial speed. Assuming a constant acceleration, the electron passes through 2.1 × 4 cm = 8.4 cm of paper. 8.4 = 1/2*a*t^2 and deltav = 3*10^6 = a*t, where a is the acceleration and t is the time. You have two equations with two unknowns; solve for a and t.
525 - 413
You are not going to die. You just might not get the right answer.
Use conservation of momentum. In the x direction: 0.02*5.5 = 0.02*vA*cos(65) + 0.04*vB*cos(37). In the y direction: 0 = 0.02*vA*sin(65) - 0.04*vB*sin(37). You should be able to use algebra to solve these equations, since you have two equations with 2 unknowns; vA is the final speed of the 0.02 kg ball and vB is the final speed of the 0.04 kg ball.
F(q2) = ke*q1*q2/(r12)^2 + ke*q3*q2/(r23)^2 = 0 or q1*q2/(r12)^2 = q3*q2/(r23)^2 q*-2q/(r12)^2 = 3q*-2q/(r23)^2 -2*q^2/(r12)^2 = -6q^2/(r23)^2 or 1/(r12)^2 = 3/(r23)^2 where r12 is the distance
between charge 1 and 2; r23 is the distance bewteen charge 2 and 3. For symmetry sa...
deltaL = a*L*deltaT, where deltaL is the change in length, a is the linear expansion coefficient, L is the initial length, and deltaT is the temperature change. deltaL = 10^-5*178*(-35 - 39)
N = (1 + E)^12 - 1, where N is the effective annual rate and E is the effective monthly rate. For Ally Bank, (N+1)^10 = 1.25. Use this equation to solve for N: N + 1 = 1.25^(1/10), so N = 1.25^(1/10) - 1. Then use the first equation to find E.
math (calculate speed)
m*g*h = 1/2*m*v^2 where m is the mass; g is the acceleration due to gravity; v is the speed. At the top of his dive, the swimmers energy is entirely potential energy = m*g*h; When he enters the pool,
the potential energy is converted to kinetic energy: 1/2*m*v^2, so v = (2*g*h)^0.5.
-3*y + 2*x = -12 multiply both sides by a negative 1: 3*y - 2*x = 12 3*y = 12 + 2*x y = 2/3x + 4 This is an equation of a line with slope 2/3; y intercept = 4; Graph this equation; the graph of this
line is the solution to this equation
Physics, waves
v = (T/mu)^0.5 where v is the speed, T is the tension, and mu is the linear density v = lamda*f ; lambda = 2*L for the fundamental mode, where f is the frequency, and lambda is the wavelength f = (1/
(2*L))*(T/mu)^0.5 1.61 g = 0.00161 kg mu = 0.00161/0.64 200 = (1/(2*0.64))*(T/...
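Finishing the truncated last step (a sketch): solving f = (1/(2L))*(T/mu)^0.5 for the tension gives T = mu*(2*L*f)^2.

```python
mu = 0.00161 / 0.64       # linear density in kg/m (1.61 g over a 0.64 m string)
L = 0.64                  # string length in m
f = 200.0                 # fundamental frequency in Hz

# From f = (1/(2*L)) * (T/mu)**0.5, solve for the tension T:
T = mu * (2 * L * f) ** 2
print(T)                  # tension in newtons
```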
science, physics
The total energy is entirely kinetic energy as the block passes through the equilibrium position E = 1/2*m*v^2 where m is the mass, and v is the velocity
9.4 cm = 0.094 m x = -0.094*cos(w*t) v = dx/dt = 0.094*w*sin(w*t) where w = (k/m)^0.5 x is the x position; v is the speed; w is the angular frequency; k is the spring constant; m is the mass at t = 0.24
s, v = 0.094*((2080/0.46)^0.5)*sin(((2080/0.46)^0.5)*0.24)
Her expectation of winning is 1%. The fair value is $200/100 = $2.00
Let the rate at which A works be a; the rate at which B works be b; the rate at which C work be c: (a + b)*12 = 1 (b + c)*15 = 1 a = 2*c gives b + a/2 = 1/15 b = 1/15 - a/2 Substituting into the
first equation: (a + 1/15 - a/2)*12 = 1 a/2 + 1/15 = 1/12 a/2 = 1/12 - 1/15 = (15-...
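The same elimination can be finished with exact fractions (a sketch):

```python
from fractions import Fraction

# Rates per day: (a + b)*12 = 1, (b + c)*15 = 1, a = 2*c.
# With c = a/2 the second equation gives b = 1/15 - a/2, and
# substituting into the first gives a/2 = 1/12 - 1/15.
a = 2 * (Fraction(1, 12) - Fraction(1, 15))
c = a / 2
b = Fraction(1, 15) - a / 2
print(a, b, c, 1 / a)   # 1/a is the number of days A needs alone
```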
Plot the points (140, 1.4); (150, 1.25); (170, 0.93); (175, 0.78); (205, 0.43) on your graphing calculator; Use your calculator manual to find out how to fit this data to a straight line (linear
model), and then a quadratic equation (quadratic model) Look at the results; which...
The velocity of the passenger is (50-3) = 47 m/s East
Social Studies
Summarize the major factors that allowed the Incas to conquer and rule their large empire. Some suggest that, in many ways, the Incas were like the Romans or other Pre-Columbian American
Civilizations. Evaluate whether they were more similar or different, and why. Make sure to...
solve the following logarithmic equation. Be sure to reject any value of x that is not in the domain of the original expression. Then round the solution to two decimal places. ln x = 8 I get e^8 But
am having a hard time with rounding to two decimal places. Can someone help?
Thanks for telling me how to actually write it here.
solve the following exponential equation. exact answers only: pi^(1-8x) = e^(3x) *I know that I did not post this correctly but I do not know how to put the 1-8x above the pi symbol and the 3x above the e.
calculus-can someone please help me with this ques
I have two questions if someone can PLEASE help. 1.solve the equation in the real number system. x^4+11x^3+24x^2-23x+35=0 **Please show work** 2.Use the remainder theorem to find the remainder. When
f(x)is divided by x-3. Then use the factor theorem to determine whether x-3 is...
Using the best-fit line below for prediction, answer the following questions: a) What would you predict the price of Product X in volume of 150 to be (approximately)? b) What would you predict the
price of Product X in volume of 100 to be (approximately)?
What is the frictional force between the tires of a 2000 kg car and an asphalt road if the coefficient of friction is 1.2?
If the coefficient of kinetic friction between a 36 kg crate and the floor is 0.31, what horizontal force is required to move the crate at a constant velocity (so that FNET=0.0 N) across the floor?
Look up the density of saltwater and the density of fresh water. The ship will sit higher in the liquid with greater density. The density of the water displaced times the volume of the water
displaced is equal to the mass of the ship. A fluid with a greater density will requ...
Let the new point be (x1, y1, z1) Then, using the distance formula: ((x1-1)^2 + (y1-3)^2 + (z1-3)^2)^0.5 = 5 (x1-1)^2 + (y1-3)^2 + (z1-3)^2 = 25 Put all coordinates in terms of one variable. I'm choosing
x1 first: x1+2/3 = y1+1/2; y1 = x1 + 2/3 - 1/2 = x1 + 1/6 x1 + 2/3 = z1-3/2...
For the square with area 25 yd^2, x^2 = 25; x = 5; where x is the length of a side of the square. The perimeter of this square is 5*4 = 20 For the square with area 5 yd^2, y^2 = 5; y = 5^0.5; where y
is the length of the side of this square. The perimeter of this square is 4*(...
pH = -log[H+] pOH = -log[OH-] and pH + pOH = 14 pOH = -log(0.01) = 2 2 + pH = 14; pH = 12
This is a perfect opportunity for you to develop the skill of reading your textbook and finding the necessary information. You will probably need to find functions that read and search data, separate
data into components, and store data. Find these functions, and read through ...
Available is a 5 M/liter solution. You need 100 mL of a 0.91 M solution. In units of moles, you need 0.91 mol/liter * 0.1 liter = 0.091 moles of HNO3 0.091 moles / (5moles/liter) = 0.0182 liters of
the 5 M solution. Take 0.0182 liters of the 5 M solution, and dilute to 100 mL...
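The arithmetic above is just C1*V1 = C2*V2 rearranged; as a quick sketch:

```python
moles_needed = 0.91 * 0.100        # 0.91 mol/L times 0.100 L
v_stock = moles_needed / 5.0       # liters of 5 M stock to dilute to 100 mL
print(moles_needed, v_stock)
```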
V = 4/3*PI*r^3 dV/dt = 12 = 4*PI*r^2*dr/dt = 4*PI*1^2*dr/dt Solve for dr/dt, the rate at which the radius is growing
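Carrying out that last step (a sketch, with r = 1 at the instant in question):

```python
import math

dV_dt = 12.0    # given rate of change of volume
r = 1.0         # radius at the instant of interest

# From V = (4/3)*pi*r**3, differentiating gives dV/dt = 4*pi*r**2 * dr/dt.
dr_dt = dV_dt / (4 * math.pi * r ** 2)
print(dr_dt)    # rate at which the radius is growing
```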
help math
vboat + vcurrent = 23/3.6 vboat - vcurrent = 23/3 where vboat is the speed of the boat without a current, vcurrent is the speed of the current. The first equation models the boat traveling with the current;
the second equation models the boat traveling against the current. Use algebr...
Algebra 2
graph the function on your graphing calculator. The domain is the set of x values for which the function is defined; the range is the set of y values. Look at the graph and determine the set.
deltaQ = 0; where Q = m*C*deltaT m is the mass, C is the specific heat, deltaT is the change in temperature 608*C*(Tf-77) + 429*C*(Tf-23) = 0 Where Tf is the final temperature Solve for Tf
The boundaries for phi are 0 to 2*PI The boundaries for theta are 0 to PI
Q = m*C*(Tf-Ti) where Q is the heat, m is the mass of the substance, C is the specific heat, Tf is the final temperature; Ti is the initial temperature. deltaQ = 0 or 5*C(Tf - 10) + 1*C*(Tf - 40) = 0
divide by C: 5*(Tf-10) + (Tf-40) = 0 Solve for Tf.
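Solving that last line explicitly, Tf is just a mass-weighted average (a sketch):

```python
# 5*(Tf - 10) + 1*(Tf - 40) = 0  =>  6*Tf = 5*10 + 1*40
Tf = (5 * 10 + 1 * 40) / (5 + 1)
print(Tf)   # final temperature
```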
a) 160 males answered the survey; and 110/160 = 68.75% preferred the current schedule. 140 females answered the survey, and only 50/140 = 35.71% preferred the current schedule, and 90/140 preferred
the flex schedule b) One way of describing the preferences is that males are al...
The moment of inertia of a hoop is m*r^2; the moment of inertia of a solid cylinder is 1/2*m*r^2; the moment of inertia of a solid sphere is 2/5*m*r^2; the moment of inertia of a hollow spherical shell is 2/3*m*r^2
Volume of air in the balloon = V = (4/3)*PI*r^3 = (4/3)*PI*0.5^3 mass of air displaced is 1.29 * V mass of helium is 0.181*V Upward force due to the density difference between helium and air is (1.29 -
0.181)*V*g Sum of forces in the y direction is 0: (1.29 - 0.181)*V*g - T - 0.0120 =...
F = -k*x where F = force, x is displacement 70.4 = -k*0.0535
F*x = 1/2*m*v^2 where F is the average force, x is the distance over which the bullet is accelerated, v is the speed, m is the mass of the bullet F*0.432 = 1/2*22*907^2
Power = F*v where F is the force, v is the speed 53.3 = F * (1.32m/1.22s)
Evaluate each of these answers: For example, evaluate A. (-1/2)^4 + 7*(1/2)^2 - 9*(1/2) -1 and 0^4 + 7*0^2 - 9*0 -1 If one of these answers is negative and one of these is positive, then the
Intermediate Value Theorem guarantees that the polynomial has a root in this interval....
Algebra 1- Math
a. At the start, months = 0; employees = 2. In 6 months, months = 6; employees = 14 (0,2) (6,14) b and c. 2 employees/month is the slope. This is how many employees the company would have to add per
month to get 14 employees at the end of 6 months E = 2 + r*m where E is the numb...
MnO4−(aq) + SO32−(aq) + H+(aq) → Mn2+(aq) + SO42−(aq) + H2O(l) in the form a*MnO4−(aq) + b*SO32−(aq) + c*H+(aq) → d*Mn2+(aq) + e*SO42−(aq) + f*H2O(l) Start by counting how many of each are on each
side. I'm going to choose to...
Algebra 2
B = P*e^(Yr) where B is the balance, P is the principal = 1600; e is the number e; Y is the number of years; and r is the rate 0.079 Plug these numbers in and solve for the answer.
Start the person at (0,0), and calculate the new position after each path. After path 1: (80,0) After path 2: (80, -250) After path 3: x displacement is -130*cos30; y displacement is -130*sin30 (80 -
130*cos30, -250 -130*sin30) After path 4: x displacement is -190*cos30, y dis...
How many squares are on each side? x*y = 49 where x is the number of squares for the width, and y is the number of squares for the length. 120000 = X*Y where X is the width in feet, Y is the length in
feet 120000/49 = (X/x) * (Y/y) You have X/x = Y/y
Math help give example
Third degree means that the highest power of the variable is three. One example: x^3 + x^2 + x + 1
Look up how the vehicle should be serviced -- i.e., checkup every 5000 miles? etc. Determine the best method of doing so--consider all options available, such as hiring one person to do the job,
scheduling periodic checkups at a repair shop, etc.
Math help storie problem
let g be the price of green peppers and r be the price of red peppers. 3*g + 2*r = 3.45 4*g + 3*r = 5.00 Use algebra to solve for g and r
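One way to finish the algebra is elimination (a sketch): multiply the first equation by 3 and the second by 2, then subtract to eliminate r.

```python
# 3*g + 2*r = 3.45
# 4*g + 3*r = 5.00
# (9g + 6r) - (8g + 6r) = 3*3.45 - 2*5.00, so:
g = 3 * 3.45 - 2 * 5.00          # price of a green pepper
r = (3.45 - 3 * g) / 2           # back-substitute into the first equation
print(g, r)
```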
The derivative of sinh(x) is cosh(x), and the derivative of cosh(x) is sinh(x) So the nth derivative of sinh(x) is cosh(x) if n is odd; and the nth derivative of sinh(x) is sinh(x) if n is even The
nth derivative of cosh(x) is sinh(x) if n is odd; and the nth derivative of cosh...
rho1*V1 = rho2*V2 where rho1 is the density of the object; V1 is the volume of the ship; rho2 is the density of the water; V2 is the volume of the water mass of object = 0.97*10^6/g = 0.97*10^6/9.8
density of object = 0.97*10^6/(9.8*0.03*A) where A is the area (0.97*10^6/(9.8*0.03...
Determine the approximate midpoint of all the areas serviced, both in terms of distance, and with respect to traffic speed limits, road conditions, etc.
College Algebra
x*y = 480 2x + 2y = 88 solve for x and y, the width and length
v = v0 - g*t where v is speed as a function of time, v0 is the initial speed, and g is the acceleration due to gravity v = 24 - 9.8*t = 0 Solve for t When it returns to its starting point, the
time will be 2*t, using t from the first question. Calculate v at this time (...
tan 60 = 120/x; x = 120/tan60
London dispersion force is a weak intermolecular force between two atoms or molecules in close proximity of each other. The force is a quantum-mechanical attraction arising from instantaneous dipoles induced in the
electron clouds of two atoms or molecules as they approach each other. The London di...
A = P*r*((1+r)^n)/((1+r)^n - 1) where A = payment Amount per period P = initial Principal (loan amount) r = interest rate per period n = total number of payments or periods n = 30*12 = 360 P = 250000
r = 0.045/12 = 0.00375 per month (the 4.5% annual rate compounded monthly) A = 250000*0.00375*(1.00375^360)/(1.00375^360 - 1)
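Assuming the 4.5% annual rate is compounded monthly (so the per-period rate is 0.045/12), the payment works out as follows (a sketch):

```python
P = 250000.0           # principal
r = 0.045 / 12         # interest rate per monthly period
n = 30 * 12            # total number of monthly payments

# Standard amortization formula: A = P*r*(1+r)^n / ((1+r)^n - 1)
A = P * r * (1 + r) ** n / ((1 + r) ** n - 1)
print(round(A, 2))     # monthly payment in dollars
```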
The tungsten filament emits white light, or the entire spectrum of colors of light. (look at any light bulb when its on). When the light is viewed with a spectroscope, it separates the white light
into the different colors.
The formula for calculating the payment amount is shown below. A = P * (r*(1+r)^n) / ((1+r)^n - 1) Simple Amortization Calculation Formula where A = payment Amount per period P = initial Principal
(loan amount) r = interest rate per period n = total number of payments or periods...
The probability of this combination occurring is (0.23^175) * (0.77 ^625)
Δx/3*[f(x0)+4f(x1)+2f(x2)+4f(x3)+........+2f(xn−2)+4f(xn−1)+f(xn)] = 30/3 * (76 + 4*118 + 2*130 + 4*143 + 2*139 + 4*136 + 2*137 + 4*139 + 2*130 + 4*122 + 60)
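Writing the same Simpson's-rule sum out explicitly (a sketch, assuming Δx = 30 and the eleven sampled values shown above):

```python
h = 30.0
values = [76, 118, 130, 143, 139, 136, 137, 139, 130, 122, 60]
coeffs = [1, 4, 2, 4, 2, 4, 2, 4, 2, 4, 1]    # Simpson's rule weights

integral = h / 3 * sum(c * v for c, v in zip(coeffs, values))
print(integral)
```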
The density of maple syrup is 1.33g/mL. a bottle of maple Syrup contains 740mL of syrup. What is the mass of maple syrup?
An arrow pointing to the bottom left
I think that you graph both; if you were asked to graph f>3 AND f< -2, there would be nothing to graph, because there is no intersection of the graphs. But because the selector is OR, you graph all
the points for which f>3 (y > 3) or f<-2 (y<-2)
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Jennifer&page=2","timestamp":"2014-04-21T05:57:54Z","content_type":null,"content_length":"33527","record_id":"<urn:uuid:7ccf38d6-8cb1-468d-844d-5646ece00f49>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Area of an Ellipse
October 1st 2008, 08:21 PM
Area of an Ellipse
x^2/16 + y^2/0=1
Find the function y=f(x) that gives the curve bounding the top of the ellipse.
Use deltax=1 and midpoints to approximate the area of the part of the ellipse lying in the 1st quadrant
Approximate the total area (which I assume is just c x 4)
October 1st 2008, 10:36 PM
Chris L T521
I think you meant 9 for the denominator in the $y^2$ term...for you were originally dividing by zero...
First, solve for y:
$\frac{y^2}{9}=1-\frac{x^2}{16}\implies y^2=\tfrac{9}{16}(16-x^2)\implies y=\pm\tfrac{3}{4}\sqrt{16-x^2}$
Since we are told to find the area under the ellipse in the first quadrant, we take $y=f(x)=\tfrac{3}{4}\sqrt{16-x^2}$
Now, we are to use a midpoint Riemann Sum:
We are told that $\Delta x=1$, and the ellipse's semi-major axis lies along the x-axis and has a length of 4 units. Thus, we are applying the Riemann sum over the interval $[0,4]$.
What is the midpoint of each section?
Our midpoint x values are $\tfrac{1}{2},~\tfrac{3}{2},~\tfrac{5}{2},\text{ and }\tfrac{7}{2}$
So our Riemann sum can be expressed as $\sum_{k=1}^{4}f\left[\tfrac{1}{2}(2k-1)\right]\Delta x$. Evaluating, we see that we have:
$A=\tfrac{3}{4}\left[\sqrt{16-\left(\tfrac{1}{2}\right)^2}+\sqrt{16-\left(\tfrac{3}{2}\right)^2}+\sqrt{16-\left(\tfrac{5}{2}\right)^2}+\sqrt{16-\left(\tfrac{7}{2}\right)^2}\right]\approx 9.55$
Thus, the total area would be 4 times this amount.
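Evaluating the midpoint sum numerically (a sketch; it also compares against the exact quarter-ellipse area $3\pi$):

```python
import math

def f(x):
    return 0.75 * math.sqrt(16 - x * x)   # top half of x^2/16 + y^2/9 = 1

midpoints = [0.5, 1.5, 2.5, 3.5]          # dx = 1 on [0, 4]
A = sum(f(x) for x in midpoints) * 1.0    # quarter-ellipse estimate
print(A, 4 * A)                           # compare with 3*pi and 12*pi
```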
Does this make sense? | {"url":"http://mathhelpforum.com/pre-calculus/51633-area-ellipse-print.html","timestamp":"2014-04-21T14:11:10Z","content_type":null,"content_length":"7686","record_id":"<urn:uuid:458aa743-8648-4114-827f-57d23a3740d1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
PreClass_05 (email me your answers by 12 noon Thursday ... taylormp@hiram.edu)
Reading assignment: Styer handout, (Chapts. 1-6)
Please provide brief answers to the following:
1) What is the difference between explanation and description? What role do each of these play in science?
2) What is the Stern-Gerlach experiment and why are the results surprising? What does Styer mean by the "conundrum of projections"?
3) If you flip one coin four times in a row, what is the probability that you will get all heads? What is the probability that you will get exactly two heads?
4) Explain the differences/similarities between the EPR experiment as described by Nick Herbert (photon pair-calcite detectors) and Dan Styer (atom pair-SG detector).
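For question 3, the binomial count can be checked quickly (a sketch, assuming a fair coin):

```python
from math import comb

p_all_heads = 0.5 ** 4                  # one outcome out of 2^4 = 16
p_two_heads = comb(4, 2) * 0.5 ** 4     # C(4,2) = 6 equally likely ways
print(p_all_heads, p_two_heads)
```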
Everyone also needs to send me a specific question you had about the reading. | {"url":"http://home.hiram.edu/physics/QReality/PreClass_05.htm","timestamp":"2014-04-17T07:44:44Z","content_type":null,"content_length":"5562","record_id":"<urn:uuid:6f9e63aa-52eb-4110-b6a0-924c05925dcd>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00002-ip-10-147-4-33.ec2.internal.warc.gz"} |
Citation for the Award of the Poincaré Prize 2006 to Edward Witten
Read on Arthur Jaffe's behalf at the IAMP meeting in Rio on 9 August 2006
Edward Witten is in the midst of an enormously productive career as a mathematical physicist. Born in 1951 in Baltimore, he began his undergraduate studies by majoring in history. Edward certainly
had the opportunity for prior exposure to sophisticated physics as his father Louis is a noted expert on relativity and gravitation. After his undergraduate studies, Edward returned to physics,
working with David Gross at Princeton, and receiving his doctorate in 1976.
Edward’s early work left an immediate impression on experts. He discovered a new class of instanton solutions to the classical Yang-Mills equations, very much a central subject at the time. He
pioneered work on field theories with N components and the associated “large-N limit” as N tends to infinity. Three years later as a Junior Fellow at Harvard he had already established a solid
international reputation—both in research and as a spell-binding lecturer. That year several major physics departments took the unusual step, at the time an extraordinary one, to attempt to recruit a
young post‑doctoral fellow to join their faculty as a full professor! At that point Edward returned to Princeton with Chiara Nappi, my post-doctoral fellow and Edward’s new wife. Edward has been in
great demand ever since.
Edward already became well-known in his early work for having keen mathematical insights. He re-interpreted Morse theory in an original way and related the Atiyah-Singer index theorem to the concept
of super-symmetry in physics. These ideas revolved around the classical formula expressing the Laplace-Beltrami operator in terms of the de Rham exterior derivative d, namely Δ = (d+d*)^2. This
insight was interesting in its own right. But it inspired his applying the same ideas to study the index of infinite-dimensional Dirac operators D and the self-adjoint operator Q = D+D*, known in
physics as super-charges, related to the energy H by ita representation as the square of Q analogous to the formula for Δ. This led to the name “ Witten index” for the index of D, a terminology that
many physicists still use.
In 1981 Witten also discovered an elegant approach to the positive energy theorem in classical relativity, proved in 1979 by Schoen and Yau. What developed as Witten’s hallmark is the insight to
relate a set of ideas in one field to an apparently unrelated set of ideas in a different field. In the case of the positive energy theorem, Witten again took inspiration from super-symmetry to
relate the geometry of space-time to the theory of spin structures and to an identity due to Lichnerowicz. The paper by Witten framed the new proof in a conceptual structure that related it to old
ideas and made the result immediately accessible to a wide variety of physicists and mathematicians.
In 1986 Witten had a spectacular insight, giving a quantum-field theory interpretation of Vaughan Jones’ recently-discovered knot invariant. Witten showed that the Jones polynomial for a knot can
be interpreted as the expectation of the parallel transport operator around the knot in a theory of quantum fields with a Chern-Simons action. This work set the stage for many other geometric
invariants, including the Donaldson invariants, being regarded as partition functions or expectations in quantum field theory. In most of these cases, the mathematical foundations of the functional
integral representations can still not be justified, but the insights and understanding of the picture will motivate work for many years in the future.
With the resurgence of “super-string theory” in 1984, Witten quickly became one of its leading exponents and one of its most original contributors. His 1987 monograph with Green and Schwarz became
the standard reference in that subject. Later Witten unified the approach to string theory by showing that many alternative string theories could be regarded as different aspects of one grand theory.
Witten also pioneered the interpretation of symmetries related to the electromagnetic duality of Maxwell’s equations, and its generalization in field theory, gauge theory, and string theory. He
pioneered the discovery of SL(2,Z) symmetry in physics, and brought concepts from number theory, as well as geometry, algebra, and representation theory centrally into physics.
In understanding Donaldson theory in 1995 Seiberg and Witten formulated the equations named after them which have provided so much insight into modern geometry. With the advent of this point of view
and fueled by its rapid dissemination over the internet, many geometers saw progress in their field proceed so rapidly that they could not hope to keep up.
Not only is Witten’s own work in the field of super-symmetry, string theory, M-theory, dualities and other symmetries of physics legend, but he has trained numerous students and postdoctoral
coworkers who have come to play leading roles in string theory and other aspects of theoretical physics.
I could continue on and on about other insights and advances made or suggested by Edward Witten. But perhaps it is just as effective to mention that for all his mentioned and unmentioned work, Witten
has already received many national and international honors and awards. These include the Alan Waterman award in 1986, the Fields Medal in 1990, the CMI Research Award in 2001, the U.S. National
Medal of Science in 2002, and an honorary degree from Harvard University in 2005. Witten is a member of many honorary organizations, including the American Philosophical Society and the Royal
Society. While Witten may not need any additional recognition, it is an especially great personal pleasure and honor, as one of the original founders of IAMP, to present Edward Witten to receive the
Poincaré prize in 2006.
Arthur Jaffe
Harvard University | {"url":"http://www.arthurjaffe.com/Assets/documents/Witten%20Citation.htm","timestamp":"2014-04-21T00:37:23Z","content_type":null,"content_length":"6685","record_id":"<urn:uuid:6ea38b83-67b6-4705-9bd3-086212de090f>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00519-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prove that radius of convergence is at most 1
February 29th 2012, 02:03 PM
Prove that radius of convergence is at most 1
I need some help starting a proof that asks the following.
Suppose that the coefficients of the power series [Summation of a_n * z^n] are integers, infinitely many of which are distinct from zero. Prove that the radius of convergence is at most 1.
This question is 10 in baby rudin's chapter 3.
Any help and hints as to how to start this problem will be greatly appreciated.
March 2nd 2012, 01:47 AM
Re: Prove that radius of convergence is at most 1
If the radius of convergence is strictly greater than $1$, then the series converges at $z=1$, so $a_n \to 0$; since the $a_n$ are integers, they are $0$ for $n$ large enough, contradicting the hypothesis.
Student Support Forum: 'basic function definition with sums/lists' topic
Author Comment/Response
I am a beginner, and I'm sure this is easy for you veterans out there, but...
I'm working on an optimization program with a number of design variables. My question is how to define a function of a number of lists.
I have lists of constants involved in the system, along with a single list of the design variables. I am trying to minimize an objective function which is two sums. I have each of the lists
indexed to the sum iterators, and when I simply use f=(my function), I get a correct answer for my test case. But, I cannot seem to find the correct format/syntax for the form, f[x_, y_, z_,
a_]:= (myfunction).
I will continue to pore over the Mathematica book, but I would appreciate any help.
I've attached a portion of a notebook.
Attachment: help note!!.nb
defining operator+
If you have operator+( const T& lhs, int rhs ) defined, then
T operator+( int lhs, const T& rhs )
{ return rhs + lhs; }
BTW, boost has already solved all these problems. Consider using boost::operators
to generate all these. For example:
#include <boost/operators.hpp>

class MyBigInt : private boost::ordered_euclidean_ring_operators<MyBigInt>,
                 private boost::ordered_euclidean_ring_operators<MyBigInt, int>
{
public:  // the operators must be accessible to the boost-generated friends
    friend bool operator==( const MyBigInt& lhs, const MyBigInt& rhs );
    friend bool operator<( const MyBigInt& lhs, const MyBigInt& rhs );
    MyBigInt& operator+=( const MyBigInt& rhs );
    MyBigInt& operator-=( const MyBigInt& rhs );
    MyBigInt& operator*=( const MyBigInt& rhs );
    MyBigInt& operator/=( const MyBigInt& rhs );

    friend bool operator==( const MyBigInt& lhs, int rhs );
    friend bool operator<( const MyBigInt& lhs, int rhs );
    MyBigInt& operator+=( int rhs );
    MyBigInt& operator-=( int rhs );
    MyBigInt& operator*=( int rhs );
    MyBigInt& operator/=( int rhs );
};
(Note that private inheritance is preferred because boost declares all the remaining functions
as friends, and you probably don't want people doing polymorphic upcasts to
boost::ordered_euclidean_ring_operators<MyBigInt> and such).
If you implement the above methods, then you should get all other permutations for free
(through the inheritance) that allow operation on two MyBigInts or a MyBigInt and an int.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/general/29444/","timestamp":"2014-04-16T19:05:26Z","content_type":null,"content_length":"22062","record_id":"<urn:uuid:a326fcd5-62ff-41ff-b420-a053522ae753>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00316-ip-10-147-4-33.ec2.internal.warc.gz"} |
Portability portable
Stability experimental
Maintainer Edward Kmett <ekmett@gmail.com>
NB: this contradicts another common meaning for an Associative Category, which is one where the pentagonal condition does not hold, but for which there is an identity.
class Bifunctor p k k k => Associative k p where
A category with an associative bifunctor satisfying Mac Lane's pentagonal coherence identity law:
bimap id associate . associate . bimap associate id = associate . associate
Associative Hask Either
Associative Hask (,)
Associative Hask (Const2 t)
Coassociative Hask p => Associative Hask (Flip p)
class Bifunctor s k k k => Coassociative k s where
A category with a coassociative bifunctor satisfying the dual of Mac Lane's pentagonal coherence identity law:
bimap coassociate id . coassociate . bimap id coassociate = coassociate . coassociate
Coassociative Hask Either
Coassociative Hask (,)
Coassociative Hask (Const2 t)
Associative Hask p => Coassociative Hask (Flip p) | {"url":"http://hackage.haskell.org/package/category-extras-0.53.3/docs/Control-Category-Associative.html","timestamp":"2014-04-21T02:41:18Z","content_type":null,"content_length":"7247","record_id":"<urn:uuid:4197427f-84cb-41d1-8bbf-7c1c5551b359>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math 115
Quiz 2 Name: key
YOU MUST SHOW ALL WORK TO RECEIVE CREDIT
In family voting, Mom (M) gets 3 votes, Dad (D) gets 2 votes and Junior (J) gets 1 vote.
1. Find the smallest and largest possible quotas for this weighted voting system.
Since there are a total of 6 votes, a majority must be greater than 3, so the quota must be at least 4. The largest possible quota is when all votes are required, so 6.
2. List all possible coalitions for the family voting system, and give the weight of each.
There are 7 possible coalitions: {M}, {D}, {J}, {M,D}, {M,J}, {D,J}, {M,D,J}
3. List all sequential coalitions for the family voting system.
There are 6: {M,D,J}, {M,J,D}, {D,M,J}, {D,J,M}, {J,M,D}, {J,D,M}
4. Find the Banzhaf Power Index for each family member if the quota is set to 4.
{M} = 3 votes
{D} = 2 votes
{J} = 1 vote
{M,D} = 5 votes
{M,J} = 4 votes
{D,J}=3 votes
{M,D,J}=6 votes
The winning coalitions (those with weight at least the quota of 4) are {M,D}, {M,J}, and {M,D,J}; only these can have critical members.
In {M,D}, both M and D are critical; in {M,J}, both M and J are critical; in {M,D,J}, only M is critical (removing D or J still leaves at least 4 votes).
M is critical 3 times, while D and J are critical once each. So M has 3/5 = 60% of the power, while D and J each have 1/5 = 20% of the power.
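The critical-member count can be verified by brute force over all coalitions (a sketch):

```python
from itertools import combinations

weights = {"M": 3, "D": 2, "J": 1}
quota = 4

critical = {p: 0 for p in weights}
members = list(weights)
for size in range(1, len(members) + 1):
    for coalition in combinations(members, size):
        total = sum(weights[p] for p in coalition)
        if total < quota:
            continue                      # not a winning coalition
        for p in coalition:               # p is critical if removing p loses
            if total - weights[p] < quota:
                critical[p] += 1

total_swings = sum(critical.values())
power = {p: critical[p] / total_swings for p in weights}
print(critical, power)
```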
Extra Credit: Is it possible to pick a quota that results in Junior being a dummy? If so, find one. If not, explain why not. In either case, carefully explain your answer in 1 or 2 complete
English sentences.
Yes, if the quota is 5 there are only 2 winning coalitions: {M,D} and {M,D,J}. J is not a member of the first winning coalition, and is not critical in the 2^nd, so he is never critical, making him
a dummy. | {"url":"http://www.montgomerycollege.edu/~rpenn/201420/115/q2a.htm","timestamp":"2014-04-19T05:22:50Z","content_type":null,"content_length":"23705","record_id":"<urn:uuid:b9e78d3d-66ed-4c7a-bcb9-d7321f72da76>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00616-ip-10-147-4-33.ec2.internal.warc.gz"} |
$122 and you make a $19 profit on each one. You can order no more than 120 printers this month, and you need to make at least $2,400 profit on them. If you must order at least one of each type of
printer, how many of each type of printer should you order if you want to minimize your cost? 69 of type A : 51 of type B 40 of type A : 80 of type B 51 of type A : 69 of type B 80 of type A : 40 of
type B
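Since whole printers are ordered, question 2 can be brute-forced over the feasible integer points (a sketch; x is the number of type A printers and y the number of type B):

```python
# Minimize cost 237x + 122y subject to:
#   x + y <= 120 (order limit), 22x + 19y >= 2400 (profit), x, y >= 1
best = None
for x in range(1, 120):
    for y in range(1, 121 - x):
        if 22 * x + 19 * y >= 2400:
            cost = 237 * x + 122 * y
            if best is None or cost < best[0]:
                best = (cost, x, y)
print(best)   # (minimum cost, type A count, type B count)
```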
{"url":"http://openstudy.com/updates/50759fbde4b009782ca59614","timestamp":"2014-04-19T02:19:01Z","content_type":null,"content_length":"35547","record_id":"<urn:uuid:0d5cc4d4-0c84-4d02-939c-24c9c8a41f65>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Array variable transformation system employing subscript table mapping to scalar loop indices - International Business Machines Corporation
1. Field of the Invention
This invention relates generally to a program compiler optimization method and particularly to a technique for generating efficient object code for intrinsic Fortran 90 array variable transformation
2. Description of the Related Art
Recently, the X3J3 subcommittee of the American and National Standards Institute (ANSI), in collaboration with a corresponding International Standards Organization (ISO) group ISO/IEC JTC1/SC22/WG5,
approved a new standard for the Fortran programming language. This new Fortran programming language standard is generally denominated the "Fortran 90" language standard and is also in the art
denominated the "Fortran 90 Array" language standard. While maintaining compatibility with and providing support for the previous "FORTRAN 77" language standard, this new Fortran 90 Array language
standard defines many new programming constructs and functions.
Among these new features are the "array language" protocols. Fortran programs in the Fortran 90 language may specify operations to be performed on entire arrays or on specific sections of arrays. To
facilitate these new array operations, the Fortran 90 standard defines a new class of intrinsic array functions denominated "transformational functions". The Fortran 90 standard is promulgated by the
ISO as the International Fortran Standard specification number ISO/IEC 1539:1991 and is promulgated by the ANSI as specification number ANSI X3.198-199x.
The new features promulgated in the Fortran 90 language standard create new challenges for existing Fortran compiler and preprocessor technology. The existing FORTRAN 77 compilers do not address the
array transformational functions and must be completely rewritten and restructured to accommodate the Fortran 90 standard. The new problems created by Fortran 90 array constructs can be appreciated
with reference to the FORTRAN compiler art.
FIG. 1 illustrates a procedure for translating a FORTRAN program 10 to create an executable binary object program 12. A lexical/syntax analysis 14 is conducted to transform source program 10 to a
first intermediate language program 16. First intermediate language program 16 is then processed by an optimization routine 18 to create a second intermediate language program 20, which is then
directly interpreted by the code generation routine 22 to create object program 12.
Lexical/syntax analysis routine 14 and code generation routine 22 are easily defined in terms of the Fortran 90 specification and the machine binary code set, respectively. Thus, it is optimization
routine 18 that is primarily affected by the new Fortran 90 standard. Optimization routine 18 is illustrated in FIG. 2 as it is understood in the art. Optimization processing is achieved by first
performing a control flow analysis in routine 24 of first intermediate language 16. Control flow analysis routine 24 provides the control flow data 26, which are then passed to a data-flow analysis
routine 28 wherein first intermediate language program 16 is analyzed for data flow. Data-flow analysis routine 28 produces the data-flow data 30. Finally, a program transformation procedure 32
accepts control flow data 26, data-flow data 30 and first intermediate language program 16 to produce second intermediate language program 20.
Many methods for analyzing relationships between definitions and uses of variables and arrays are known in the art. For instance, in U.S. Pat. No. 4,773,007, Yasusi Kanada et al. disclose a program
translation method for obtaining appropriate array definition and use relationships in a DO-loop that contains a conditional statement or control structure. Kanada et al. teach a process where data
flow analysis procedure 28 checks for the presence of intra-loop changes to array variables before passing control to program transformation procedure 32. In their method, program transformation
procedure 32 is executed only if the array definition/use relationship data indicate that the elements of an array variable will be rewritten within a loop. Kanada et al. suggest comparing subscripts
associated with the array definition and the array use to test this indication.
In U.S. Pat. No. 4,833,606, Kyoko Iwasawa et al. disclose a compiling method for vectorizing multiple DO-loops. Their method detects variables that are defined in one loop and referenced by another
and maps the variable information into a dependency graph that is used to analyze data dependencies of each loop level. Iwasawa et al. disclose a compiler procedure that inserts control statements to
assure preservation of initial and end values for the loops, thereby minimizing the size of the working (temporary) arrays. The object of this method is to make it possible to perform vector
operations for an outer loop by transforming variables into arrays, permitting the variables having values defined in the outer loop to be used in the inner loop and also permitting these variables
with values defined in the inner loop to be replaced by the arrays so that the vectorization process can be performed for the outer loop. Iwasawa et al. teach a method for detecting connected
components linked together by an arc in a data dependency graph indicative of the sequence of definitions and use of variables in multiple loops.
Similarly, in U.S. Pat. No. 5,109,331, Kazuhisa Ishida et al. disclose a method for source program compilation by analyzing a subscript in an array variable included in a loop. They optimize program
execution by employing an "induction variable" represented by a standard form expressed by an initial value in a first loop iteration and an increment value for each subsequent loop iteration. A
subscript in a loop array is represented by linear coupling to the standard form. Subscript independency and dependency within the loop is tested during compilation. Basically, Ishida et al. search
for identical array elements having different names and force them to the identical storage location in the executable binary code, thereby saving memory and processing steps.
Thus, practitioners in the art generally employ "subscript tables" during compilation of array variables. A subscript table is a data structure commonly employed in the optimization process 18 (FIG.
1) and consists of a two-dimensional array containing elements of type integer and pointers to expressions that collectively encode all of the information pertaining to the subscript expressions
and their enclosing DO-loops. Practitioners in the compiler art have developed formal "dependency-analysis" procedures using constructs such as subscript tables to decompose nested DO-loops into
parallel strings suitable for simultaneous execution in multi-processor arrays.
For instance, Zhiyuan Li et al. ("Program Parallelization with Interprocedural Analysis", The Journal of Supercomputing, vol. 2, pp 225-44, Kluwer Academic Publishers, Boston, Mass., 1988) provide a
useful general discussion of interprocedural analysis for parallel computing that introduces several useful formal concepts related to the use of subscript tables in dependency analysis. Li et al.
examine several methods for interprocedural data dependency analysis, including "atom images". The atom images method is useful for resolving cases of difficult and inefficient data dependency.
Also, E. D. Kyriakis-Bitzaros et al. ("An Efficient Decomposition Technique for Mapping Nested Loops with Constant Dependencies into Regular Processor Arrays", Journal of Parallel and Distributed
Computing, vol. 16, pp. 258-264, Academic Press, Inc., 1992) discuss a method for mapping nested loops with constant dependencies into distributed memory multiprocessors. Kyriakis-Bitzaros et al.
discuss "loop index space", which is a concept used in compiler optimization that leads to improved binary code efficiency. They introduce the "Augmented Dependence Graph" (ADG) as a device for
separating different variables having equal indices in multiple nested loop statements.
Despite extensive effort in the compiler art related to array variable and nested DO-loop optimization, the array transformation functions introduced by the new Fortran 90 standard bring with them
new inefficiencies in storage and processing and there is an accordingly clearly-felt need in the art for more efficient compiling procedures suitable for application to these new intrinsic Fortran
90 array transformation functions.
This problem can be better understood by considering an example. The Fortran 90 standard specification of the SPREAD function is provided in TABLE 1 below and the Fortran 90 specification of the
TRANSPOSE function is provided in TABLE 2 below.
TABLE 1
13.13.101 SPREAD (SOURCE, DIM, NCOPIES)
Description. Replicates an array by adding a dimension. Broadcasts
several copies of SOURCE along
a specified dimension (as in forming a book from copies of a single page)
and thus forms an array of
rank one greater.
Class. Transformational function.
SOURCE may be of any type. It may be scalar or array valued. The rank of SOURCE
must be less than 7.
DIM must be scalar and of type integer with value in the range 1
≤ DIM ≤ n + 1,
where n is the rank of SOURCE.
NCOPIES must be scalar and of type integer.
Result Type, Type Parameter, and Shape. The result is an array of the
same type and type parameters
as SOURCE and of rank n + 1, where n is the rank of SOURCE.
Case (i):
If SOURCE is scalar, the shape of the result is (MAX (NCOPIES, 0)).
Case (ii):
If SOURCE is array valued with shape (d[1], d[2], . . .,
d[n]), the shape is (d[1], d[2], . . ., d[DIM-1],
MAX (NCOPIES, 0), d[DIM], . . ., d[n]).
Result Value.
Case (i):
If SOURCE is scalar, each element of the result has a value equal
to SOURCE.
Case (ii):
If SOURCE is array valued, the element of the result with
subscripts (r[1], r[2], . . ., r[n+1])
has the value SOURCE (r[1], r[2], . . ., r[DIM-1],
r[DIM+1], . . ., r[n+1]).
if NC has the value 3 and is a zero-sized array if NC has the value 0.
TABLE 2
13.13.109 TRANSPOSE (MATRIX)
Description. Transposes an array of rank two.
Class. Transformational function.
MATRIX may be of any type and must have rank two.
Result type, Type Parameters, and Shape. The result is an array of the
same type and type
parameters as MATRIX and with rank two and shape (n, m) where (m, n) is
the shape of MATRIX.
Result Value.
Element (i, j) of the result has the value MATRIX (j, i), i = 1, 2, .
. ., n;
j = 1, 2, . . ., m.
Consider the following exemplary Fortran 90 program. ##EQU1##
Examination of the semantics of the SPREAD function in TABLE 1 suggest the Fortran 90 program translation provided below in TABLE 3.
TABLE 3
REAL A(100, 100), B(100), C(100, 100, 100) integer new[-]- loop[-]- 1 integer new[-]- loop[-]- 2 integer new[-]- loop[-]- 3 real temporary[-]- array[-]- 1(100, 100, 100) real temporary[-]- array[-]-
2(100, 100) real temporary[-]- array[-]- 3(100, 100, 100) do new[-]- loop[-]- 3 = 1,100 do new[-]- loop[-]- 2 = 1,100 do new[-]- loop[-]- 1 = 1,100 temporary[-]- array[-]- 1 (new[-]- loop[-]- 1, new
[-]- loop[-]- 2, new[-]- loop[-] - 3) = * A(new[-]- loop[-]- 1, new[-]- loop[-]- 3) enddo enddo enddo do new[-]- loop[-]- 2 = 1,100 do new[-]- loop[-]- 1 = 1,100 temporary[-]- array[-]- 2 (new[-]-
loop[-]- 1, new[-]- loop[-]- 2) = B(new[] -- loop[-]- 1) enddo enddo do new[-]- loop[-]- 3 = 1,100 do new[-]- loop[-]- 2 = 1,100 do new[-]- loop[-]- 1 = 1,100 temporary[-]- array[-]- 3 (new[-]- loop
[-]- 1, new[-]- loop[-]- 2, new[-]- loop[-] - 3) = * temporary[-]- array[-]- 2(new[-]- loop[-]- 2, new[-]- loop[-]- 3) enddo enddo enddo do new[-]- loop[-]- 3 = 1,100 do new[-]- loop[-]- 2 = 1,100 do
new[-]- loop[-]- 1 = 1,100 C(new[-]- loop[-]- 1, new[-]- loop[-]- 2, new[-]- loop[] -- 3) = * temporary[-]- array[-]- 1 (new[-]- loop[-]- 1, new[-]- loop[-]- 2, new[-]- loop[-]- 3) + * temporary[] --
array[-]- 3 (new[-]- loop[-]- 1, new[-]- loop[-]- 2, new[-]- loop[-]- 3) enddo enddo enddo END
The intermediate program in TABLE 3 contains a total of 11 DO-loops herein denominated "scalarized loops". Note that three temporary arrays, occupying 2,010,000 storage elements, are created by the
scalarizing translation process. The translation approach leading to the program in TABLE 3, although easy for the Scalarizer component of a compiler or preprocessor, is intolerably inefficient at
execution time in both storage space and execution steps. Nevertheless, this approach is the only translation approach known in the art for the new SPREAD function introduced by the Fortran 90
standard. Similar problems are known for the usual translation procedures applied to the TRANSPOSE function shown above in TABLE 2 as well as the CSHIFT function specified below in TABLE 4 and the
EOSHIFT function specified below in TABLE 5.
TABLE 4
13.13.25 CSHIFT (ARRAY, SHIFT, DIM)
Optional Argument. DIM
Description. Perform a circular shift on an array expression of rank one
or perform circular
shifts on all the complete rank one sections along a given dimension of
an array expression
of rank two or greater. Elements shifted out at one end of a section are
shifted in at the
other end. Different sections may be shifted by different amounts and in
different directions.
Class. Transformational function.
ARRAY may be of any type. It must not be scalar.
SHIFT must be of type integer and must be scalar if ARRAY has rank one;
otherwise, it must be scalar or of rank n - 1 and of shape
(d[1], d[2], . . ., d[DIM-1], d[DIM+1], . . ., d[n]) where
(d[1], d[2], . . ., d[n]) is the shape of ARRAY.
DIM must be a scalar and of type integer with a value in the range
1 ≤ DIM ≤ n, where n is the rank of ARRAY. If DIM is
omitted, it is
as if it were present with the
value 1.
Result Type, Type parameter, and Shape. The result is of the type and
type parameters
of ARRAY, and has the shape of ARRAY.
Result Value.
Case (i):
If ARRAY has rank one, element i of the result is ARRAY
(1 + MODULO (i + SHIFT - 1, SIZE (ARRAY))).
Case (ii):
If ARRAY has rank greater than one, section (s[1], s[2], . .
., s[DIM-1], :,
s[DIM+1], . . ., s[n]) of the result has a value equal to
CSHIFT (ARRAY (s[1], s[2],
. . ., s[DIM-1], :, s[DIM+1], . . ., s[n]), sh, 1), where
sh is SHIFT or SHIFT (s[1],
s[2], . . ., s[DIM-1], s[DIM+1], . . ., s[n]).
Case (i):
If V is the array [1, 2, 3, 4, 5, 6], the effect of shifting V
circularly to the
left by two positions is achieved by CSHIFT (V, SHIFT = 2) which
has the value [3, 4, 5, 6, 1, 2]; CSHIFT (V, SHIFT = -2) achieves
circular shift to the right by two positions and has the value [5,
6, 1, 2, 3, 4].
Case (ii):
The rows of an array of rank two may all be shifted by the same
amount or by different amounts.
TABLE 5
13.13.32 EOSHIFT (ARRAY, SHIFT, BOUNDARY, DIM)
Optional Argument. BOUNDARY, DIM
Description. Perform an end-off shift on an array expression of rank one
or perform
end-off shifts on all the complete rank-one sections along a given
dimension of an
array expression of rank two or greater. Elements are shifted off at one
end of a section
and copies of a boundary value are shifted in at the other end. Different
sections may have
different boundary values and may be shifted by different amounts and in
different directions.
Class. Transformational function.
ARRAY may be of any type. It must not be scalar.
SHIFT must be of type integer and must be scalar if ARRAY has rank one;
otherwise, it
must be scalar or of rank n - 1 and of shape (d[1], d[2], .
. ., d[DIM-1],
d[DIM+1], . . ., d[n]) where (d[1], d[2], . . .,
d[n]) is the shape of ARRAY.
BOUNDARY must be of the same type and type parameters as ARRAY and must be
scalar if
ARRAY has rank one; otherwise it must be either scalar or of rank
n - 1
and of shape (d[1], d[2], . . ., d[DIM-1], d[DIM+1],
. . ., d[n]). BOUNDARY may be omitted for the data types in the
table and, in this case, it is as if it were present with the
scalar value
Type of ARRAY Value of BOUNDARY
Integer 0
Real 0.0
Complex (0.0, 0.0)
Logical false
Character (len) len blanks
DIM (optional)
must be scalar and of type integer with a value in the
range 1 ≤ DIM ≤ n, where n is the rank
of ARRAY. If DIM is omitted, it is as if it were present with the value 1.
Result Type, Type Parameter, and Shape. The result has the type, type
parameters, and shape of ARRAY.
Result Value. Element (s[1], s[2], . . ., s[n]) of the result
has the value ARRAY (s[1], s[2], . . ., s[DIM-1],
s[DIM] + sh, s[DIM+1], . . ., s[n])
where sh is SHIFT or SHIFT (s[1], s[2], . . ., s[DIM-1],
s[DIM+1], . . ., s[n]) provided the
inequality LBOUND (ARRAY, DIM) ≤ s[DIM] + sh ≤ UBOUND
(ARRAY, DIM) holds and is
otherwise BOUNDARY or BOUNDARY (s[1], s[2], . . ., s[DIM-1],
s[DIM+1], . . ., s[n]).
Case (i):
If V is the array [1, 2, 3, 4, 5, 6], the effect of shifting V
end-off to the left by 3
positions is achieved by EOSHIFT (V, SHIFT = 3) which has the
value [4, 5,
6, 0, 0, 0]; EOSHIFT (V, SHIFT = -2, BOUNDARY = 99) achieves an
end-off shift to the right by 2 positions with the boundary value
of 99 and has
the value [99, 99, 1, 2, 3, 4].
Case (ii):
The rows of an array of rank two may all be shifted by the same
amount or by
different amounts and the boundary elements can be the same or
different. If M
The millions of steps and millions of temporary storage elements required for the relatively simple application of the Fortran 90 SPREAD function discussed above in connection with TABLE 3 suggest
that there is a clearly-felt need in the art for improved optimization procedures for the intrinsic Fortran 90 array transformational functions. The related unresolved problems and deficiencies are
clearly felt in the art and are solved by this invention in the manner described below.
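The scale of the problem is easy to reproduce. The following Python sketch (an illustration only; the function name `spread` and the list-of-lists representation are assumptions for exposition, not part of the Fortran 90 standard or of this invention) materializes SPREAD for a rank-two source the way TABLE 3 does, and shows that every element of the temporary is merely a re-read of the source array:

```python
# Pure-Python illustration.  SPREAD along DIM=2 of a rank-two SOURCE only
# remaps subscripts: result(i, j, k) = SOURCE(i, k).

def spread(source, dim, ncopies):
    """Naively materialize SPREAD for a rank-2 list-of-lists, as TABLE 3 does."""
    n1, n2 = len(source), len(source[0])
    if dim == 2:
        # result shape (n1, ncopies, n2); result[i][j][k] == source[i][k]
        return [[[source[i][k] for k in range(n2)]
                 for _ in range(ncopies)] for i in range(n1)]
    raise NotImplementedError("only DIM=2 is sketched here")

A = [[1, 2], [3, 4]]
T = spread(A, 2, 3)
# Every copy along the new dimension is just the source re-read through a
# subscript map, so no temporary storage is semantically required.
assert all(T[i][j][k] == A[i][k]
           for i in range(2) for j in range(3) for k in range(2))
```

Because the temporary carries no information that is not already in the source, the storage and copying work of the naive translation is pure overhead.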
This invention eliminates the processing and storage inefficiencies of the compiled array functions by extending the Subscript Table (ST) dependency analysis techniques with a new Subscript Mapping
(SM) transform to restructure loop scalarizing procedures during compiler optimization processing. The essential object of this invention is to transform the ST from the beginning array state to the
ultimate array state without the explicit intermediate temporary array storage steps normally required in the art. This invention arises from the unexpectedly advantageous discovery that the new
Fortran 90 array transformations each remap the storage of array elements from an actual (source) to a final (object) array by way of one or more abstract (temporary) arrays. It is an advantageous
feature of this invention that a new SM transformation is introduced to convert the initial ST to a final ST without actually materializing the intermediate (abstract) array distributions.
It is yet another advantage of this invention that the remapping of the array Subscript Tables is no longer treated as an actual data transfer operation, thereby eliminating the storage space
requirement for temporary (abstract) arrays during execution.
The foregoing, together with other objects, features and advantages of this invention, will become more apparent when referring to the following specification, claims and the accompanying drawing.
For a more complete understanding of this invention, reference is now made to the following detailed description of the embodiments as illustrated in the accompanying drawing, wherein:
FIG. 1 shows a functional block diagram of an exemplary compiling method from the prior art;
FIG. 2 shows a functional block diagram of an exemplary compiling optimization method from the prior art;
FIG. 3 shows a functional block diagram of the Subscript Table mapping transformation method of this invention;
FIGS. 4A-4C provide exemplary Subscript Tables for an illustrative array variable transformation example;
FIGS. 5A-5C provide the Subscript Maps for the array variable transformation example of FIGS. 4A-4C;
FIGS. 6A-6C provide the transformed Subscript Maps for the array variable transformation example of FIGS. 4A-4C; and
FIG. 7 shows a functional block diagram of an exemplary embodiment of the compiling system of this invention.
The method of this invention extends the use of subscript tables in the processing of Fortran 90 transformational functions. As discussed above, a subscript table (ST) is a data structure commonly
used in the traditional optimizing Fortran compiler/preprocessor. STs are two-dimensional arrays consisting of elements of type integer and pointers to expressions that collectively encode all of the
information pertaining to subscript expressions and their enclosing DO-loops. A subscript expression may be reconstructed in its enclosing DO-loops from the ST.
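As a rough illustration of this idea (the dictionary representation below is an assumed simplification; a real ST is a two-dimensional structure that also carries pointers to subscript expressions), a subscript table can be reduced to a map from array dimension to enclosing scalarized loop number, from which the scalarized reference can be reconstructed:

```python
# Toy subscript tables for the program example: each table maps an array
# dimension to the scalarized DO-loop number that drives it.
ST_C = {1: 1, 2: 2, 3: 3}   # C(i, j, k) under three enclosing loops (FIG. 4A)
ST_A = {1: 1, 2: 2}         # A(i, j)                                (FIG. 4B)
ST_B = {1: 1}               # B(i)                                   (FIG. 4C)

def reference(name, st):
    """Reconstruct the scalarized array reference encoded by a subscript table."""
    return name + "(" + ", ".join(f"loop_{st[d]}" for d in sorted(st)) + ")"

assert reference("C", ST_C) == "C(loop_1, loop_2, loop_3)"
assert reference("A", ST_A) == "A(loop_1, loop_2)"
```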
For example, consider the following exemplary Fortran 90 program, which was discussed above in connection with TABLE 3. ##EQU2##
FIG. 4A provides a subscript table ST[c] that describes the subscript expression C from this program example. The reference to the array variable C in this Fortran 90 program example is equivalent to
a reference to C(i,j,k) where the three array sections correspond to three enclosing DO-loops in accordance with the mapping specified by ST[c] in FIG. 4A.
Similarly, FIG. 4B provides a subscript table ST[A] that describes the array variable A in the above program example. Note that the first two DO-loops in subscript table ST[A] correspond to the two
array sections in A(i,j) as shown. Similarly, FIG. 4C provides a subscript table ST[B] that defines the equivalency between the enclosing DO-loops and the single array section in array variable B.
An important feature of this invention is that the STs illustrated in FIGS. 4A-4C can be extended to incorporate the processing of the Fortran 90 Array Construction and Array Manipulation
transformation functions. This concept arises from the unexpectedly advantageous observation that certain Array Construction (such as SPREAD) and Array Manipulation (such as TRANSPOSE)
transformational functions merely remap the storage of array elements from an actual array variable to an abstract array. That is, the subscript expressions appearing in the abstract array are
expressible as functions of those appearing in the original array. These functions of this invention are herein denominated "subscript maps (SM's)".
With this observation, the traditional method for processing these classes of transformational functions, discussed above in connection with TABLE 3, can be reinterpreted as the "materialization" and
storage of the "abstract" arrays. Such reinterpretation also, for the first time, explains why the traditionally-generated code for these classes of transformational functions is so inefficient in
time and space requirements; that is, the traditional method does not consider or suggest any advantages arising from remapping of these array elements but instead uniformly processes such remapping
as real data transfer operations.
The key element of the method of this invention arises from the unexpected discovery that this concept of remapping can be formulated and simply encoded in the STs. The inventors herein introduce the
concept of a SM that maps the original scalarized loop numbers directly to a set of final Fortran 90 scalarized DO-loop numbers.
The procedure of this invention is now described. For every scalarized loop in a ST (e.g., FIGS. 4A-4C), there may be defined a new function herein denominated the "Subscript Map (SM)", which maps
the scalarized loop number into the (still abstract) final Fortran 90 scalarized DO-loop number. For example, consider FIGS. 5A-5C. FIG. 5A specifies a Subscript Map SM[c] that maps the old
scalarized loops into the new Fortran 90 DO-loops for the array variable C from the above program example. FIGS. 5B-5C provide Subscript Maps SM[A] and SM[B], which similarly map old loop numbers to
new DO-loop numbers for the array variables A and B from the above program example. The preferred mapping relationship is simply the identity function; that is, every scalarized loop number in the
subscript table is mapped to the same Fortran 90 final DO-loop number. The identity function is chosen for convenience and the method of this invention does not require any particular mapping function.
As the Array Construction and Array Manipulation transformational functions are processed, the subscript maps from FIGS. 5A-5C are revised according to a simple, predefined mapping rule derived from
the definition of the particular Fortran 90 transformational function. Thus, the SPREAD function gives rise to one predefined mapping rule and the TRANSPOSE function gives rise to another. For
instance, to process the SPREAD function, the corresponding predetermined mapping rule requires that all SM elements greater than or equal to the SPREAD function dimension (the DIM parameter) must be
incremented by one. This can be understood with reference to FIGS. 6A-6C for the above program example.
In FIG. 6A, the SPREAD (A, 2, 100) construct is found to require the second element of SM[A] to be incremented because it is equal to or greater than DIM=2. In FIG. 6B, no element of SM[B] is equal
to or greater than DIM=2 so there is no change to the subscript map for SPREAD (B, 2, 100). Finally, in FIG. 6C, the single SM element is incremented because it is equal to or greater than DIM=1 for
the compound function SPREAD (SPREAD (B, 2, 100), 1,100).
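The SPREAD mapping rule just described can be sketched in a few lines of Python (an illustration of the rule only, not the patented implementation; `spread_rule` is an assumed name). Applying it reproduces the transformed maps SM'[A] and SM'[B] of FIGS. 6A-6C:

```python
def spread_rule(sm, dim):
    """SPREAD's predetermined mapping rule: increment every SM element that is
    greater than or equal to the SPREAD dimension DIM."""
    return [m + 1 if m >= dim else m for m in sm]

# Identity maps for A(i, j) and B(i), as in FIGS. 5B-5C.
sm_A = spread_rule([1, 2], 2)                 # SPREAD(A, 2, 100): FIG. 6A
sm_B = spread_rule(spread_rule([1], 2), 1)    # SPREAD(SPREAD(B, 2, 100), 1, 100)
assert sm_A == [1, 3]   # A is read as A(loop_1, loop_3), as in TABLE 6
assert sm_B == [2]      # B is read as B(loop_2), as in TABLE 6
```

Note that nesting of transformational functions is handled simply by composing applications of the rule, as the second line shows.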
Note that these SM transformations are accomplished without the materialization of abstract arrays into temporary storage and without changes to the original STs (FIGS. 4A-4C). The results of the SM
transformations imposed by the predetermined mapping rule for the SPREAD operations in the above program example are shown as the transformed subscript maps SM'[A] and SM'[B] in FIGS. 6A-6C.
These transformed SMs are next employed to transform the STs discussed above in connection with FIGS. 4A-4C into new STs suitable for use in generating the final object code. The combination of new SMs
with old STs to create new STs is summarized below. ##EQU3##
From the above, the final translated code generation may be simply written as follows in TABLE 6 below.
TABLE 6
REAL A(100, 100), B(100), C(100, 100, 100) integer new[-]- loop[-]- 1 integer new[-]- loop[-]- 2 integer new[-]- loop[-]- 3 do new[-]- loop[-]- 3 = 1,100 do new[-]- loop[-]- 2 = 1,100 do new[-]- loop
[-]- 1 = 1,100 C(new[-]- loop[-]- 1, new[-]- loop[-]- 2, new[-]- loop[] -- 3) * A(new[-]- loop[-]- 1, new[-]- loop[-]- 3) + B(new[-]- loop[-]- 2) enddo enddo enddo END
Comparing TABLE 6 to TABLE 3 above, note that the procedure in TABLE 6 requires only three DO-loops instead of eleven, saving millions of instruction executions as well as saving 2,010,000 elements
of temporary storage during binary code execution. In general, no temporary array storage is required with the mapping transform compilation method of this invention.
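The equivalence of the two translations can be spot-checked numerically. The sketch below (plain Python used purely as executable pseudocode, with a small n in place of 100) follows TABLE 3 literally, with its three temporaries, and compares the result against the fused loop nest of TABLE 6:

```python
n = 4
A = [[(i + 1) * 10 + (k + 1) for k in range(n)] for i in range(n)]
B = [100 * (j + 1) for j in range(n)]

# Naive route, following TABLE 3 literally: three temporary arrays.
T1 = [[[A[l1][l3] for l3 in range(n)] for l2 in range(n)] for l1 in range(n)]
T2 = [[B[l1] for l2 in range(n)] for l1 in range(n)]
T3 = [[[T2[l2][l3] for l3 in range(n)] for l2 in range(n)] for l1 in range(n)]
C_naive = [[[T1[l1][l2][l3] + T3[l1][l2][l3] for l3 in range(n)]
            for l2 in range(n)] for l1 in range(n)]

# Fused route, following TABLE 6: one loop nest, no temporaries.
C_fused = [[[A[l1][l3] + B[l2] for l3 in range(n)]
            for l2 in range(n)] for l1 in range(n)]

assert C_naive == C_fused
```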
The method of this invention for processing the Fortran 90 Array Construction and Array Manipulation transformational functions using subscript tables and subscript mapping functions can be
generalized to functions other than the SPREAD function discussed above. In all such cases, the STs, together with the predetermined subscript mapping rules defining the SMs, provide a compact,
centralized, effective and efficient means for processing such functions by capturing the essence of the remapping of array element storage from the actual array to the abstract array in the form of a SM.
The predetermined mapping rule for the SPREAD function can be formally written as follows: ##EQU4##
The predetermined mapping rule for the TRANSPOSE function can be formally written as: SM'(1)=SM(2) SM'(2)=SM(1)
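In the same illustrative Python notation (again an expository sketch, not the patented implementation), the TRANSPOSE rule is a swap of the two map entries, and the code generated from the transformed map reads MATRIX(j, i) directly with no transposed temporary:

```python
def transpose_rule(sm):
    """TRANSPOSE's predetermined mapping rule: SM'(1) = SM(2), SM'(2) = SM(1)."""
    return [sm[1], sm[0]]

assert transpose_rule([1, 2]) == [2, 1]

# With the swapped map, element (i, j) of the result is read as MATRIX(j, i):
M = [[1, 2, 3], [4, 5, 6]]                            # shape (2, 3)
T = [[M[j][i] for j in range(2)] for i in range(3)]   # what codegen would emit
assert T[2][1] == M[1][2] == 6
```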
Mapping rules for other Array Manipulation transformational functions such as CSHIFT and EOSHIFT can be similarly defined. For example, if the shift value is S, then the resulting subscript
expression for DIM can generally be computed by incrementing the value of the Fortran 90 DO-loop variable corresponding to SM(DIM) by S. Some minor exceptions to this rule exist near the boundaries
of the array variable being shifted, where the SMs must be specially defined to conform with the particular semantics of the CSHIFT and EOSHIFT functions.
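The shifted-subscript computation for CSHIFT case (i) can be written out as follows (Python sketch; `cshift_index` is an assumed helper name). Note that Python's % operator, like Fortran's MODULO, returns a nonnegative result for a positive modulus, which is exactly the wrap-around behavior the circular shift requires:

```python
def cshift_index(i, shift, size):
    """1-based subscript for CSHIFT, per TABLE 4 case (i):
    result(i) = ARRAY(1 + MODULO(i + SHIFT - 1, SIZE(ARRAY)))."""
    return 1 + (i + shift - 1) % size

V = [1, 2, 3, 4, 5, 6]
left2  = [V[cshift_index(i,  2, 6) - 1] for i in range(1, 7)]
right2 = [V[cshift_index(i, -2, 6) - 1] for i in range(1, 7)]
assert left2  == [3, 4, 5, 6, 1, 2]   # CSHIFT(V, SHIFT = 2)
assert right2 == [5, 6, 1, 2, 3, 4]   # CSHIFT(V, SHIFT = -2)
```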
Loop interchange or loop reversal operations can sometimes be applied to improve code efficiency by avoiding the use of temporary arrays. When using either loop interchange or reversal, the subscript
tables are transformed accordingly to reflect the new subscript values.
FIG. 3 provides a functional block diagram of an illustrative embodiment of the processing method of this invention. After beginning, the first source array variable (e.g., A in the above-discussed
example) is selected in a selection step 34 and a J by J Subscript Table ST is generated using, preferably, the identity function in a ST creation step 36.
Without the method of this invention, the method of the existing art would then proceed directly to step 38 where the Subscript Table is immediately used to generate the scalarized DO-loops necessary
to load the abstract arrays to temporary storage and therefrom to step 40 for the generation and storage of the binary code. However, with the method of this invention, several additional steps are
provided in procedure 42 as shown in FIG. 3.
The first step of procedure 42 is the creation of a 1 by J subscript map SM for the selected source array variable in creation step 44. Subscript map SM is preferably generated by the identity
function from Subscript Table ST but may be created by another useful transformational function. After creation, Subscript Map SM is then transformed to SM' in accordance with the predetermined
mapping rule corresponding to the particular Fortran 90 transformation function being decoded at step 46. Finally, in step 48, the transformed Subscript Map SM' and the original Subscript Table ST
are combined to create a transformed K by K Subscript Table ST'. The last two steps 50 and 52 illustrate the situation where the Fortran 90 transformational functions are nested one within the other,
such as is the case in the above program example. First, the transformed Subscript Table ST' is renamed and then steps 44-48 are repeated for the exterior transformation function, assuming the
interior function to be an array variable represented by ST=ST'.
FIG. 7 provides a simple functional block diagram of a compiling system organized to operate in accordance with the method of this invention. A Central Processing Unit (CPU) 54 is coupled to a memory
56 in the usual manner. Within the memory 56 are several objects, including a source program object 58, the code object 60 and several objects associated with the method of this invention. Source
program 58 is coupled to a parsing object 62, which recognizes the particular Fortran 90 transformational function F and provides necessary information to a scalarizing object 64. Scalarizing object
64 generates the Subscript Table (ST) discussed above.
A transforming object 66 converts the initial Subscript Table ST[A] into a final Subscript Table ST[B] representing the final scalarized DO-loop such as shown above in TABLE 6. Within transforming
object 66 are a mapping object 68, which generates the Subscript Map SM[A], preferably by application of the identity function to Subscript Table ST[A] from object 64. Object 70 provides the
predetermined mapping rule RF associated with the function F identified in parsing object 62. Combining object 72 applies the predetermined mapping rule RF from object 70 to the SM[A] from object 68
to provide a transformed Subscript Map SM'[A], which is then employed to transform the ST[A] from object 64 into ST'[A], which is equivalent to the finally desired Subscript Table ST[B].
Final Subscript Table ST[B] from transforming object 66 is then provided to the encoder object 74, which generates the final scalarized DO-loops in the binary executable code required for code object 60.
The inventors conducted experiments designed to compare performance of the compiling method of this invention with the existing art. One such experiment used the following Fortran 90 program: ##EQU5##
The results of the measurements performed on an IBM RISC System/6000 Model 540 are provided in TABLE 7 below.
TABLE 7
Compiler Used User CPU Execution Time
NAGware f90 43.38 sec
IBM xlf (VAST-2) 3.87 sec
This Invention 1.80 sec
Although subscript tables are traditionally limited to data dependency analysis, the subscript mapping extension described herein permits subscript tables to be substantially extended to include the
important loop restructuring techniques such as loop reversal and loop interchange. Also, traditional data dependency analysis may be refined with this technique to consider the special roles assumed
by the scalarized loops, resulting in improved data dependence analysis accuracy.
Clearly, other embodiments and modifications of this invention will occur readily to those of ordinary skill in the art in view of these teachings. Therefore, this invention is to be limited only by
the following claims, which include all such embodiments and modifications when viewed in conjunction with the above specification and accompanying drawing.
Help with syntax to reverse a positive integer
I am new to Java.
I am learning with a book and online videos.
An assignment I am trying to complete requires me to read in a positive integer and to print out the integer in reverse.
Here's what I have so far.
Thanks in advance for any help, hints or suggestions :)
Java Code:
/*
 * File: ReverseDigits.java
 * -------------------
 * Programming exercise 7 from Page 97 (Chapter 4)
 * The Art and Science of Java by Eric S. Roberts
 */
import acm.program.*;

public class ReverseDigits extends ConsoleProgram {
    public void run() {
        println("This program reverses the digits in an integer.");
        int n = readInt("Enter a positive integer: ");
        int nReversed = 0;
        int digits = 0;
        int countdown = n; // I've duplicated (int n) cos I need (int n) again later
        while (countdown > 0) { // this while loop counts the digits in (int n)
            countdown /= 10;
            digits++;
        }
        /*
         * What I had in mind to do next was a for loop repeated the same amount
         * of times as there are digits in (int n),
         * then use n % 10 to get the last digit on its own.
         * If int n was, let's say, 1234,
         * I could say that (4E+3) plus (3E+2) plus (2E+1) plus (1E+0) would give me 4321.
         */
        int Eplus = digits;
        for (int i = 0; i < digits; i++) {
            nReversed += n % 10; // how do I do this bit? // nReversed += (n%10)E+(Eplus)
        }
        println("The reverse of the digits is " + nReversed);
    }
}
In your code, nReversed will be 10 (4+3+2+1).
You need to multiply each digit by the corresponding multiple of 10. In this case, you can use Math.pow() for instance :
Java Code:
for (int i = 0;i<digits;i++){
nReversed += n%10*(Math.pow(10, Eplus-1));
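Read literally, the snippet above shows only the one modified line. For the program to print 4321, the loop must also step Eplus down each pass (the next post confirms the answer did that) and strip a digit from n. A self-contained sketch of what the finished loop presumably looks like — the class and method names are mine, and the n /= 10 step is inferred, not shown in the quoted snippet:

```java
public class ReverseDigitsFull {

    // Assumed completion of the thread's Math.pow fix; the Eplus-- and
    // n /= 10 lines are inferred from context, not quoted from the post.
    static int reverseDigits(int n) {
        int digits = 0;
        for (int c = n; c > 0; c /= 10) digits++;   // count the digits, as the OP does

        int nReversed = 0;
        int Eplus = digits;
        for (int i = 0; i < digits; i++) {
            nReversed += n % 10 * (int) Math.pow(10, Eplus - 1);
            Eplus--;   // next digit gets one power of ten less
            n /= 10;   // drop the digit just consumed
        }
        return nReversed;
    }

    public static void main(String[] args) {
        System.out.println(reverseDigits(1234));   // 4321
    }
}
```

Note that Math.pow returns a double, which is why the (int) cast (or the implicit narrowing of the compound += assignment) is needed for the integer accumulator.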
Many thanks for your help; brilliant, works great :)
I couldn't work out why you used Eplus-1, but then I realised you moved the Eplus--; below the nReversed bit, whereas I had it above.
You would think they might have mentioned Math.pow before setting the task, eh!
All they have mentioned so far is scientific notation with the E+, which is why I was going along that way. I had tried assigning nReversed as a double, casting it as a double, and using all manner of parenthesis variations trying to get the syntax to work.
I saw Math.pow while I was looking around, but I thought I would have to import some math library to use it (we're only using ConsoleProgram at the moment).
Java Code:
String forwardString = Integer.toString(n);
for (int i = 0; i < forwardString.length(); i++) {
    System.out.print(forwardString.charAt(forwardString.length() - i - 1));
}
Java Code:
System.out.println(new StringBuffer(String.valueOf(n)).reverse());
Thanks Fubarable and Darryl,
I shall endeavour to play with those bits of code and see what they do.
The Math.pow was what I was aiming for; we haven't done anything with strings as yet.
Gee, you guys in here are so helpful, thanks very much.
While you're at it, consider this simple recursive solution:
Java Code:
private static int reverse(int n, int r) {
    if (n == 0) return r;
    return reverse(n/10, 10*r + n%10);
}

public static int reverse(int n) {
    return reverse(n, 0);
}

It doesn't use Strings nor Math.pow or whatever; all you need to do is call it with one parameter: reverse(number)
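For comparison with the recursion, the same accumulation can be written as a plain loop. This is a standard rewrite, not from the thread; the class name is mine:

```java
public class ReverseLoop {

    // Iterative twin of the recursive reverse(n, r): the local variable r
    // plays the role of the accumulator parameter in the recursion.
    public static int reverse(int n) {
        int r = 0;
        while (n > 0) {
            r = 10 * r + n % 10;  // shift accumulator left, append last digit
            n /= 10;              // drop the digit just consumed
        }
        return r;
    }

    public static void main(String[] args) {
        System.out.println(reverse(1234));  // prints 4321
    }
}
```

The loop makes the tail recursion explicit: each recursive call corresponds to one iteration updating the pair (n, r).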
kind regards,
Cute eh? Most of the time people forget (me included) that you can use additional parameters for all sorts of purposes while in the middle of recursive calls; e.g. you can pass entire Lists
to collect intermediate results or whatever; it greatly simplifies recursion and the used data structures.
kind regards,
Thanks Jos
That looks really simple although I hasten to add I'm not that sure exactly what it is doing however the maths bits look familiar.
some questions I have:
Why are there a public and a private method both called reverse? When it is called, how does it know which one to execute?
Also, where did (int r) come from?
The return command is something I have not yet encountered, and using static int to me means using a constant. Do please excuse my ignorance, but if you could outline what is happening in
simple English (my Dutch is worse than my Java :) ) I should be most grateful.
kind regards
The first method is private because I don't want anyone in the outside world to call this method; it only makes sense if it's called with the second parameter equal to zero. Nothing can
guarantee that, so I made a second method (that can be called by everyone) that takes care of it.
Please start reading Sun's Java Tutorial, you'll be glad you studied it afterwards. That 'return' business is an essential part of Java; you'll cripple the language without it and there is no
hocus pocus hidden in it.
kind regards,
I will look at that tutorial
I am currently following Stanford University's "CS106a Programming Methodology" course on YouTube, together with the course textbook The Art and Science of Java by Eric S. Roberts. I'm using
Eclipse because Stanford provide it on the course web site together with all the course handouts.
I have just finished Lecture 5 and Chapter 4 in the text.
Thanks again Jos. I've stuck that code into Eclipse and saved it with some of my other study tasks, and I'll come back to it when I've progressed some more. :)
kind regards
Yep, keep it for later and in the mean time play with Karel the Robot ;-) That Stanford course seems like fun (I just watched (part of) a few editions). The course starts off a bit slower
than the tutorials but both are good study material; have fun.
kind regards,
Okay, I'm up to the chapter and lecture on methods. Despite there being a lot of confusion going on in my head, I think I have figured this out. :D
Java Code:
private static int reverse(int n, int r) {
    if (n == 0) return r;
    return reverse(n/10, 10*r + n%10);
}

public static int reverse(int n) {
    return reverse(n, 0);
}
public static int reverse is called first, because (like my program) it takes the single variable (int n, or 1234).
The public method passes two variables to the private method: 1234, 0.
if (n == 0) return r;
This bit is like "stop now, I'm done, and the answer is r", and only one value is returned.
return reverse(n/10, 10*r+n%10) passes two variables back into the private method:
1234/10 = 123,  10*0 + n%10 = 4
then it goes again to the private method:
123/10 = 12,  10*4 + n%10 = 43
and again:
12/10 = 1,  10*43 + n%10 = 432
and once more:
1/10 = 0,  10*432 + n%10 = 4321
until n == 0 and r is the answer.
Jos, that is genius.
Rather than pass a List to be appended to, I prefer having the recursive method return a List and using addAll(...) to consolidate the results in one List. That way, the external caller
doesn't have to pass in a List to begin with. Of course, there could be a public method that calls the (private) recursive method with an appropriate List, but that just adds one more method.
Could you possibly expound on the advantages of passing the List over this approach? Thanks.
My approach can be seen in many of the methods of this class:
Swing Utils « Java Tips Weblog
Propagation of acoustic waves near an ocean surface with mean flow
ASA 127th Meeting M.I.T. 1994 June 6-10
3aUW10. Propagation of acoustic waves near an ocean surface with mean flow inhomogeneities.
Kai Ming Li
Eng. Mech. Discipline, Faculty of Technol., The Open Univ., Milton Keynes MK7 6AA, UK
There is considerable interest in the model of underwater sound propagation in a moving-stratified ocean as it has long been established that variations in current speed have significant effects on
the propagation of acoustic waves. This is due to the fact that these variations modify the sound-speed structure. This paper describes a mathematically rigorous method to study the propagation of
acoustic waves in a vertically stratified ocean with a mean flow. Much of the significant theoretical work in this field makes use of a high frequency approximation and the so-called plane-wave
ansatz. In this classical method, one substitutes the ansatz into the governing wave equation in order to determine an approximate solution. The first-order approximation leads to an eikonal equation
that defines the "rays", and the second-order approximation leads to a transport equation that gives the wave amplitude. However, a different approach is used in this paper. It is demonstrated that
the acoustic pressure can be represented by a twofold Fourier integral. The sound pressure is then estimated asymptotically by the method of stationary phase. The solution is particularly useful to
provide a better physical understanding of the problem at much reduced computational cost.
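For background, the ray-method objects named in the abstract take the following standard form for a time-harmonic field in a quiescent medium. These are textbook relations, not taken from the paper; in a moving medium the eikonal acquires additional flow-dependent terms, which is part of what the Fourier-integral approach sidesteps.

```latex
% High-frequency (ray) ansatz for the acoustic pressure:
%   p(\mathbf{x}) = A(\mathbf{x})\, e^{i \omega \tau(\mathbf{x})}
% Substituting into the Helmholtz equation and collecting powers of \omega:
\begin{align}
  |\nabla \tau|^{2} &= \frac{1}{c^{2}(\mathbf{x})}
    && \text{(eikonal equation: defines the rays)} \\
  2\, \nabla A \cdot \nabla \tau + A\, \nabla^{2} \tau &= 0
    && \text{(transport equation: gives the wave amplitude)}
\end{align}
```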
Quantum Foundations
This series consists of talks in the area of Foundations of Quantum Theory. Seminar and group meetings will alternate.
We begin with a fundamental approach to quantum mechanics based on the unitary representations of the group of diffeomorphisms of physical space (and correspondingly, self-adjoint representations of
a local current algebra). From these, various classes of quantum configuration spaces arise naturally.
Ideal measurements are described in quantum mechanics textbooks by two postulates: the collapse of the wave packet and Born's rule for the probabilities of outcomes. The quantum evolution of a
system then has two components: a unitary (Hamiltonian) evolution in between measurements and non-unitary one when a measurement is performed. This situation was considered to be unsatisfactory by
many people, including Einstein, Bohr, de Broglie, von Neumann and Wigner, but has remained unsolved to date.
I consider systems that consist of a few hot and a few cold two-level systems and define heat engines as unitaries that extract energy. These unitaries perform logical operations whose complexity
depends on both the desired efficiency and the temperature quotient. I show cases where the optimal heat engine solves a hard computational task (e.g. an NP-hard problem) [2]. Heat engines can also
drive refrigerators and use the temperature difference between two systems for cooling a third one. I argue that these triples of systems define a classification of thermodynamic resources [1].
Usually, quantum theory (QT) is introduced by giving a list of abstract mathematical postulates, including the Hilbert space formalism and the Born rule. Even though the result is mathematically
sound and in perfect agreement with experiment, there remains the question of why this formalism is a natural choice, and how QT could possibly be modified in a consistent way. My talk is on recent
work with Lluis Masanes, where we show that five simple operational axioms actually determine the formalism of QT uniquely. This is based to a large extent on Lucien Hardy's seminal work.
We present a new formulation of quantum mechanics for closed systems like the universe using an extension of familiar probability theory that incorporates negative probabilities. Probabilities must
be positive for alternative histories that are the basis of settleable bets. However, quantum mechanics describes alternative histories are not the basis for settleable bets as in the two-slit
experiment. These alternatives can be assigned extended probabilities that are sometimes negative. We will compare this with the decoherent (consistent) histories formulation of quantum theory.
The nature of antimatter is examined in the context of algebraic quantum field theory. It is shown that the notion of antimatter is more general than that of antiparticles. Properly speaking, then,
antimatter is not matter made up of antiparticles --- rather, antiparticles are particles made up of antimatter. We go on to discuss whether the notion of antimatter is itself completely general in
quantum field theory. Does the matter-antimatter distinction apply to all field theoretic systems? The
Recently rediscovered results in the theory of partial differential equations show that for free fields, the properties of the field in an arbitrarily small volume of space, traced through eternity,
determine completely the field everywhere at all times. Over finite times, the field is determined in the entire region spanned by the intersection of the future null cone of the earliest event and
the past null cone of the latest event. Thus this paradigm of classical field
Symmetric monoidal categories provide a convenient and enlightening framework within which to compare and contrast physical theories on a common mathematical footing. In this talk we consider two
theories: stabiliser qubit quantum mechanics and the toy bit theory proposed by Rob Spekkens. Expressed in the categorical framework the two theories look very similar mathematically, reflecting
their common physical features.
Quantum mechanics does not allow us to measure all possible combinations of observables on one system. Even in the simplest case of two observables, we know that measuring one of the observables
changes the system in such a way that the other measurement will not give us the desired precise information about the state of the system.
Nonlocality is arguably one of the most remarkable features of quantum mechanics. On the other hand, nature seems to forbid other no-signaling correlations that cannot be generated by quantum systems.
Usual approaches to explaining this limitation are based on information-theoretic properties of the correlations without any reference to physical theories they might emerge from. However, as shown in
[PRL 104, 140401 (2010)], it is the structure of local quantum systems that determines the bipartite correlations possible in quantum mechanics. We
Bending and 2D Elasticity: Going Back in Time
Submitted by Ajit R. Jadhav on Thu, 2009-03-12 02:19.
The following is a (relatively minor) question which had occurred to me more than two decades ago. By now I have forgotten precisely when it was... It could have been when I was in my TE (third year
engineering) at COEP. ... Or, perhaps, it was later on, when I was at IIT Madras (studying stress analysis on my own). ... I don't remember precisely when it occurred to me, only *how* it did---it was
when I was poring over the first part of Dieter's book.
IMHO, a matter like this should have been explicitly dealt with by the undergraduate texts on solid mechanics / elasticity. But, none does. Without straining your curiosity any further, let me tell
you what that (minor) problem is:
Consider a horizontal cantilever beam as shown in the accompanying figure (A).
The beam has the length of L. Suppose that it has a uniform rectangular cross section, say of height h, and thickness t. Suppose the beam is loaded by nothing but a point load P at its free end.
Analysis of stresses/deflections in a cantilever beam like this involves considering the bending moments existing along the length of the beam. Bending moment is nothing but another name for torque.
The simple Euler-Bernoulli theory for such a beam is given in any introductory book on solid mechanics.
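For reference, the standard Euler-Bernoulli results for the end-loaded cantilever of figure (A) are the textbook formulas below, with x measured from the fixed end and a rectangular section of width t and depth h:

```latex
\begin{align}
  M(x) &= P\,(L - x)
    && \text{bending moment along the beam} \\
  \sigma_{xx} &= \frac{M(x)\, y}{I}, \qquad I = \frac{t\, h^{3}}{12}
    && \text{flexure formula} \\
  \delta_{\mathrm{tip}} &= \frac{P L^{3}}{3\, E I}
    && \text{deflection under the load } P
\end{align}
```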
Now, suppose you increase h such that its magnitude becomes comparable to that of L, say, h = L. This circumstance is shown in the figure (B).
Suddenly, the beam problem now looks like one from plane elasticity.
Three closely related questions follow:
(A) Now, checking the formulae or detailed derivations from 2D elasticity theory, we find no mention of the term "bending moment" anywhere in them. Why is it so?
(B) Why do torques seem to be present in the beam, but not in the plate? Don't the forces in the plate (say, those associated with stresses) also form couples? After all, these forces also do act across finite moment-arms, right? If so, precisely where, in the act of "stretching" the beam into the plate (or of "compressing" the plate into the beam), do the torques vanish (or appear)?
(C) To make the matter even more confusing: Does the beam theory include couple-stresses, in contrast to the Cauchy definition (which, obviously, doesn't)?
What would be your own answers to the above questions (A), (B) and (C)?
Note that despite the length of the description preceding these questions, one-line answers are possible (though by no means mandatory!)
A little more on it all
Surprising, but I haven't ever found a single person thinking along the above lines---neither a professor, nor a postdoc, nor a student. My personal interactions with mechanicians have been limited,
and so, in a way, this is not a big deal.
But, still, I found it surprising that no textbooks write about such matters either. Neither Beer (of Lehigh, and guru to more than one Timoshenko winner), nor Popov (of Berkeley, a student of
Timoshenko's, I suppose), nor Shames (of SUNY Buffalo, a winner of several outstanding teacher awards) nor Crandall (MIT(?)), nor Timoshenko himself (later, of Stanford), nor AEH Love (of the 19th
century, the author of what is probably the longest in-print title in the solid mechanics field) mention any such relation or contrast between these two theories directly and explicitly.
I could be wrong, but at least I don't remember having run into a comparison like this during my browsing of any of these books...
So, the question also becomes: Why don't textbooks mention the above matter even if they do cover the two topics separately in great detail and depth?
Is it the case that the matter behind my questions is so trivial and obvious that any competent engineer could be assumed to have known and mastered it if he has mastered these textbooks?
Or is it that what we bank on, in engineering education, is an indirect implication, namely, that if the student knows how to work out solutions to numerical (i.e. mathematical) problems from each of
the two areas taken separately, then all must be well with the state of his overall theoretical integrations, too? ...
Comments on this more general issue, as well as answers to the specific questions (A) through (C) above, are both welcome!
Also, if you remember having seen something like a comparison of the two theories in one of the books mentioned above, or any other book, then do feel absolutely free to correct me---I will
appreciate your help.
And also, no, I won't mind being told (even very bluntly) that I was making a mountain out of a mole-hill, if that's what you honestly feel about this issue...
Thanks in advance for your answers/comments!
(Update on March 12, 2009 only: Improved my use of the English language and streamlined the writing.)
I think the reason is that the Euler-Bernoulli beam theory is valid only for slim and long beams, which requires a large ratio of L/h (for example, assuming zero through-thickness stress). For the
2D plate, the bending moment, force, and stresses are still there, but the classical beam theory doesn't apply.
Rest assured that there are others who think along these lines. They're just taciturn. :)
(A) Professor Dr. Vijay K. Varadan used to teach "Theory of Elasticity"
at Penn State University. In this course, he showed how the bending
moment of a cantilever could be used as a basis for establishing the
order of the Airy stress polynomial. I believe he studied at Madras in
India, too.
(B) For beams, moment has long been a convenient stepping stone to
obtaining approximate displacements and stresses, courtesy of
Euler-Bernoulli beam theory. For walls, a more general elasticity
theory is necessary to find displacements & stresses. A sound
knowledge of membrane theory is prerequisite to establishing a zone of
distinction, and, thanks to student debt and lack of funding, no one's
working in this field. Well, almost no one. ;)
(C) A stress is a force per area, so a couple-stress would be a moment
per area? In that case, a uniform couple-stress across a surface would
amount to forces canceling each other out at every point except the
ends, and a non-uniform couple-stress would essentially be a force
distribution. The result is perhaps a beam in transverse compression
with a few moments here and there.
(D) Well, who do you think wrote the textbooks in the first place?
People who had the understanding but not the funding, or people who had
the funding but not the understanding? Perhaps they were the select
few with both and they just wanted to keep it that way.
EDIT: Hooke's Law seems to suggest that
another problem can arise for *wide* beams. Hint: it has something to do with Poisson's ratio.
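The bending-moment-to-Airy-polynomial route mentioned in point (A) above presumably resembles the classic plane-stress solution for the end-loaded cantilever (a standard Timoshenko-and-Goodier-type result, given here only as an illustration of the idea, with half-depth c = h/2 and x measured from the loaded end):

```latex
% Airy stress function for the end-loaded cantilever (satisfies \nabla^{4}\varphi = 0):
\varphi(x, y) = \frac{P}{2I}\left(\frac{x y^{3}}{3} - c^{2} x y\right)
% Stresses from \sigma_{x} = \partial^{2}\varphi/\partial y^{2},
% \sigma_{y} = \partial^{2}\varphi/\partial x^{2},
% \tau_{xy} = -\partial^{2}\varphi/\partial x\,\partial y:
\sigma_{x} = \frac{P x y}{I}, \qquad
\sigma_{y} = 0, \qquad
\tau_{xy} = \frac{P}{2I}\left(c^{2} - y^{2}\right)
% Note \sigma_{x} = M(x)\, y / I with M(x) = P x: the cubic term in \varphi
% encodes exactly the bending moment of elementary beam theory.
```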
Hi David,
Interesting points you have raised... Here we go in brief...
-- Do you know if Mr. Varadan's notes are available on the Internet? I would have loved to have at least browsed through them. ...
-- About couple stresses, my own knowledge also is very limited, but I know enough about them from Sadd's book that I can fake things around a little bit, as you can evidently see. This one page by Patrizio Neff is extremely informative, comprehensive, and an excellent starting point for many downloads.
-- About writers... I think that rather than any other attribute such as funds or talents, the folks who write books only have a very special kind of patience... I think that is one requirement that
is special to writing of books, above anything else...
I know of people in Pune who will write "textbooks" for the local engineering colleges market, for as little as Rs. 50000/- or USD 1000/- one-time royalty... They address these books to the
merit-wise downscale student population who must, somehow, pass their examinations, and get that BE degree from the University of Pune, and thereby enter either the job market or the marriage market
(whichever pays first or better) ... There are more than 20 engineering colleges in Pune city alone, and a majority of the student population often requires books like these... These books are sort of
like a diluted version of Cliff's Notes... Similar authors exist in every other major university of India, e.g., JNTU. When these engineering colleges advertise their "library holdings", they mean
these books. .... Since such books are specifically addressed to a particular syllabus of a particular university in a particular time-period, the market they have is strictly local in both space and
time. But yes, people are there to supply such books...
So, funding is not an issue relevant to authorship... Neither is talent... Anybody can write... Even I am thinking of writing one, though it won't be for that local market.
It's hard to write a hierarchically well-ordered book, though, speaking in general terms....
Kindly accept my apology for referring to information that, as I'm just now realizing, is practically impossible to share. If it is any consolation, Varadan's solution to that particular problem
suffers a strain compatibility problem; sigy should be non-zero at the support for non-zero values of Poisson's ratio.
Thanks for the links, though this is far too much information for me to take on. I've seen "three additional, independent degrees of freedom, related to the rotation" used in (matrix methods of)
space frame analysis.
Thanks also for reminding me not to generalize so quickly. Pune sounds like a very different place than, say, where I went to school in Pennsylvania, where engineering students pay $1000+ each year
on books. Funding is relevant to authorship if the level of complexity of the topic requires one to devote all of their time and attention to understanding, organizing, writing, and formatting the
content. This might just be me, since I took everything but the kitchen sink in my undergrad years.
In "Karl Girkmann, Flächentragwerke. Einführung in die Elastostatik der Scheiben, Platten, Schalen und Faltwerke, Vienna, Springer 1946" (it's in German, a book about theory of plates and shells) an
an example is given where a plate, simply supported at its left and right ends, under a sinusoidal loading on its top, is treated using an ansatz for Airy's stress polynomial.
Furthermore, it is shown that even for a side ratio of l/w = 2 the solution for the stress distributions is very similar to the results of beam theory (parabolic shear stress, linear normal stress in the cross section).
Hi Manfred,
Thanks... BTW, even in Shames (or Beer and Johnston---I forgot which book) there is this example where they take the L/w ratio up to 3 or 2. But then, it occurs only in the context of highlighting
the fact that the contribution of shear stress in producing the final displacements is much smaller as compared to that due to the normal stresses, due to the difference of the 4th order and 2nd order...
Unfortunately, the authors don't notice the point that I meant to highlight... Pl. see my general reply below too...
(A) Moments and forces are vectorial (6 scalars in all) characterizations of distributions of vector-valued (traction/body) loadings. When the aspect ratio of the plate "becomes" beam-like, one
finds that this low-order characterization is effective in describing the behavior. When you have plate-like dimensions this is not true, so no one bothers with it; though you are free to define it if
you want, it just is not that useful.
(B) See answer to (A)
(C) A beam theory is a special case of a Cosserat medium so it does contain "couple stresses" (in a manner of speaking) but they are not reductions of couple stresses from the 3D/2D theory.
Prof. Dr. Sanjay Govindjee
University of California, Berkeley
Dear Sanjay,
Thanks... Pl. see my reply below...
I have a couple of things to add:
Theory of elasticity deals with deformations and forces. Even if there is a moment applied, it is decomposed into forces (statically equivalent).
Bending and plate theories have been developed making some assumptions (like neglecting some stress components).
In plate theory also, there are moments due to bending stresses (like in the beam). These are expressed per unit width.
With regards,
- Ramdas
Hi Ramdas,
Thanks... On the first point, I almost agreed with you except that I am not sure: Aren't moments supposed to be as "primitive" as forces, for static equilibrium? (In the sense, doesn't conservation
of angular momentum stand on its own, without any reference to conservation of linear momentum?) So, can you decompose moments? Probably, "decomposition" is not the term... But, of course, I got the
main direction of your point that the moments just translate into stresses... Also pl. see my general reply below.
Thank you all very much for your replies, and also let me say sorry for the delay from my side... I was reading your replies as they came, but also thought it best to wait just a while longer before
jotting down my replies and clarifications...
(1) First of all, what I wanted to point out was not, really speaking, the mathematical relation between a beam and a plate. What I wanted to emphasize was something more conceptual in nature than
It was: this big difference of terminology that a typical student runs into---a difference which is never explained to him at all.
Consider, just for example, either Popov's book or Beer and Johnston's. ... Some 1/2 to 2/3 portions of these books involve a very prominent usage of the term: "bending moment." Most other books are
similar in terms of emphasis.
Just a semester or a year later, the same student enters the class-room (or is referred to some other advanced books), and bingo! Now, none of the stress analysis theories he reads will involve
anything like a "moment" in them. There are potentials and stress functions and complex analysis and path integrals and multiply connected domains... But no moments. The torques simply disappear. In
principle. In fact, he is sternly reminded that the moments, of course, cancel out---wasn't he attentive in the first lecture?....
Why this difference of treatment? That was the crux of the matter I wanted raised.
Sometimes people explain it away as the "Strength of Materials" approach vs. the "Solid Mechanics" approach. But this still is beating around the bush, I felt.
Thus, I wanted to highlight this abovementioned confusing part. The addition of the couple-stress related thingie was just to confuse the reader a little bit---just to add a little bit of spice to
this main question, that's all!
So, while everyone's reply was valuable, IMHO, it was David who really addressed the crux of the issue(s) that I sought to highlight...
(2) Now, here are my answers to the technical part of it
The beam theory does not include couple stresses. Not at least the beam theory for the simpler homogeneous class of materials like metals. (Sanjay, I wasn't talking about micropolar materials,
composites, or metals with extensive presence of micro-voids or microcracks in them... Thus, I didn't really have a Cosserat medium in mind.)
For the normal homogeneous (metal-like) materials, couple stresses are absent in the beam theory just the way they are absent in their 2D/3D elasticity theory.
The sole purpose of bending moments in the beam theory is to act as a vehicle or an intermediate concept (or a link) to translate the load boundary conditions into stresses---esp., the normal
stresses. That's all!
A main likely confusion here is the following. Students see moments present across the sections of the beams, and so, inadvertently, they might conclude that couples exist in the sense of
couple-stresses. This is wrong. Books should highlight this. But none does.
Here, it's useful to distinguish between an infinitesimal element and a finite section. Couple-stresses involve resistive torques across infinitesimal elements. The couples which the beam theory
considers, actually, are considered only across finite sections. The infinitesimal elements inside a beam do not carry torques. Bending moments are just a convenient short-hand for a special pattern
involving the usual stresses in the vertical cut.
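In symbols, the shorthand amounts to taking a stress resultant over a finite cut; this is standard statics, added here only for emphasis:

```latex
% The bending moment is the first moment of the normal-stress distribution
% over a finite cross-section A; no torque acts on an infinitesimal element:
M(x) = \int_{A} \sigma_{xx}(x, y)\, y \; \mathrm{d}A
% With the linear distribution \sigma_{xx} = M y / I this is self-consistent,
% since I = \int_{A} y^{2} \, \mathrm{d}A.
```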
(3) The concerns I expressed in the second part also are relevant.
In fact, to go further, I would say that the teaching of solid mechanics has actually suffered because it has traditionally been considered a responsibility of the Civil departments (yet another
controversial statement from me) and not to a separate TAM department (see my comments related to the recent closure of the Cornell TAM).
Civil folks think teaching of beams is important. But, it isn't. Not if the response of solids to a variety of mechanical forces is your real aim. A 50% to 66% weightage to the beam theory in the
first (introductory) courses is summarily uncalled for. It only helps pace out the Civil curriculum better---but hampers the preparation of mechanical, aerospace, electrical, metallurgical and other students.
The absence of topics like Airy's function from the introductory (first) courses on solid mechanics is hard to explain.
The absence of plasticity theories also is very hard to explain---and very immediately required by the metallurgical/materials students.
(A similar thing happens to teaching of fluids, too. Give it to Civil engineers, and they will unnecessarily over-emphasize the flow through channels... Give it to Aero engineers, and they will
over-emphasize external flows wherein the solid body is tiny compared to the fluid... Give it to mechanical engineers and they will overemphasize the empirical performance characteristics of pumps
and turbines... So on and so forth...)
So, what I am thinking aloud here is about changing the sequence and emphasis of topics for teaching of Solid Mechanics... I will post my thoughts again, later on...
Thanks again for reading and do let me know if I am going wrong in any technical part...
There's no such thing as a "concentrated moment load" at the fundamental level.
Whatever you see in approximate analyses (point loads, moments) are idealizations.
All we have at the "basic" or "primitive" level are
1. surface tractions (surface loading),
2. body force distributions (volume loading) and
3. couple stress distributions (as someone described, these are surface/volume force distributions that cancel out at every point but produce a turning effect).
Surface loadings (surface tractions and surface couple stresses) are the direct result of "bodies" interacting with each other in direct contact, while body force loadings (body forces AND volume
couple stresses) are the direct result of bodies interacting with each other NOT in direct contact.
The moments you see at the ends of beams are "idealizations" of the axial stress integrated over the whole cross section (taking the orientation of the cross section into account) :D
Not sure what the confusion here is. Whenever you try to convert a distribution into a concentrated load, these bending and torsional moments pop up. Not to say that couple stresses are not present. If
you have the E-B approximation, there is nothing that states that the end moment load isn't due to a couple stress distribution.
Math Help
February 11th 2010, 03:19 AM #1
Dec 2009
Hi! I'm working on some problems and can't get through this one.
Look at the following sequence $\{a_{n}\}_{n=1}^{\infty}$
$a_{n+1} = \sqrt{2+a_n}$
$a_1 = \sqrt{2}$
Show that it converges and find $\lim_{n\rightarrow \infty} a_n$
1) Show by induction that the sequence is monotone increasing and bounded above: $a_n\leq a_{n+1}\,,\,\,a_n\leq 2\,\,\forall\,n\in\mathbb{N}$ ,and deduce from that the sequence converges.
2) Now use arithmetic of limits to find the limit: if $\lim_{n\to\infty}a_n=\alpha$ , then $\alpha=\lim_{n\to\infty}a_{n+1}=\lim_{n\to\infty}\sqrt{2+a_n}=\sqrt{\alpha +2}$
I'm with you so far. Limit of $a_n$ and $a_{n+1}$ are the same. And how do I find $\alpha$? How did you get $a_n < 2$?
I'm new to this concept of induction.
Here is my work, is it somewhat right?
We want to prove that:
$a_{n+1} > a_n$
for all $a_n$ in N
$a_1 = \sqrt {2}$
$a_{n+1} = \sqrt{2+\sqrt{2}}$
= $a_{n+1} = \sqrt{2(1+2^{-0.5})}$
$= \sqrt2 \times [\frac {\sqrt{2}+1}{\sqrt{2}}]$
since $[\frac {\sqrt{2}+1}{\sqrt{2}}]$ is larger then 1 then $a_{n+1} < a_{n+2}$ by "induction"? Correct or something missing?
I'm new to this concept of induction.
Here is my work, is it somewhat right?
We want to prove that:
$a_{n+1} > a_n$
for all $a_n$ in N
$a_1 = \sqrt {2}$
$a_{n+1} = \sqrt{2+\sqrt{2}}$<......??
= $a_{n+1} = \sqrt{2(1+2^{-0.5})}$
$= \sqrt2 \times [\frac {\sqrt{2}+1}{\sqrt{2}}]$
since $[\frac {\sqrt{2}+1}{\sqrt{2}}]$ is larger then 1 then $a_{n+1} < a_{n+2}$ by "induction"? Correct or something missing?
are you sure about that ?
Noo not at all,
No, I am not, and I just spotted an error you made too (I think).
$a_{n+1} = \sqrt{2+\sqrt{2}}$
= $a_{n+1} = \sqrt{2(1+2^{-0.5})}$
$= \sqrt2 \times [\frac {\sqrt{2}+1}{\sqrt{2}}]$
since $[\frac {\sqrt{2}+1}{\sqrt{2}}]$ is larger then 1 then $a_{n+1} < a_{n+2}$ by "induction"? Correct or something missing? <---- this is wrong
It should be: $a_{n+1} = \sqrt{2+\sqrt{2}}$
$= \sqrt{2(1+\frac{1}{\sqrt{2}})}$
$= \sqrt{2} \times \sqrt{1+\frac{1}{\sqrt{2}}}$
And since $\sqrt{1+\frac{1}{\sqrt{2}}}$ can only be larger than 1, by induction $a_{n+1} >a_n$ ?? (I think)
you're quoting yourself
anyway, $a_{n+1} = \sqrt{2+\sqrt{2}}$ doesn't mean anything
it's $a_{2} = \sqrt{2+\sqrt{2}}$
Well, I thank tonio and you for your help, but I'm sorry, I do not fully understand. That "quote" wasn't supposed to be there.
$a_2 >a_1$
So far so good.
Now I have to show that the upper bound is 2, and that is where I fail.
How do I know the upper bound?
Tonio did $\lim_{n\rightarrow \infty} a_n = \sqrt{2+ \alpha}$
How do I find this $\alpha$? It must be 2, but I can't prove that..
By induction all $a_{n}>a_{n-1}$ and $\lim [a_n] = \sqrt{2+2}$ but I don't know how to prove it.
I don't mean that anyone is supposed to do the exercise for me, but I would prefer an explanation in baby steps...
Last edited by Henryt999; February 11th 2010 at 05:34 AM.
well the exercise should mention that the upper bound is $2$.
and all you have to do is prove it.
anyway,try to prove $a_n\leq 2,\forall n\in \mathbb{N}$.
How ?
it's right for $n=1$ we have $\sqrt{2}\leq 2$,suppose you have $a_n\leq 2$ for some rank $n$ and try to prove $a_{n+1}\leq 2$ for some rank $n+1$.
if you succeeded in proving $a_n\leq 2,\forall n\in \mathbb{N}$
what can you say about $a_{n+1}-a_n$.
Like this?
because $\sqrt{2}<2$
$2>\sqrt{2 +\sqrt {2}}$
for $a_4>a_{3}$
$2>\sqrt{2+\sqrt{2 +\sqrt{2}}}$
Therefore it is true for all $n \in N$
Now I want to find where it converges too.
Since the $\lim [a_{n+1}] = \lim [a_n]$ we have
$a = \sqrt{2+a}$
$a^2 = 2+a$
$a^2 -a-2=0$
$a = 0.5 \pm \sqrt{\frac{1}{4}+2}$
$a = 0.5 \pm 1.5$
$a_1 = -1$
$a_2 = 2$
Since $a$ cannot be negative, because $\sqrt{2} <a_n<a_{n+1}\leq {2}$
Then it must converge towards $\sqrt {2+2} = 2$
Does this make sense or is my proof going to $\Rightarrow$
If then........
because $\sqrt{2}<2$
$2>\sqrt{2 +\sqrt {2}}$
for $a_4>a_{3}$
$2>\sqrt{2+\sqrt{2 +\sqrt{2}}}$
Therefore it is true for all $n \in N$ <..... you still need to check until GOD KNOWS WHERE.
Now I want to find where it converges too.
Since the $\lim [a_{n+1}] = \lim [a_n]$=some real number <.... we can't say that until we are sure that the limit exists.
we have
$a = \sqrt{2+a}$
$a^2 = 2+a$
$a^2 -a-2=0$
$a = 0.5 \pm \sqrt{\frac{1}{4}+2}$
$a = 0.5 \pm 1.5$
$a_1 = -1$
$a_2 = 2$
Since $a$ cannot be negative, because $\sqrt{2} <a_n<a_{n+1}\leq {2}$
Then it must converge towards $\sqrt {2+2} = 2$
Does this make sense or is my proof going to $\Rightarrow$
you agree that $a_n\leq 2,\forall n\in \mathbb{N}$ (assuming that you proved it by induction)
$a_{n+1}-a_{n}=\sqrt{2+a_n}-a_n=\frac{2+a_n-a_n^2}{\sqrt{2+a_n}+a_n}\geq 0,\forall a_n\leq 2$
Henryt999, first of all you must write down a few terms of the sequence to get a "feeling" of what the sequence looks like.
And we have that:
$a_{1} = \sqrt{2}$
$a_{2} = \sqrt{2+\sqrt{2}}$
$a_{3} = \sqrt{2+\sqrt{2+\sqrt{2}}}$.
From the above we have that: $0<a_{1}<a_{2}<a_{3}$.
So we see that the sequence is positive and increasing ,hence can we prove in general that :
1) $a_{n}>0$?
2) $\frac {a_{n}}{a_{n+1}}<1$?
The first one we can prove by induction .
For the 2nd one we must prove: $\frac{a_{n}}{\sqrt{2+a_{n}}}<1$ ,or $\frac{(a_{n})^2}{2+a_{n}}<1$,since we have proved that $a_{n}>0$.
...............or................................. ...........
For that you must prove that $a_{n}<2$ for all $n\in\mathbb{N}$, since $a_{n}+1>0$ for all $n\in\mathbb{N}$.
So far, having proved that the sequence is positive, increasing, and bounded from above by 2, the sequence definitely converges to a limit $x>0$.
Now if a sequence converges to $x$, every subsequence of that sequence converges to $x$, hence:
$\lim_{n\to\infty}a_{n} = \lim_{n\to\infty}a_{n+1} = x$, since $(a_{n+1})$ is a subsequence of $(a_{n})$.
Thus $\lim_{n\to\infty}a_{n+1}= \sqrt{2+\lim_{n\to\infty}a_{n}}\Longrightarrow x=\sqrt{2+x}$.
AND $x=-1$ or $x=2$; since $x>0$, the limit is $x=2$.
Yes thank you
I'm trying to prove by induction that $a_n<a_{n+1}$
$a_1 = \sqrt{2}$
$a_{2} = \sqrt{2+\sqrt{2}}$
hence $a_2>a_1$
and $a_3 = \sqrt{2+\sqrt{2+\sqrt{2}}}$
And then it follows by "induction magic" that $a_{n+1}>a_n$ for all $n$
Is this some proof?
So we see that the sequence is positive and increasing ,hence can we prove in general that :
1) $a_{n}>0$?
2) $\frac {a_{n}}{a_{n+1}}<1$?
The first one we can prove by induction .
For the 2nd one we must prove: $\frac{a_{n}}{\sqrt{2+a_{n}}}<1$ ,or $\frac{(a_{n})^2}{2+a_{n}}<1$,since we have proved that $a_{n}>0$.
I feel that your second statement is much too complicated. It is possible to show that $a_{n-1}<a_{n}$for all $n$ via induction, as well.
You have already checked some base cases, so suppose that $a_{k-1}<a_k$. Now, add two to each side. $a_{k-1}+2<a_k+2$. Now take the square root of both sides, keeping in mind that these terms are
positive and square root is monotonic. $\sqrt{a_{k-1}+2}<\sqrt{a_k+2}$. But, by definition, $\sqrt{a_{k-1}+2}=a_k$ and $\sqrt{a_k+2}=a_{k+1}$. So we have $a_k<a_{k+1}$. This fulfills the
induction step and hence, monotonicity is proved.
Screen coordinates to world coordinates [Archive] - OpenGL Discussion and Help Forums
04-20-2011, 07:54 AM
I have a question about a special case translating from screen coordinates to world coordinates.
I wanna do this with an interactive particle system.
So that we can click on the screen and emit some particles.
I've come across a lot of comments on this topic. However my case is a little tricky..
I just wanna get the world coordinates corresponding to a given world-space z value.
For example, I wanna get the world coordinate from the screen position given z = -5.0.
How can I do that? Can you help me with some codes?
My initialization and configuration functions are as follows:
void initRendering() { /* ... */ }
void handleResize(int w, int h) { /* ... */ }
void drawscene()
{
    float a = 0.0f;
    float b = 0.0f;
    float c = 0.0f;
    particle *p;
    particle *p2;
    /* e, e2: emitters defined elsewhere */
    p = e->anchorparticle->next;
    p2 = e2->anchorparticle->next;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... */
}
Thx in advance for people who will help me. Thx very much.
Orinda Algebra 1 Tutor
Find an Orinda Algebra 1 Tutor
...In addition, I also can help the students to understand to basic concept of Physics like motions, pressures, force, wave, energy and light. I helped one of my friend improve her grade in
Introduction to Physics class from D to B.I have a brother who is in grade 6, and I always help him to do Math and check his work. Besides, I'm a tutor of two girls who are in grade 5 and 6.
18 Subjects: including algebra 1, calculus, trigonometry, statistics
...I also have organized astronomical star parties to educate interested persons about astronomy and navigating the night sky. Each year that I taught Astronomy/Earth Science, I took my students
on a field trip to a major science center, including the Chabot Space and Science Center (CA), Goddard S...
32 Subjects: including algebra 1, reading, ACT Math, elementary math
...As a substitute teacher and former tutor, I am dedicated to supporting both students’ academic success and personal sense of well-being. My training in Nonviolent Communication is a core part
of my tutoring style; I connect well with young folks, can zero in on emotional obstacles and limiting b...
37 Subjects: including algebra 1, reading, English, Spanish
...Real-world applications bring a sense of purpose to the room along with an understanding of how the material fits into life outside the classroom. Structure provides a framework to set goals in
a clear, consistent manner. Most of all, I just love to help people learn and gain a sense of connection to the academic world.
22 Subjects: including algebra 1, chemistry, reading, biology
...Whether you need help in one, two, or all three areas, I can guide you to master the subject matter and ace the test. Working with me will give you confidence you need, we'll have fun learning
the material, and we can accomplish your goal quickly. I have three CA teaching credentials and I have tutored this test before.
53 Subjects: including algebra 1, English, reading, Spanish
Homework 4: Curvature Flow
In this homework we’ll take a closer look at curvature flow. We already saw one example of curvature flow (mean curvature flow) while studying the Poisson equation. The general idea behind curvature
flow is that we have an energy \(E\) measuring the smoothness of our geometry, and can reduce this energy by traveling along the direction of steepest descent. Conceptually, you can imagine that \(E
\) is some kind of potential — surfaces with many wrinkles have a lot of energy, and want to reduce this energy by “relaxing” into a smoother state. This description suggests a sort of “energy
landscape,” where high peaks correspond to wrinkly surfaces and low valleys correspond to smooth ones. Here’s a two-dimensional cartoon of what this landscape might look like:
To smooth out the geometry we can “ski” downhill, discovering a sequence of smoother and smoother surfaces along the way. Visually the effect is akin to a pat of butter slowly melting away on a hot
piece of toast, or a water droplet buckling into a perfectly round sphere.
To be more concrete, let \(f\) be an immersion of a manifold \(M\) (e.g., a curve or surface) into Euclidean space, and suppose that \(E\) is a real-valued function of \(f\). Then a curvature flow is
the solution to the partial differential equation
\[ \dot{f} = -\nabla E(f) \]
starting with some initial immersion \(f_0\), where \(\dot{f}\) denotes the derivative in time. In words, this equation just says that the difference in position of the surface at two consecutive
points in time is equal to the change in position that reduces the energy quickest.
For surfaces, two common energies are the Dirichlet energy
\[ E_D(f) = \frac{1}{4} \int_M |\nabla f|^2 dA \]
and the Willmore energy
\[ E_W(f) = \frac{1}{4} \int_M (\kappa_1 - \kappa_2)^2 dA. \]
where as usual \(\kappa_1\) and \(\kappa_2\) are the principal curvatures induced by \(f\). Both energies somehow measure the “wrinkliness” of a surface, but how are they related?
Exercise 4.1
For a surface \(M\) without boundary show that, up to an additive constant, the Willmore energy can be expressed as
\[ E_W = \int_M H^2\ dA \]
and explain why this constant does not matter in the context of curvature flow. Hint: Gauss-Bonnet.
In other words, Dirichlet energy looks like the (squared) \(L^2\) norm of the gradient, whereas Willmore energy looks like the (squared) \(L^2\) norm of mean curvature. Superficially these quantities
look quite different, but in fact they are quite similar!
Exercise 4.2
For a surface \(M\) without boundary show that (again up to constant factors)
\[ E_D = \langle \Delta f, f \rangle, \]
\[ E_W = \langle \Delta^2 f, f \rangle, \]
Hint: Green’s first identity and the definition of the mean curvature normal.
From these expressions, it may appear that Dirichlet and Willmore energy are nice, simple, quadratic functions of \(f\). Don’t be fooled! The Laplace-Beltrami operator \(\Delta\) depends on the
immersion \(f\) itself, which means that the corresponding gradient flows are rather nasty and nonlinear. Later on we’ll look at a couple ways to deal with this nonlinearity.
Gradient Descent
Now that have a couple energies to work with, how do we derive gradient flow? Previously we defined the gradient of a function \(\phi: \mathbb{R}^n \rightarrow \mathbb{R}\) as
\[ \nabla\phi = \left[ \begin{array}{c} \partial\phi/\partial x^1 \\ \vdots \\ \partial\phi/\partial x^n \end{array} \right], \]
i.e., as just a list of all the partial derivatives. This definition works pretty well when \(\phi\) is defined over a nice finite-dimensional vector space like \(\mathbb{R}^n\), but what about
something more exotic like the Willmore energy, which operates on an infinite-dimensional vector space of functions? In general, the gradient of a function \(\phi\) at a point \(x\) can be defined as
the unique vector \(\nabla \phi(x)\) satisfying
\[ \langle \nabla \phi(x), u \rangle = \lim_{h \rightarrow 0} \frac{\phi(x+hu)-\phi(x)}{h}, \]
for all vectors \(u\), where \(\langle \cdot, \cdot \rangle\) denotes the inner product on the vector space. In other words, taking the inner product with the gradient should yield the directional
derivative in the specified direction. Notice that this definition actually serves as a definition of differentiability: a function is differentiable at \(x\) if and only if all directional
derivatives can be characterized by a single vector \(\nabla\phi(x)\). Geometrically, then, differentiability means that if we “zoom in” far enough the function looks almost completely flat.
Exercise 4.3
Explain why the gradient is the direction of steepest ascent.
Exercise 4.4
Consider the function
\[ \phi: \mathbb{R}^2 \rightarrow \mathbb{R}; (x_1, x_2) \mapsto x_1^2 - x_2^2. \]
Confirm that the gradient found using the expression above agrees with the usual gradient found via partial derivatives.
Exercise 4.5
Let \(M\) be a surface without boundary. Assume that the Laplace-Beltrami operator \(\Delta\) is constant with respect to the immersion \(f\) and use our definition of \(\nabla\) above to show that
\[ \nabla E_D(f) \approx HN. \]
In other words, gradient flow on Dirichlet energy looks roughly like the mean curvature flow \(\dot{f} = -HN\) that we studied in the previous assignment.
The approximate gradient \(HN\) might be called a linearization of the true gradient — in general the idea is that we keep some quantity or some piece of an equation fixed so that the rest comes out
to be a nice linear expression. This trick can be quite helpful in a practical setting, but it is typically worth understanding where and how the approximation affects the final result.
One final thing to mull over is the fact that the gradient depends on our particular choice of inner product \(\langle \cdot, \cdot \rangle\), which appears on the left-hand side in our definition.
Why does the inner product matter? Intuitively, the gradient picks out the direction in which the energy increases fastest. But what does “fastest” mean? For instance, if we use a vector of real
numbers \(\mathsf{x} \in \mathbb{R}^m\) to encode the vertices of a discrete curve, then what we really care about is the energy increase with respect to a change in the length of the curve — not the
Euclidean length of the vector \(\mathsf{x}\) itself. In terms of our energy landscape we end up with a picture like the one below — you can imagine, for instance, that arrows on the left have unit
norm with respect to the standard Euclidean inner product whereas arrows on the right have unit norm with respect to the \(L^2\) inner product on our discrete curve. As a result, gradient descent
will proceed along two different trajectories:
Exercise 4.6
Consider an inner product \(\langle \cdot, \cdot \rangle\) on \(\mathbb{R}^n\) defined by a positive definite matrix \(\mathsf{B} \in \mathbb{R}^{n \times n}\), i.e., \(\langle u, v \rangle = \mathsf
{u^T B v}\). Show that the gradient \(\nabla_B\) induced by this inner product is related to the standard gradient \(\nabla\) via
\[ \nabla_\mathsf{B} = \mathsf{B}^{-1} \nabla. \]
In the discrete setting, the matrix \(\mathsf{B}\) is sometimes referred to as the mass matrix, because it encodes the amount of “mass” each degree of freedom contributes to the total. When working
with discrete differential forms, one possible choice mass matrix is given by (an appropriate constant multiple of) the diagonal Hodge star. This choice corresponds to applying piecewise-constant
interpolation and then taking the usual \(L^2\) inner product. For instance, here’s what piecewise constant interpolation looks like for a primal 1-form on a triangulated surface — the integrated
value stored on a given edge gets “spread out” over the so-called diamond region associated with that edge:
In general, let \(\star_k\) be a real diagonal matrix with one entry for each \(k\)-dimensional simplex \(\sigma_i\). The nonzero entries are
\[ \left( \star_k \right)_{ii} = \frac{|\sigma_i^\star|}{|\sigma_i|}, \]
where \(\sigma_i^\star\) is the circumcentric dual of \(\sigma_i\), and \(|\cdot|\) denotes the (unsigned) volume. The corresponding mass matrix on primal discrete \(k\)-forms in \(n\) dimensions is
\[ B_k = \left( \begin{array}{c} n \\ k \end{array} \right) \star_k, \]
i.e., a binomial coefficient times a (primal) diagonal Hodge star. The mass matrices for dual \(k\)-forms can likewise be expressed as constant multiples of the inverse:
\[ B^\star_k = \left( \begin{array}{c} n \\ k \end{array} \right) \star_{n-k}^{-1}. \]
These matrices will come in handy when deriving equations for discrete curvature flow.
Flow on Curves
For the remainder of this assignment, we’re going to make life simpler by working with planar curves instead of surfaces. As discussed earlier, we can describe the geometry of a curve via an
\[ \gamma: I \rightarrow \mathbb{R}^2; s \mapsto \gamma(s)\]
of some interval \(I = [0,L] \subset \mathbb{R}\) into the Euclidean plane \(\mathbb{R}^2\). A common energy for curves is simply the integral of the curvature \(\kappa\), squared:
\[ E(\gamma) = \int_0^L \kappa^2\ ds. \]
Let’s first establish some facts about curvature in the smooth setting.
Exercise 4.7
The unit tangent field on a smooth curve \(\gamma\) can be expressed as \(T = (\cos\theta,\sin\theta)\) for some function \(\theta: I \rightarrow \mathbb{R}\). Show that the normal curvature can be
expressed as
\[ \kappa = d\theta(X) \]
where \(X\) is a positively-oriented unit vector field. In other words, the scalar curvature is change in the direction of the tangent.
Exercise 4.8
Explain in words why the total curvature of any closed immersed curve \(\gamma\) (i.e., one with \(\gamma(0) = \gamma(L)\)), whether discrete or not, is an integer multiple of \(2\pi\):
\[ \int_0^L \kappa ds = 2\pi k,\ k \in \mathbb{Z}. \]
(The number \(k\) is called the turning number of the curve.)
A stronger statement is the Whitney-Graustein theorem which says that the turning number of a curve will be preserved by any regular homotopy, i.e., by any continuous motion that keeps the curve
immersed. For instance, here’s an example of a motion that is not a regular homotopy — note that the curve gets “pinched” into a sharp cusp halfway through the motion, at which point the turning
number goes from \(k=2\) to \(k=1\):
We’ll keep these ideas in mind as we develop algorithms for curvature flow.
Discrete Curves
In the discrete setting, \(\gamma\) is simply a collection of line segments connecting a sequence of vertices with coordinates \( \gamma_1, \ldots, \gamma_n \in \mathbb{R}^2 \):
Note that in the provided code framework, a curve is represented by a half edge mesh consisting of a single polygon. Therefore, to iterate over the curve you might write something like
FaceIter gamma = mesh.faces.begin();
HalfEdgeIter he = gamma->he;
do {
// do something interesting here!
he = he->next;
} while( he != gamma->he );
As with surfaces, we can consider both a primal and dual “mesh” associated with this curve — this time, each primal edge is associated with a dual vertex at its midpoint, and each primal vertex is
associated with a dual edge connecting the two adjacent dual vertices:
In the language of discrete exterior calculus, then, \(\gamma \in (\mathbb{R}^2)^n\) is a \(\mathbb{R}^2\)-valued primal 0-form, i.e., a value associated with each vertex.
Exercise 4.9
Show that the nonzero entries of the diagonal Hodge star on primal 0-forms are given by
\[ (\star_0)_{ii} = L_i, \]
\[ L_i = \frac{1}{2}( |\gamma_{i+1}-\gamma_i| + |\gamma_i-\gamma_{i-1}| ). \]
Coding 4.1
Implement the methods Edge::length() and Vertex::dualLength(), which should return the primal edge length and the circumcentric dual edge length, respectively. (The latter should be a one-liner that
just calls the former!) Finally, implement the method IsometricWillmoreFlow1D::buildMassMatrix(), which builds the diagonal Hodge star on primal 0-forms.
Exercise 4.10
Show that on a discrete curve the total curvature along a dual edge \(e^\star_{ij}\) is equal to the exterior angle \(\varphi_{ij} \in \mathbb{R}\) at the corresponding vertex, i.e., the difference
in angle between the two consecutive tangents:
\[ \varphi_{ij} = \theta_j - \theta_i = \int_{e^\star_{ij}} \kappa\ ds. \]
(Hint: Stokes’ theorem!)
In other words, the exterior angle \(\varphi\) gives us the integrated curvature. Applying the discrete Hodge star yields pointwise curvatures \(\kappa\), which we will use as the degrees of freedom
in our numerical curvature flow:
\[ \kappa = \star\varphi. \]
Coding 4.2
Implement the method Vertex::curvature(), which returns the pointwise curvature as defined above. Hint: in the language of discrete exterior calculus, what kind of quantity is \(\varphi\)? And what
kind of quantity is \(\kappa\)?
The next exercise should solidify your understanding of where all these different quantities live, and which operators take you back and forth from one space to another!
Exercise 4.11
Show that for a discrete curve \(E(\gamma)\) can be written explicitly as
\[ E(\gamma) = \sum_i \varphi_i^2 / L_i, \]
assuming we use piecewise constant interpolation of curvature.
Now that we have a discrete curvature energy, let’s derive an expression for the gradient.
Exercise 4.12
Let \(\varphi\) be the angle made by two vectors \(u,v \in \mathbb{R}^2\). Show that the gradient of \(\varphi\) with respect to \(u\) can be expressed as
\[ \nabla_u \varphi = -\frac{v_{\perp u}}{2A} \]
where \(v_{\perp u}\) denotes the component of \(v\) orthogonal to \(u\) and \(A\) is the area of a triangle with sides \(u\) and \(v\).
Exercise 4.13
Let \(L\) be the length of a vector \(u = b-a\), where \(a\) and \(b\) are a pair of points in \(\mathbb{R}^2\). Show that
\[ \nabla_a L = -\hat{u} \]
\[ \nabla_b L = \hat{u}. \]
Exercise 4.14
Collecting the results of the past few exercises, show that the gradient of the \(i\)th term of our curvature energy
\[ E_i = \varphi_i^2 / L_i \]
with respect to vertex coordinates \(\gamma_{i-1}\), \(\gamma_i\), and \(\gamma_{i+1}\) is given explicitly by
\[ \begin{array}{rcl}
\nabla_{\gamma_{i-1}} E_i &=& \frac{\varphi_i}{L_i L_{i-1}} \left( \frac{v_{\perp u}}{A_i} + \frac{\varphi_i}{2L_i} \hat{u} \right) \\
\nabla_{\gamma_i} E_i &=& \frac{\varphi_i}{L_i^2} \left( \frac{u_{\perp v}-v_{\perp u}}{A_i} + \frac{\varphi_i}{2L_i} (\hat{v}-\hat{u}) \right) \\
\nabla_{\gamma_{i+1}} E_i &=& -\frac{\varphi_i}{L_i L_{i+1}} \left( \frac{u_{\perp v}}{A_i} + \frac{\varphi_i}{2L_i} \hat{v} \right)
\end{array} \]
where \(\varphi_i\) is the exterior angle at vertex \(i\), \(L_i\) is the dual edge length, and \(A_i\) is the area of a triangle with edges \(u = \gamma_i-\gamma_{i-1}\) and \(v = \gamma_{i+1}-\gamma_i\). The gradient of \(E_i\) with respect to all other vertices \(\gamma_j\) is zero. Why? (Hint: remember to take the gradient with respect to the right metric!)
Coding 4.3
Implement the method WillmoreFlow1D::computeGradient() using the expressions above. The gradient of energy with respect to a given vertex should be stored in the member Vertex::energyGradient.
Remember that the overall energy is a sum over the terms \(E_i\), which means you will need to add up the contributions to the gradient at each vertex.
Coding 4.4
Implement the method WillmoreFlow1D::integrate(), which should integrate the flow equation
\[ \dot{\gamma} = -\nabla E(\gamma) \]
using the forward Euler scheme. (See the end of the previous assignment for a brief discussion of time integration.) Run the code on the provided meshes, and report the maximum stable time step in
each case, i.e., the largest time step for which the flow succeeds at smoothing out the curve. (The time step size can be adjusted using the keys '-', '=', '_', and '+'.)
Curvature Flow in Curvature Space
If you feel exhausted at this point in the assignment, you’re not alone! Taking derivatives by hand can be a royal pain (and it gets even worse when you want second derivatives, which are required
for more sophisticated algorithms like Newton descent). But it’s worth grinding out this kind of expression at least once in your life so that you really understand what’s involved. In practice there
are a variety of alternatives, including numerical differentiation, automatic differentiation, and symbolic differentiation — these methods all have their place, and it’s well-worth understanding the
tradeoffs they offer in terms of accuracy, efficiency, and code complexity.
But before getting mired in the bedraggled business of computer-based derivatives, it’s worth realizing that there is a tantalizing fourth alternative: come up with a simpler formulation of your
problem! In particular, the mention of a convex-quadratic energy should make your mouth water and your heart beat faster, since these things make your life easier in a number of ways.
“Convex-quadratic” means that your energy can be expressed as a real-valued homogeneous quadratic polynomial, i.e., as
\[ E(x) = \langle Ax, x \rangle \]
for some positive-semidefinite self-adjoint linear operator \(A\) that does not depend whatsoever on the argument \(x\). For instance, suppose that in the discrete setting the degrees of freedom \(x
\) of our system are encoded by a vector \(\mathsf{x} \in \mathbb{R}^n\). Then a quadratic energy can always be represented as
\[ E(x) = \mathsf{x^T A x} \]
for some fixed symmetric positive-semidefinite matrix \(\mathsf{A} \in \mathbb{R}^{n \times n}\). Earlier we visualized definiteness in terms of the graph of the energy in two dimensions:
Independent of definiteness, the gradient of a quadratic energy has a simple linear expression
\[ \nabla E(x) = 2\mathsf{B^{-1}Ax}, \]
where the matrix \(\mathsf{B} \in \mathbb{R}^{n \times n}\) encodes the inner product. This setup not only simplifies the business of taking derivatives, but also makes things inexpensive to evaluate
at the numerical level — for instance, we can apply the backward Euler method by just solving a linear system, instead of performing some kind of nasty nonlinear root finding. Moreover, since the
matrix \(\mathsf{A}\) is constant, we can save a lot of computation by prefactoring it once and applying backsubstitution many times. This setup also has some nice analytical features. For one thing,
any local minimum of a convex energy is guaranteed to be a global minimum, which means that gradient descent will ultimately lead to an optimal solution. For another, there is an extremely
well-established theory of linear PDEs, which allows one to easily answer questions about things like numerical stability. In contrast, the general theory of nonlinear PDEs is kind of a zoo.
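To make the "backward Euler is just a linear solve" point concrete, here is a hedged sketch for a 2×2 system, assuming the inner-product matrix B is the identity; a real implementation would prefactor the sparse matrix once instead of using Cramer's rule:

```python
def backward_euler_step(A, x, dt):
    """One backward Euler step of the gradient flow x' = -2 A x for the
    quadratic energy E(x) = <A x, x>, taking B = I.  The implicit
    update is the linear system
        (I + 2 dt A) x_new = x_old,
    solved here by Cramer's rule for a 2x2 example; for a real mesh A
    is large and sparse, so one would prefactor I + 2 dt A once and
    backsubstitute at every time step."""
    a, b = A[0]
    c, d = A[1]
    m00, m01 = 1.0 + 2.0 * dt * a, 2.0 * dt * b
    m10, m11 = 2.0 * dt * c, 1.0 + 2.0 * dt * d
    det = m00 * m11 - m01 * m10
    return [(m11 * x[0] - m01 * x[1]) / det,
            (m00 * x[1] - m10 * x[0]) / det]
```

Because the matrix I + 2 dt A is fixed whenever A and dt are, the factorization really can be reused across steps, which is exactly the prefactoring payoff described above.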
Ok, enough religion! Let’s see how a quadratic formulation can help us with the specific problem of curvature flow. Actually, we already have a quadratic energy — it’s
\[ E(\kappa) = \int_0^L \kappa^2 ds. \]
The only difference between this energy and the one we’ve been working with all along is that it’s a function of the curvature \(\kappa\) rather than the immersion \(f\) — as a result, we avoid all
the nonlinearity associated with expressing \(\kappa\) in terms of \(f\). More concretely, at the discrete level we’re going to store and manipulate a single number \(\kappa_i\) at each vertex,
rather than computing it indirectly from the vertex coordinates \(\gamma_i \in \mathbb{R}^2\).
Coding 4.5
Implement the method IsometricWillmoreFlow1D::getCurvature(), which simply evaluates the (pointwise) curvature at each vertex and stores it in the member Vertex::kappa. This method should call
One nice consequence of this setup is that the gradient becomes extremely simple! Taking the gradient with respect to the \(L^2\) inner product on 0-forms, we get
\[ \nabla E(\kappa) = 2\kappa. \]
Gradient flow then becomes a simple, linear equation involving no spatial derivatives:
\[ \dot{\kappa} = -2\kappa. \]
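Since the equation involves no spatial derivatives, a forward Euler step is just a pointwise update. A minimal sketch (the assignment's actual skeleton is C++; the name here is illustrative):

```python
def willmore_flow_step(kappa, dt):
    """One forward Euler step of d(kappa)/dt = -2 kappa, applied
    pointwise to the per-vertex curvature values.  The exact solution
    kappa(t) = kappa(0) * exp(-2 t) simply decays each curvature value
    toward zero."""
    return [k - dt * 2.0 * k for k in kappa]
```

Each vertex value shrinks by the same factor (1 − 2 dt), which is about as easy as time integration gets.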
Coding 4.6
Implement the methods IsometricWillmoreFlow1D::computeFlowDirection() and IsometricWillmoreFlow1D::integrate(), which integrate the above flow equation using the forward Euler scheme. Hint: this step
should be very easy!
If we want to actually draw the curve, we can integrate curvature to get tangents, then integrate tangents to get positions. In other words, we can recover the direction \(\theta\) of the tangent via
\[ \theta(s) = \theta_0 + \int_0^s d\theta = \theta_0 + \int_0^s \kappa\ ds, \]
where \(\theta_0\) specifies the direction of the first tangent on our curve. The tangent vectors themselves are given by \(T(s) = (\cos\theta(s),\sin\theta(s))\) as before. Once we have the
tangents, we can recover the immersion itself via
\[ f(s) = f_0 + \int_0^s T\ ds, \]
where again \(f_0\) specifies a “starting point” for the curve. In the discrete setting, these two steps correspond to a very simple reconstruction procedure: start out at some initial vertex and
join the edges end to end, rotating by the exterior angles \(\varphi_i\) at each step:
More explicitly, let
\[ \theta_i = \sum_{k=0}^i \varphi_k \]
and let
\[ T_i = L_i (\cos\theta_i,\sin\theta_i), \]
where \(L_i\) is the length of the \(i\)th primal edge. Then the vertex positions along the curve can be recovered via
\[ \gamma_i = \sum_{k=0}^i T_k, \]
mirroring the continuous formulae above. (Some questions to ponder: can you interpret these sums as piecewise integrals? What kind of quantity is \((\cos\varphi_i,\sin\varphi_i)\)? What kind of
quantity is \(T_i\)? What’s the relationship between the two?)
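The two integrations above amount to two running sums. A sketch of the resulting O(n) reconstruction, assuming the per-vertex exterior angles and per-edge lengths are given (the names are mine, not the skeleton's):

```python
import math

def recover_positions(phi, lengths, theta0=0.0, origin=(0.0, 0.0)):
    """Rebuild vertex positions from exterior angles phi_i and edge
    lengths L_i using two running sums, i.e. O(n) total work:
      theta_i = theta0 + phi_0 + ... + phi_i     (tangent angles)
      gamma_i = origin + T_0 + ... + T_{i-1}     (positions)
    where T_k = L_k (cos theta_k, sin theta_k).  Edge lengths are
    preserved by construction, so the reconstruction is isometric."""
    theta = theta0
    x, y = origin
    positions = [(x, y)]
    for p, L in zip(phi, lengths):
        theta += p                    # accumulate the exterior angle
        x += L * math.cos(theta)      # step along the rotated edge
        y += L * math.sin(theta)
        positions.append((x, y))
    return positions
```

For instance, exterior angles (0, π/2, π/2, π/2) with unit edge lengths trace out a unit square that returns to the origin.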
Coding 4.7
Implement the methods IsometricWillmoreFlow1D::recoverTangents() and IsometricWillmoreFlow1D::recoverPositions(), which should compute the values \(T_i\) and \(\gamma_i\), respectively. If you use an
\(O(n^2)\) algorithm to implement either of these methods you will get zero points! In other words, do not just evaluate the whole sum once for each vertex — there is obviously a better way to do it!
Note that the length of each edge is preserved by construction — after all, we build the curve out of segments that have the same length as in the previous curve! In other words, we get not only a
curvature flow but an isometric curvature flow (in the smooth case, isometry is reflected in the fact that \((\cos\theta,\sin\theta)\) is always a unit vector).
Ok, sounds pretty good so far: we simply subtract some fraction of the curvature at each vertex, and compute a couple of cumulative sums. As an added bonus, we preserve length. Why haven’t people been
doing this all along? The answer is: when something sounds too good to be true, it probably is!
In particular, let’s take a look at what happens when we work with a closed curve, i.e., a loop in the plane. If we make a completely arbitrary change to the curvature \(\kappa\), there’s no reason
to expect that segments joined end-to-end will close back up. In other words, the final vertex may be somewhere different from where the initial vertex appeared:
A fancy way of describing this situation is to say that the tangents we recover from this procedure are not integrable — they do not “integrate up” to form a closed loop. Similarly, the curvature
itself is not integrable: the cumulative curvature function \(\theta\) does not describe the tangent direction of any closed loop. Why did this happen? Well, let’s go back and take a look at our
condition on total curvature. We said that the curvature \(\kappa\) of any closed loop \(\gamma\) satisfies
\[ \int_{0}^L \kappa\ ds = 2\pi k \]
for some turning number \(k \in \mathbb{Z}\). Another way of saying the same thing is that the first and last tangents of our curve must match up: \(T(0) = T(L)\). But if we change \(\kappa\)
arbitrarily, this condition will no longer hold.
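In the discrete setting the total-curvature condition is easy to check directly from the exterior angles. A small sketch:

```python
import math

def turning_number(phi):
    """Turning number k of a closed polygon: the exterior angles of a
    closed curve must sum to 2*pi*k for an integer k, the discrete
    version of the total-curvature condition above."""
    return round(sum(phi) / (2.0 * math.pi))
```

A convex polygon (e.g. a square with angles π/2) has turning number 1; a curve that winds around twice has turning number 2.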
Exercise 4.15
Suppose that at time zero, the curvature function \(\kappa\) on a smooth curve \(\gamma\) satisfies our condition on total curvature. Show that any change in curvature \(\dot{\kappa}\) orthogonal to
the constant function \(1: [0,L] \rightarrow \mathbb{R}; s \mapsto 1\) with respect to the \(L^2\) inner product will preserve this condition.
We also need a condition that ensures the endpoints will meet up, i.e., \(\gamma(0) = \gamma(L)\). Although we will not derive it here, this condition again turns out to have a simple form:
\[ \int_0^L \dot{\kappa}\, \gamma\ ds = 0; \]
equivalently, \(\dot{\kappa}\) must be (\(L^2\)-)orthogonal to the \(x-\) and \(y-\) coordinate functions of the immersion. Overall, then, we’re saying that the change in curvature must avoid a
three-dimensional linear subspace of directions:
\[ \langle \dot{\kappa}, 1 \rangle = \langle \dot{\kappa}, \gamma_x \rangle = \langle \dot{\kappa}, \gamma_y \rangle = 0. \]
Like convex-quadratic energies, linear constraints are particularly easy to work with — in the case of our flow, we can simply remove the component of \(\dot{\kappa}\) that sits in this “forbidden”
space. More specifically, suppose that this space is spanned by an orthonormal basis \(\{\hat{c}_i\}\). Then we can simply travel in the projected direction
\[ \dot{\kappa}_c = \dot{\kappa} - \sum_{i=1}^3 \langle \dot{\kappa}, \hat{c}_i \rangle \hat{c}_i. \]
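A sketch of this projection machinery, using a diagonal mass matrix in place of the inner-product matrix B (the function names are illustrative, not the skeleton's C++ API):

```python
def dot(u, v, mass):
    """Discrete L^2 inner product <u, v> = sum_i m_i u_i v_i, where
    m_i is the mass (dual length) attached to vertex i."""
    return sum(m * a * b for m, a, b in zip(mass, u, v))

def orthonormalize(constraints, mass):
    """Gram-Schmidt on the constraint vectors, with respect to the
    mass-weighted inner product (not the plain Euclidean dot!)."""
    basis = []
    for c in constraints:
        v = list(c)
        for b in basis:
            s = dot(v, b, mass)
            v = [vi - s * bi for vi, bi in zip(v, b)]
        n = dot(v, v, mass) ** 0.5
        basis.append([vi / n for vi in v])
    return basis

def enforce_constraints(kdot, basis, mass):
    """Project out the forbidden directions:
    kdot_c = kdot - sum_i <kdot, c_i> c_i."""
    out = list(kdot)
    for b in basis:
        s = dot(out, b, mass)
        out = [oi - s * bi for oi, bi in zip(out, b)]
    return out
```

After the projection, the result is orthogonal to every original constraint vector, which is exactly what keeps the loop closed.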
Coding 4.8
Implement the method IsometricWillmoreFlow1D::buildConstraints(), which constructs the three constraint directions \(1\), \(\gamma_x\), \(\gamma_y\) as dense column vectors.
Coding 4.9
Implement the method IsometricWillmoreFlow1D::orthogonalizeConstraints(), which builds an orthonormal basis \(\{\hat{c}_1,\hat{c}_2,\hat{c}_3\}\) spanning the same space as the three constraint
directions. Hint: use the Gram-Schmidt process — remember to use the correct inner product!
Coding 4.10
Implement the method IsometricWillmoreFlow1D::enforceConstraints(), which removes the forbidden directions from the flow using the orthogonal basis and the procedure outlined above. Try running the
isometric Willmore flow (you can switch to this flow in the GUI either by right-clicking to get the contextual menu, or by hitting the ‘i‘ key). Report the maximum stable time step that can be
achieved for each of the provided meshes. Does the flow preserve the turning number of each of the input curves? (In other words, did we faithfully capture the Whitney-Graustein theorem in the
discrete setting?) Try running the flow with and without constraints (by modifying IsometricWillmoreFlow1D::enforceConstraints()). What happens if you turn off all the constraints? Is there a strict
subset of the constraints that is sufficient to keep the loop closed, or are they all needed?
One thing you might have noticed about this new flow is that, while it still smooths out the curve, it looks very different from the one you implemented in the WillmoreFlow1D class. Why is there a
difference? In either case, aren’t we doing gradient descent on the same energy? Well, if you paid close attention, you might already know the answer: yes, the energy stays the same, but the metric
we used to define the gradient is different! (And if you really paid close attention, you may even know how to modify the second flow to make it look like the first one — and how to implement it!)
Beyond that, there are all sorts of nice ways to improve the algorithm that involve discrete Laplacians and Poisson equations and… you know what? You’ve worked hard enough already. Enjoy the break,
and see you next year!
Skeleton code: ddg_hw4_skeleton.zip
8 Responses to “Homework 4: Curvature Flow”
1. Homework 4 (which will serve as your final) has been assigned and is due at 5:30pm on Friday, December 14th. As you can probably tell there is a lot to do, so get started early!
Since I suspect there will be a lot of questions this time around, I will be holding office hours every work day until the 14th from 4-6pm in my office, 329 Annenberg.
Finally, the code has changed somewhat since the previous homework. I would suggest modifying the new Makefile using whatever flags, etc., were necessary for your previous assignments. If you
have any trouble, let it be known either here on the blog (which is more helpful to your fellow students) or via email.
Good luck!
2. Are there solutions available for the previous homeworks?
□ Yes – please come to office hours if you’d like a copy.
3. If you are having trouble compiling under Linux (due to new features), here’s a couple things:
1) Since this assignment uses GLSL shaders, if you’re having issues with OpenGL shader commands, just put
#define GL_GLEXT_PROTOTYPES
where applicable
2) If there’s issues with umfpack, add -lumfpack to your Makefile
□ Great, thanks for the info!
4. It may be useful to know that there is a method inner( x, B, y ) (declared in DenseMatrix.h) which evaluates the inner product with respect to a given matrix $B$. Using this routine, you should
be able to implement the routines IsometricWillmoreFlow1D::orthogonalizeConstraints() and IsometricWillmoreFlow1D::enforceConstraints() with a very small number of lines of code — something like
eight lines for the former and three lines for the latter (depending on your indentation style!).
5. I’m having some difficulty with the gradient expression in exercise 4.12 (which then gets used in exercise 4.14 and the coding exercise). Isn’t this an expression for the gradient of the unsigned
angle $|\varphi|$ rather than the signed angle $\varphi$?
□ You make a great point, Kevin. The function \(|\varphi|\) isn’t differentiable at zero, so it’s no surprise that the gradient of the unsigned angle is ill-defined when \(u\) and \(v\) are parallel.
If, as you suggest, you use the signed angle then things behave better — in this case, the gradient is something like
\[ \nabla_u \varphi = \frac{u^\perp}{|u^\perp|^2} \]
where \(u^\perp\) means \(u\) rotated a quarter-turn in the counter-clockwise direction.
For the assignment, you can get things to work by simply checking whether \(A\) is close to zero. But I’ll be sure to incorporate this observation into future versions of the assignment. | {"url":"http://brickisland.net/cs177fa12/?p=320","timestamp":"2014-04-17T10:05:06Z","content_type":null,"content_length":"56566","record_id":"<urn:uuid:1fedbbf1-7515-4f16-a9e3-ebcecaa4796c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00514-ip-10-147-4-33.ec2.internal.warc.gz"} |
ALEX Lesson Plans
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Title: The Geometry Around Us
Description: This is an introduction to geometry with a technology-based project in which students will go out looking for geometric shapes in the world around them. Students will capture pictures or
video of the objects they find using a digital camera/camcorder and put all of them together into a multimedia presentation. The students will then upload their projects to edmodo.com.
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Title: Let's Go Hunting!
Description: Students will work in groups of 3 or 4 scouring the campus for items of various shapes, sizes and angles. The items that students will be looking for are outlined on the rubric. Students
will use technology to create a creative digital presentation representing all of their captured scenes. Students will work in class to create this presentation and then be prepared to present it on
the assigned viewing date.
Subject: Mathematics (9 - 12), or Science (9 - 12)
Title: Minerals
Description: The students will gain information on the 5 characteristics of minerals. The information can be related to nonrenewable resources. This lesson should facilitate discussion on the
difference in precious gems and semi-precious gems.This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation.
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Title: I Can Determine The Height Of A Rocket!
Description: The lesson is intended to give students a fun real-world experience in applying their math skills. They will use trigonometric ratios to calculate heights of tall structures. They will
also use the Internet to convert their calculations from standard to metric units and vice versa.
Subject: Mathematics (7 - 12), or Technology Education (9 - 12)
Title: Water Tank Creations Part I
Description: In this lesson students will study the surface area and volume of three-dimensional shapes by creating a water tank composed of these shapes. Students will work in groups of 4-5 to
research water tanks, develop scale drawings and build a scale model. The teacher will evaluate the project using a rubric and students will assess one another's cooperative skills using a rubric.
Subject: Mathematics (7 - 12), or Technology Education (9 - 12)
Title: Creating a Water Tank - Part II "Selling the Tank"
Description: Working in groups of 4-5 students will take the information,pictures and 3-D model of the water tank they assembled in Part I of Creating a Water Tank and develop a web page and a video
presentation. The web page will be a tool to advertise their water tank construction company and must include hyperlinks and digital pictures. The video presentation will be a "sales pitch" to a city
council. The web page and video will be scored using a rubric. The web page and video must include the surface area, volume and cost of construction.
Subject: Mathematics (6 - 12)
Title: Swimming Pool Math
Description: Students will use a swimming pool example to practice finding perimeter and area of different rectangles.
Thinkfinity Lesson Plans
Subject: Mathematics
Title: Percent Slope Tool
Description: This reproducible activity, from an Illuminations lesson, provides a template by which students can create a tool for calculating the slope of real-world inclines.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Building Height
Description: In this Illuminations lesson, students use a clinometer (a measuring device built from a protractor) and isosceles right triangles to find the height of a building. The class compares
measurements, talks about the variation in their results, and selects the best measure of central tendency to report the most accurate height.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Cubes Everywhere
Description: In this Illuminations lesson, students use cubes to develop spatial thinking and review basic geometric principles through real-life applications. Students are given the opportunity to
build and take apart structures based on cubes.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Circle Packing
Description: In this unit of three Illuminations lessons, students explore circles. In the first lesson students apply the concepts of area and circumference to explore arrangements for soda cans
that lead to a more efficient package. In the second lesson they then experiment with three-dimensional arrangements to discover the effect of gravity on the arrangement of soda cans. The final
lesson allows students to examine the more advanced mathematical concept of curvature. There are also links to online interactives that are used in the lessons.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Soda Rack
Description: In this lesson, one of a three-part unit from Illuminations, students consider the arrangement of cans placed in a bin with two vertical sides and discover an interesting result. They
then prove their conjectures about the interesting results. In addition, there are links to online activity sheets and other related resources.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Triangula Island
Description: This student reproducible, from an Illuminations lesson, contains an activity that asks students to conjecture the best location of a point inside a regular triangle such that the sum of
the distances to each side is a minimum.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Circle Packing and Curvature
Description: In this lesson, one of a three-part unit from Illuminations, students investigate the curvature of circles. Students apply definitions and theorems regarding curvature to solve circle
problems. In addition, there are links to an online activity sheet and other related resources.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Location, Location, Location
Description: In this Illuminations lesson, students use a dynamic geometry applet to investigate the relationship between the distances from a point inside a regular polygon to each side. In
addition, there are links to online activity sheets and other related resources.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Triangula Island Overhead
Description: This reproducible transparency, from an Illuminations lesson, contains an activity that asks students to conjecture the best location of a point inside a regular polygon such that the
sum of the distances to each side is a minimum.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Soda Cans
Description: In this lesson, one of a three-part unit from Illuminations, students investigate various designs for packaging soda cans and use geometry to analyze their designs. Students work to
create more efficient arrangements that require less packaging material than the traditional rectangular arrays. In addition, there are links to online activity sheets and other related resources.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Soda Cans
Description: This reproducible activity sheet, from an Illuminations lesson, guides students through a simulation in which they try different arrangements to make the most efficient use of space and
thus pack the most soda cans into a rectangular packing box.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Web Resources
Geometry Pad App
Create geometric shapes, explore/change their properties, and calculate metrics.
Thinkfinity Learning Activities
Subject: Mathematics
Title: Fractal Tool
Description: This student interactive, from Illuminations, illustrates iteration graphically. Students can view preset iterations of various shapes and/or choose to create their own iterations.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8,9,10,11,12
Subject: Mathematics
Title: Canada Data Map
Description: Investigate data for the Canadian provinces and territories with this interactive tool. Students can examine data sets contained within the interactive, or they can enter their own data.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8,9,10,11,12
Subject: Mathematics
Title: Flowing Through Mathematics
Description: This student interactive, from Illuminations, simulates water flowing from a tube through a hole in the bottom. The diameter of the hole can be adjusted and data can be gathered for the
height or volume of water in the tube at any time.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12 | {"url":"http://alex.state.al.us/all.php?std_id=54237","timestamp":"2014-04-17T18:34:08Z","content_type":null,"content_length":"126763","record_id":"<urn:uuid:d29c9d74-ad5c-47a1-bd7b-cd35d93d6ba6>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00159-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATH GAMES FOR ADULT AND CHILD
HOW MUCH IS ...?
TOPIC and LEVEL: Addition/Subtraction: Intermediate, Advanced
PLAY AFTER: HOW MANY WOULD YOU LIKE?, SHOW ME, STONES ON MY LEGS
PLAY WITH: STONES ON MY LEGS
Key Questions:
• How many are ...
• How much is 2 plus 3?
• How much is 7 minus 1?
• How much is 4 plus 8 plus 1?
• How much is 5 million plus 2 million?
• How much is four dollars plus two dollars?
• How much is ten cents plus two cents plus five cents?
• How much is one half plus one half?
• How much is three more than two?
The types of questions asked in HOW MUCH IS ...? include:
1. How many are ...
2. How much is 2 plus 3? How much is 7 minus 1?
3. How much is 4 plus 8 plus 1?
4. How much is 5 million plus 2 million?
5. Others
Types 1 and 2 need no explanation.
Types 3 and 4, I recommend highly and prefer this type of question even for older children because mental computation of this sort is so important.
Type 3 includes questions involving more than two numbers. Questions range from "how much is two plus one plus one more," (which is a difficult question in itself), to "how much is one plus one plus
one" to "how much is five plus four, then, take one away from this?"
If a child says he or she doesn't know the answer, try STONES ON MY LEGS and slowing down the speed with which the question is posed before you try doing any sort of explanation or forgetting about
that kind of question.
"How much is five million plus two million?" is typical of a lovely group of questions. Listen.
The Adult: "How much is five plus three?"
The Child: "Eight."
The Adult: "How much is ten minus one?"
The Child: "Nine."
The Adult: "How much is two hundred plus three hundred?"
The Child: "Five hundred, maybe."
The Adult: "That's right. How much is four hundred plus three hundred?"
The Child: "Seven hundred."
The Adult: "Good. How much is five million plus three million?"
The Child: "Eight million?"
The Adult: "That's right."
The Child is quite pleased with herself.
The Adult: "How much is nine million plus two million?"
The Child: "Eleven million." ...
Then or perhaps,
The Adult: "How much is four dollars plus two dollars?"
The Child: "Six dollars."
The Adult: "How much is nine dollars minus three dollars?"
The Child: "Six dollars." ...
Or perhaps,
The Child: "How much is three dollars plus two dollars plus three dollars?"
The Adult: "Eight dollars."
The Child: "How much is ten cents plus two cents plus five cents?"
The Adult: "Seventeen cents."
If a child asks a question he or she can't answer, don't be surprised. One might explain the problem through rearranging the addends or the use of concrete objects or mental images, such as stones,
or one might offer the child a calculator. Either is desirable.
Other questions, as identified by type 5, take two separate forms: "How much is one half plus one half?" and "How much is three more than two?" THESE ARE NOT REALLY QUESTIONS FOR THE YOUNG CHILD, but
rather for a child who has experience with BATHTUB ACTIVITIES or discussing FILL IT vocabulary. "How much is one-half plus one-half?" or "how much is one-fourth plus one-fourth?" are really hard
questions for a young child. Don't try them as part of a mental arithmetic game unless a concrete experience foundation has preceded it and has been successful.
Children may gain a familiarity with fractions through measurements of time: two Mr. Rogers (half-hour shows) are as long as one Knight Rider (an hour-long show); measurements of money: two
half-dollars have the same value as one dollar; or, more likely, four quarters have the same value as one dollar; or measurements of amount: a pile of four things has only half as many things as a
pile of eight things. A child's age is very important to him or her. Try using terms like five and one-half, five and one-quarter, five and three-quarters, or even five and one-twelfth or five and
seven-twelfths when teaching or discussing a child's age.
Of course, encounters with the use of fractions will only become valuable examples if, as they occur, some adult takes the time to explain the meaning of the words or the examples and is prepared to
answer the questions later arising from the experiences.
Questions like: "how much is three more than two?" or, "how much is five bigger than seven?" or, "how many things are in two piles of six things?" require knowledge of the phrases "is five bigger
than." These ideas and questions may be considered by some young children, but most young children will not find them understandable or entertaining. It is much better to leave a child with positive
experiences than to frustrate him or her with questions clearly too difficult to answer. | {"url":"http://www.mathnstuff.com/math/games/mg12.htm","timestamp":"2014-04-16T23:00:39Z","content_type":null,"content_length":"7633","record_id":"<urn:uuid:45a63768-9703-49fe-b725-668461ab94b7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear system: am I choosing the good procedure? Thanks!
March 12th 2011, 05:58 AM #1
Feb 2011
Hi. I'm trying to understand some functions of production from an economics book by Piero Sraffa. Since this is an algebraic question, I have not put it into "business math". Hope I did well.
On page 3 Sraffa writes:
280 quarters of wheat and 12 tons of iron are used to produce 400 qr. of wheat. 120 quarters of wheat and 8 tons of iron are used to produce 20 tons of iron.
280 qr. wheat + 12 t iron --> 400 qr. wheat
120 qr. wheat + 8 t iron --> 20 t iron
Next, he writes: «There is a unique set of exchange values which if adopted by the market restores the original distribution of the product and makes it possible for the process to be repeated.
This set is: 1 ton of iron for 10 quarters of wheat.»
It was quite easy for me to get this result by myself.
280 w + 12 i = 400 w
120 w + 8 i = 20 i
12 i = 120 w => 1 i = 10 w
After that, he makes an example with 3 goods, adding pigs
He says that we have the next functions of production:
240 qr. wheat + 12 t of iron + 18 pigs --> 450 qr. wheat
90 qr. wheat + 6 t of iron + 12 pigs --> 21 tons of iron
120 qr. wheat + 3 t of iron + 30 pigs --> 60 pigs
And he adds that the only exchange value set in this case is the following:
10 qr. wheat = 1 t iron = 2 pigs
Now... I'm trying to get the same result by using a system of linear equations, as in the 2 goods model, but I can't get this result. Should I change my system to solve the equations? Or is the
linear system a good solution and maybe I'm just a little rusty with algebra and calculations?
Of course I'm not asking for the solution, I'm just asking if, in your opinion, it's a good idea to use a system like this:
Thank you in advance. | {"url":"http://mathhelpforum.com/algebra/174340-linear-system-am-i-choosing-good-procedure-thanks.html","timestamp":"2014-04-17T10:12:55Z","content_type":null,"content_length":"31909","record_id":"<urn:uuid:e58e2b27-d942-42e2-8bf0-a49f55401405>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
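For what it's worth, the stated ratios can be checked with a linear system exactly as in the two-good case: normalize w = 1, solve the first two equations by elimination, and verify the third. A sketch with exact rational arithmetic (the variable names are mine):

```python
from fractions import Fraction as F

# Sraffa's three-good example, with unknown exchange values
# w (wheat), i (iron), p (pigs).  Requiring that each industry's
# inputs and outputs have equal value gives a homogeneous system:
#   240 w + 12 i + 18 p = 450 w
#    90 w +  6 i + 12 p =  21 i
#   120 w +  3 i + 30 p =  60 p
# Prices are only determined up to a common scale, so normalize
# w = 1 and solve the first two equations by elimination.
w = F(1)
# eq1 reduces to 2 i + 3 p = 35, eq2 to 5 i - 4 p = 30;
# eliminating p via 4*eq1 + 3*eq2 gives 23 i = 230.
i = F(4 * 35 + 3 * 30, 4 * 2 + 3 * 5)   # i = 10
p = (F(35) - 2 * i) / 3                  # p = 5
# The third equation is then satisfied automatically, and the ratios
# are exactly Sraffa's: 10 qr. wheat = 1 t iron = 2 pigs.
```

So a linear system is indeed the right tool; the only subtlety is that the system is homogeneous, so you must fix a numéraire (here w = 1) before solving.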
equivalence of submodules
I have Z^3/M = Z^3/N = Z_k where M, N are submodules of Z^3 and Z_k is cyclic of order k.
I would like to say some SL_3(Z) transformation takes M to N. Is this true? How to show?
1 Answer
It is enough to show that
if $M\subseteq \mathbb Z^3$ is a subgroup such that $\mathbb Z^3/M$ is a cyclic group of order $k$, then there exists $g\in\mathrm{SL}(3,\mathbb Z)$ such that $g(M)=\langle e_1,e_2,ke_3\rangle$, where $e_1,e_2,e_3$ denotes the standard basis of $\mathbb Z^3$.
Let $M\subseteq \mathbb Z^3$ be a subgroup such that $\mathbb Z^3/M$ is a cyclic group of order $k$. Then $M$ is free of rank $3$, and there exists $A\in M(3,\mathbb Z)$ such that $M=A
\cdot\mathbb Z^3$. Using the Smith normal form, we know that there exist $3\times 3$ matrices $P$ and $Q$, invertible over $\mathbb Z$, such that $PAQ=D$ with $D=\left(\begin{smallmatrix}a\\&b\\&&c\end{smallmatrix}\right)$ and $a\mid b\mid c$. Then $PM=PAQ\mathbb Z^3=D\mathbb Z^3$.
It follows that $P\in\mathrm{SL}(3,\mathbb Z)$ is such that $PM$ is generated by $(a,0,0)$, $(0,b,0)$ and $(0,0,c)$ with $a\mid b\mid c$. Since $\mathbb Z^3/PM$ is cyclic of order $k$, we must have $a=b=1$ and $c=k$. This tells us that the claim above is true.
(I've done everything at the level of generality which your problem needs, and I'll leave the fun of finding the correct general statement for you...)
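As a concrete companion to the Smith-normal-form argument, the invariant factors of a small integer matrix can be read off from its determinantal divisors. A sketch (my own illustration, assuming A is nonsingular):

```python
from math import gcd
from itertools import combinations

def invariant_factors_3x3(A):
    """Invariant factors (d1, d2, d3) of an integer 3x3 matrix with
    nonzero determinant, via determinantal divisors:
      D1 = gcd of all entries, D2 = gcd of all 2x2 minors, D3 = |det|,
    with d_k = D_k / D_{k-1}.  The diagonal of the Smith normal form
    is (d1, d2, d3), so Z^3 / (A Z^3) is cyclic exactly when
    d1 = d2 = 1."""
    D1 = 0
    for row in A:
        for e in row:
            D1 = gcd(D1, e)
    D2 = 0
    for r in combinations(range(3), 2):
        for c in combinations(range(3), 2):
            minor = (A[r[0]][c[0]] * A[r[1]][c[1]]
                     - A[r[0]][c[1]] * A[r[1]][c[0]])
            D2 = gcd(D2, minor)
    D3 = abs(A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
             - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
             + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
    return D1, D2 // D1, D3 // D2
```

For instance, A = [[2,1,0],[1,3,1],[0,1,4]] has invariant factors (1, 1, 18), so its quotient is cyclic of order 18, while diag(2, 4, 6) has invariant factors (2, 2, 12) and a non-cyclic quotient.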
Just a comment, Mariano -- it's much awesomer, when we get questions that are "too localized" like this, if you give an extremely general answer, than if you, ahem, "do someone's
homework". – Scott Morrison♦ Feb 12 '10 at 5:13
This answer was very helpful, particularly pointing me to Smith normal form. Should I be explaining how this isn't homework? Or disguising my question in a form that's more general?
– AndrewLMarshall Feb 12 '10 at 12:11
@mathuni, oh, probably just ignore me, I was being grumpy, sorry. I guess the better advice is: when you're asking a question that could be mistaken as homework, provide a little
more context, background or motivation, just so we can all recognize the question immediately as legitimate use of the site. Many people here feel that "homework questions" are abuse
of the site, and react poorly. I shouldn't have in this case, but did. – Scott Morrison♦ Feb 12 '10 at 16:39
I understand completely. This little speed bump came up as I'm trying to classify a certain set of symplectic toric orbifolds by looking at primitive vectors at vertices of the
moment polytope in Z^3. Of course, none of that's relevant to solving this bit here, but I see why stating the context makes it more legitimate. – AndrewLMarshall Feb 12 '10 at 23:35
Applications of the Roggenkamp-Scott theorem ?
In 1987 Roggenkamp and Scott published a solution of the integral isomorphism problem for $p$-groups, i.e. if $G,H$ are $p$-groups and $\mathbb{Z}[G] \cong \mathbb{Z}[H]$ as rings then $G \cong H$.
However, in practice I guess it is at least as hard to show that two group rings aren't isomorphic as it is to show that the groups themselves aren't isomorphic. Therefore I wonder if this theorem (or one of
its variants or generalizations) has found applications in group theory. Any idea?
1 Answer
The Annals paper by Roggenkamp and Scott was certainly a landmark in the ongoing study of the isomorphism problem for integral group rings of finite groups, which apparently goes back
to the thesis work of Graham Higman and later related work by Richard Brauer. Zassenhaus refined and extended the underlying problem of whether two finite groups with isomorphic group
rings over $\mathbb{Z}$ must necessarily be isomorphic.
I'm not at all a specialist in this line of work, which has spawned numerous papers and at least one book, including positive and negative answers to versions of the original problem.
But as far as I know the question itself is mainly theoretical (though quite natural), not likely to have direct concrete applications one way or the other. Rather, the "applications" would involve related areas of integral representation theory and possibly algebraic topology where integral group rings come up naturally.
Eventually in a 2001 Annals paper, Martin Hertweck arrived at a negative answer to the initial problem: see the extensive review by Donald Passman in Mathematical Reviews. But questions
of this type continue to be explored.
Thanks for your answer. Do you know the title of the book on the isomorphism problem you mentioned ? – Todd Leason May 25 '12 at 23:00
My recollection (not precise) is that S.K. Sehgal wrote an older book, which is probably outdated in many directions, but also co-authored a more recent textbook which includes much
of the work on group rings and related matters. – Jim Humphreys May 26 '12 at 0:07
Roughly speaking (for one possible direction of application), it is sometimes possible to show the existence of central units in integral or modular group rings which, if they could be
shown to be genuine group elements, would give a proof of some interesting general group-theoretic conjectures. – Geoff Robinson May 26 '12 at 5:36
@Geoff: Thanks for the wider perspective on this area of research. I was commenting more narrowly just on the positive solution of the isomorphism problem for group rings given for
certain groups by Roggenkamp and Scott (though their work goes farther than this problem). – Jim Humphreys May 26 '12 at 18:40
Changing the subject of formula with squares and roots
September 27th 2009, 07:10 AM #1
I have tried answering the following question by squaring everything (I think), but I am not right.
c = $\sqrt{a^2+b^2}$ (Change the subject of
the formula to a)
My try:
$c^2 = (a^2)^2+(b^2)^2$
$c^2 = a^4+b^4$
$a^4 = c^2-b^4$
$a = \sqrt[4]{c^2-b^4}$
Can you see my error please?
You are correct to square both sides to get rid of the square root, but you worked it out wrongly: squaring $\sqrt{a^2+b^2}$ gives $a^2+b^2$, not $(a^2)^2+(b^2)^2$. So $c^2 = a^2 + b^2$, and
$a=\sqrt{c^2-b^2}$
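Not part of the original thread — a quick numeric check of the correct rearrangement, using made-up values $a = 3$, $b = 4$:

```python
import math

a, b = 3.0, 4.0
c = math.sqrt(a**2 + b**2)           # c = 5.0, since squaring gives a^2 + b^2

# Rearranged for a: square both sides, subtract b^2, take the square root.
a_recovered = math.sqrt(c**2 - b**2)

print(c, a_recovered)  # → 5.0 3.0
```

The wrong rearrangement $\sqrt[4]{c^2-b^4}$ would give $\sqrt[4]{25-256}$ here, which is not even real.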
Re: st: conditional logistic
From wgould@stata.com (William Gould, StataCorp LP)
To statalist@hsphsun2.harvard.edu
Subject Re: st: conditional logistic
Date Thu, 25 Oct 2007 09:55:55 -0500
Ricardo Ovaldia <ovaldia@yahoo.com> asks,
> What is the difference between conditional logistic
> regression grouping on clinic and unconditional
> logistic regression including clinic as a dummy
> (indicator) variable? That is, what is the difference
> in model assumptions and parameter estimates?
The difference is that the logistic regression estimates are inconsistent
and bad.
Let's deal with inconsistent first. Think of what happens as the number of
observations goes to infinity. Let's denote the number of clinics as n and,
just to make things easy, let's assume the number of observations within
clinic is the same for each clinic, and is m. Then the total number of
observations is N = n*m.
What happens as N->infinity? Presumably, the number of clinics increases.
In this thought experiment, you are presumably imagining a replication
of the world as we observe it, with clinics serving roughly the same
number of patients, so as the number of patients grows, so does the number of
clinics. Said in our notation, we are imagining n going to infinity and
m remaining constant. In standard logistic regression, that means we are
estimating n-1 coefficients for the clinics. The number of coefficients
is increasing at the same rate as the number of observations, with the
result that there is no convergence to the usual statistical properties
you are used to estimators having.
This may sound arcane, but it isn't, as you can show via simulation. Even
easier, however, is to think about a simpler problem. Consider standard
logistic regression with a standard problem -- no clinics, nothing odd. We'll
assume one RHS variable, say sex. It will not surprise you to hear that with
just 4 observations, the estimates produced by the standard logistic
regression estimator are bad. The estimates would turn good if we added
more observations, but it turns out that with just 4, the asymptotics have not
yet kicked in and the estimates produced by the standard logistic regression
estimator are bad, not merely poor. By poor, I mean noisy. By bad, I mean
biased, wrong, and having no good properties.
Now let's consider the clinic. Let's pretend we have 1,000 clinics and
4 observations per clinic. What running
. xi: logistic outcome sex i.clinic
amounts to is running separate logistic regressions for each clinic, but with
the constraint that the coefficient on sex is the same across them. I just
told you that with 4 observations, standard logistic is bad. Combining 1,000
bad results does not improve them; they are still bad. If the results were
merely poor -- noisy -- then combining them would help, but that's not our case.
On the other hand, if by N = n*m -> infinity we held n constant and let
m->infinity, we would get good results. By m going to infinity, you will have
a world in which the number of clinics remains fixed but the number of
observations within clinic increases. Under those circumstances, each
logistic regression would turn good once m got large enough, and combining
the results will make them even better.
So does it matter which thought experiment is in your mind? No. Whether you
imagine n->infinity or m->infinity, if you have m=4, you have insufficient
observations for the standard logistic regression estimator, and results will be
bad. If you have m=20, then in most circumstances you do have sufficient
observations for the logistic estimator to work. But if you were to get more
data and the first thought experiment is the correct one, meaning the number
of clinics increases, the estimates will not get better, and that should
disturb you. More data usually means better estimates.
Due to mathematical trickery, the conditional logistic estimator does not
estimate the individual coefficients for each clinic and so avoids the problem
of the number of estimates increasing at the same rate as the number of
observations goes to infinity regardless of the decomposition of the increase.
I told you that, with just 4 observations, standard logistic regression is
bad. So would be the conditional logistic regression with just one clinic.
But unlike the standard logistic estimator, if you hold the size of clinics
constant and increase the number of them, results get better and better.
Give me a dataset with 20 clinics, and in most cases, I'm in asymptopia.
Results are trustworthy and, given more data, they just get better and better.
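An illustrative simulation sketch (my addition, not part of the original message) of the extreme case m = 2: matched pairs with one patient of each sex per clinic. The conditional-logistic MLE of the sex coefficient has the closed form log(n10/n01) from the discordant pairs, and a classical incidental-parameters result for this design is that the dummy-variable MLE converges to exactly twice the true coefficient -- adding more clinics never fixes it:

```python
import math
import random

random.seed(1)
beta = 1.0         # true log odds-ratio for sex
n_clinics = 20000  # clinics, each with m = 2 patients: one of each sex

n10 = n01 = 0      # discordant-pair counts
for _ in range(n_clinics):
    a = random.gauss(0.0, 1.0)                # clinic-specific intercept
    p0 = 1.0 / (1.0 + math.exp(-a))           # P(outcome = 1 | sex = 0)
    p1 = 1.0 / (1.0 + math.exp(-(a + beta)))  # P(outcome = 1 | sex = 1)
    y0 = random.random() < p0
    y1 = random.random() < p1
    if y1 and not y0:
        n10 += 1
    elif y0 and not y1:
        n01 += 1

beta_clogit = math.log(n10 / n01)  # conditional-logistic MLE (closed form here)
beta_dummy = 2.0 * beta_clogit     # dummy-variable MLE (known closed form here)
print(beta_clogit, beta_dummy)     # roughly 1.0 and 2.0
```

The conditional estimate recovers the truth; the dummy-variable estimate stays biased by a factor of two no matter how many clinics are added.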
-- Bill
P.S. Let me add a footnote to the argument above. The footnote is
unimportant for the argument made, but is important in linear
regression problems.
The gist of the problem in the standard logistic regression estimator
is that the number of estimated parameters increases at the same
rate as the number of observations. The same could be said of
the linear regression estimator and yet there is no problem because
of it. Why? Because in the LR estimator, the problem of estimating
the clinic intercepts can be separated from the problem of estimating
the sex coefficient. It just turns out that way because of the
linear nature of the linear-regression estimator. The same is not
true of logistic.
The logic, "if the number of estimates increases at the same rate as
the number of observations, there will be problems" is generally true,
the exception being cases where there is a particular kind of
separability, which happens only in the linear case.
Creators: Chiu, Lue-Yung Chow; Moharerrzadeh, Mohammad
Issue Date: 1995
Abstract: Four-center integrals of a general two-electron irregular solid spherical harmonic operator, i.e. $Q_{lm}(r_{12})=\sqrt{\frac{4\pi}{2l+1}}Y_{lm}(\hat{r}_{12})r^{-(l+1)}_{12}$, over the homogeneous solid spherical harmonic Gaussian-type functions, i.e. $r^{2n_{\alpha}+l_{\alpha}}_{i\alpha} Y_{l_{\alpha} m_{\alpha}}(\hat{r}_{i\alpha})\exp(-\alpha r^{2}_{i\alpha})$ ($i = 1$ or $2$; $\alpha = a, b, c$, or $d$), have been evaluated analytically. When $l = 2$, $1$ or $0$, the operator $Q_{lm}(r_{12})$ is respectively the operator for spin-spin interaction, spin-other-orbit interaction, and Coulomb repulsive interaction. Through coincidence of centers, the four-center integral is first transformed into a linear combination of two-center integrals, which are then integrated analytically by the Fourier transform convolution theorem. The integral results are in terms of nuclear wave functions of the relative coordinates. All of the nuclear wave functions are in the format of spherical Laguerre Gaussian-type functions, except one term which is the product of a solid spherical harmonic and an F-function (error-type function). The expressions, which are similar to those obtained by the Talmi transformation, are simpler than the previous results obtained by the expansion method.$^{1}$ Two-center and three-center overlap, three-center Coulomb repulsion, and three-center nuclear attraction integrals needed in the context of the density functional formalism have also been integrated explicitly.
URI: http://hdl.handle.net/1811/29602
Other 1995-RC-11
just a quick particular integral question..
May 22nd 2009, 06:34 AM #1
Sep 2007
just a quick particular integral question..
If I have a problem...

$y'' + 2y' + 4 = x^2 e^{-2x}$

would the substitution be..

$y(x) = e^{-2x} v$

I know it would be if the right-hand side were $xe^{-2x}$...
Quick question: is it supposed to be

$y'' + 2y' + 4{\color{red}y} = x^2 e^{-2x}$

or what you have typed above?
It's not clear what you are asking. You seem to be conflating "variation of parameters" and "undetermined coefficients".

Assuming you mean $y'' + 2y' + 4y = x^2e^{-2x}$: the characteristic equation $r^2 + 2r + 4 = 0$ has roots $-1 \pm i\sqrt{3}$, so the associated homogeneous equation has $e^{-x}\cos(\sqrt{3}x)$ and $e^{-x}\sin(\sqrt{3}x)$ as independent solutions. For "variation of parameters" you would use $y(x) = u(x)e^{-x}\cos(\sqrt{3}x) + v(x)e^{-x}\sin(\sqrt{3}x)$. If you are referring to "undetermined coefficients", then since $-2$ is not a characteristic root you would try $y(x) = (Ax^2 + Bx + C)e^{-2x}$.
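Not part of the thread — a worked check of the undetermined-coefficients route, assuming the intended equation is $y'' + 2y' + 4y = x^2 e^{-2x}$. Since $-2$ is not a root of $r^2 + 2r + 4 = 0$, the trial solution is $y_p = (Ax^2 + Bx + C)e^{-2x}$:

```latex
% Write y_p = P(x) e^{-2x} with P(x) = Ax^2 + Bx + C.
% Then y_p' = (P' - 2P) e^{-2x} and y_p'' = (P'' - 4P' + 4P) e^{-2x}, so
y_p'' + 2 y_p' + 4 y_p
  = \left( P'' - 2P' + 4P \right) e^{-2x}
  = \left( 4Ax^2 + (4B - 4A)x + (2A - 2B + 4C) \right) e^{-2x}.
% Matching x^2 e^{-2x}: \quad 4A = 1, \quad 4B - 4A = 0, \quad 2A - 2B + 4C = 0,
% giving A = B = \tfrac{1}{4}, \; C = 0, hence y_p = \tfrac{1}{4}(x^2 + x) e^{-2x}.
```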
I highly recommend these beautifully written texts.
This was my introduction to the meme of Darwinism, and to memes themselves. I read it when I was 13 or 14 years old. My world view was fundamentally changed ever since.
I read this book when I was 18 years old. I liked the book so much that I bought many copies of this book and gave it to my friends as a present. It motivated me to study logic and computation theory
as a means to understand the mind. Although the core idea is flawed, the book overall brought me great joy in thinking about what human minds can do, and how they can do it.
In times of despair, when I thought I couldn't understand this seemingly illogical world and was frustrated by its complexity, this book spoke to me dearly. I was 19 or 20 years old.
Before this book, I was a pure reductionist (since I was little; my father is a physicist), trying to understand the world by going into the smaller scale of things. Now, I also think about what
abstraction can bring to the table—understanding in a different, more humane level. I was in graduate school when it came out.
I (Memming) presented Eliasmith et al. “A Large-Scale Model of the Functioning Brain” Science 2012 for our computational neuroscience journal club. The authors combined their past efforts for
building various modules for solving cognitive tasks to build a large-scale spiking neuron model called SPAUN.
They built a downloadable network of 2.5 million spiking neurons (leaky-integrate-and-fire (LIF) units) that has a visual system for static images, working memory for sequence of symbols (mostly
numbers), a motor system for drawing numbers, and performs 8 different tasks without modification. I was impressed by the tasks it performed (video). But I must say I was disappointed after I found out
that it was “designed” to solve each problem by the authors, and combined with a central control unit (basal ganglia) which uses its “subroutines” to solve. Except for the small set of weights
specific for the reward task, the network has…
NIPS 2012 (proceedings) was held in Lake Tahoe, right next to the state line between California and Nevada. Despite the casinos all around the area, it was a great conference: a lot of things to
learn, and a lot of people to meet. My keywords for NIPS 2012 are deep learning, spectral learning, nonparanormal distribution, nonparametric Bayesian, negative binomial, graphical models, rank, and
MDP/POMDP. Below are my notes on the topics that interested me. Also check out these great blog posts about the event by Dirk Van den Poel (@dirkvandenpoel), Yisong Yue (@yisongyue), John
Moeller, Evan Archer, Hal Daume III.
Optimal kernel choice for large-scale two-sample tests
A. Gretton, B. Sriperumbudur, D. Sejdinovic, H. Strathmann, S. Balakrishnan, M. Pontil, K. Fukumizu
This is an improvement over the maximum mean discrepancy (MMD), a divergence statistic for hypothesis testing using reproducing kernel Hilbert spaces. The statistical power of the test depends on the
choice of kernel, and previously, it was shown that taking the max value over multiple kernels still results in a divergence. Here they linearly combine kernels to maximize the statistical power in
linear time, using normal approximation of the test statistic. The disadvantage is that it requires more data for cross-validation.
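A minimal sketch (my illustration, not the paper's code) of the linear-time MMD statistic with a single Gaussian kernel — the quantity whose kernel weights the paper learns to maximize test power:

```python
import math
import random

def gauss_k(a, b, sigma=1.0):
    """Gaussian kernel on scalars."""
    return math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

def mmd2_linear(xs, ys, sigma=1.0):
    """Linear-time MMD^2 estimate: average h over disjoint sample pairs."""
    m = min(len(xs), len(ys)) // 2
    h_sum = 0.0
    for i in range(m):
        x1, x2 = xs[2 * i], xs[2 * i + 1]
        y1, y2 = ys[2 * i], ys[2 * i + 1]
        h_sum += (gauss_k(x1, x2, sigma) + gauss_k(y1, y2, sigma)
                  - gauss_k(x1, y2, sigma) - gauss_k(x2, y1, sigma))
    return h_sum / m

random.seed(0)
same = mmd2_linear([random.gauss(0, 1) for _ in range(4000)],
                   [random.gauss(0, 1) for _ in range(4000)])
diff = mmd2_linear([random.gauss(0, 1) for _ in range(4000)],
                   [random.gauss(2, 1) for _ in range(4000)])
print(same, diff)  # same distribution: near 0; shifted distribution: clearly positive
```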
Efficient coding provides a direct link between prior and likelihood in perceptual Bayesian inference
Xue-Xin Wei, Alan Stocker
Several biases observed in psychophysics show repulsion from the mode of the prior, which seems counter-intuitive if we assume the brain is performing Bayesian inference. They show that this could be due to
asymmetric likelihood functions that originate from the efficient coding principle. The tuning curves, and hence the likelihood functions, under the efficient coding hypothesis are constrained by the
prior, reducing the degree of freedom for the Bayesian interpretation of perception. They show asymmetric likelihood could happen under a wide range of circumstances, and claim that repulsive bias
should be observed. Also they predict additive noise in the stimulus should decrease this effect.
Spiking and saturating dendrites differentially expand single neuron computation capacity
Romain Cazé, M. Humphries, B. Gutkin
Romain showed that boolean functions can be implemented by active dendrites. Neurons that generate dendritic spikes can be considered as a collection of AND gates, hence disjunctive normal form (DNF)
can be directly implemented using the threshold in soma as the final stage. Similarly, saturating dendrites (inhibitory neurons) can be treated as OR gates, thus CNF can be implemented.
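A toy sketch (mine, not from the talk) of the DNF picture: each dendrite acts as an AND gate via a local spike threshold, and the soma ORs the dendritic spikes — here computing XOR, which no single linear threshold unit can:

```python
def dendrite_and(inputs, weights, theta):
    """Dendritic spike: fires iff summed synaptic drive reaches a local threshold."""
    return sum(w * x for w, x in zip(weights, inputs)) >= theta

def neuron_dnf(x1, x2):
    """Soma fires if any dendrite spikes: (x1 AND NOT x2) OR (NOT x1 AND x2) = XOR."""
    d1 = dendrite_and((x1, x2), (1, -1), 1)   # x1 AND NOT x2
    d2 = dendrite_and((x1, x2), (-1, 1), 1)   # NOT x1 AND x2
    return int(d1 or d2)

print([neuron_dnf(a, b) for a in (0, 1) for b in (0, 1)])  # → [0, 1, 1, 0]
```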
Coding efficiency and detectability of rate fluctuations with non-Poisson neuronal firing
Shinsuke Koyama
Hypothesis testing of whether the rate is constant or not for a renewal neuron can be done by decoding the rate from spike trains using empirical Bayes (EB). If the hyperparameter for the roughness
is inferred to be zero by EB, the null hypothesis is accepted. Shinsuke derived a theoretical condition for the rejection based on the KL-divergence.
The coloured noise expansion and parameter estimation of diffusion processes
Simon Lyons, Amos Storkey, Simo Sarkka
For a continuous analogue of a nonlinear ARMA model, estimating parameters for stochastic differential equations is difficult. They approach it by using a truncated smooth basis expansion of the
white noise process. The resulting colored noise is used for an MCMC sampling scheme.
Bayesian estimation of discrete entropy with mixtures of stick-breaking priors
Evan Archer*, Il Memming Park*, Jonathan W. Pillow (*equally contributed, equally presented)
Diffusion decision making for adaptive k-Nearest Neighbor Classification
Yung-Kyun Noh, F. C. Park, Daniel D. Lee
An interesting connection between sequential probability ratio test (Wald test) for homogeneous Poisson process with two different rates and k-nearest neighbor (k-NN) classification is established by
the authors. The main assumption is that each class density is smooth, thus in the limit of large samples, distribution of NN follows a (spatial) Poisson process. Using this connection, several
adaptive k-NN strategies are proposed motivated from Wald test.
TCA: High dimensional principal component analysis for non-gaussian data
F. Han, H. Liu
Using an elliptical copula model (extending the nonparanormal), the eigenvectors of the covariance of the copula variables can be estimated from Kendall’s tau statistic which is invariant to the
nonlinearity of the elliptical distribution and the transformation of the marginals. This estimator achieves close to the parametric convergence rate while being a semi-parametric model.
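A small illustration (mine) of the invariance being exploited: Kendall's tau depends only on pair orderings, so any strictly increasing transform of either marginal leaves it unchanged:

```python
import math
import random

def kendall_tau(xs, ys):
    """Naive O(n^2) Kendall's tau: (concordant - discordant) / total pairs."""
    n, s = len(xs), 0
    for i in range(n):
        for j in range(i + 1, n):
            s += 1 if (xs[i] - xs[j]) * (ys[i] - ys[j]) > 0 else -1
    return 2.0 * s / (n * (n - 1))

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(200)]
y = [xi + random.gauss(0.0, 1.0) for xi in x]

t_raw = kendall_tau(x, y)
# Strictly increasing marginal transforms: exp on x, cubing on y.
t_warped = kendall_tau([math.exp(v) for v in x], [v ** 3 for v in y])
print(t_raw, t_warped)  # identical values: tau only sees the ordering
```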
Classification with Deep Invariant Scattering Networks (invited)
Stephane Mallat
How can we obtain stable informative invariant representation? To obtain an invariant representation with respect to a group (such as translation, rotation, scaling, and deformation), one can
directly apply a group-convolution to each sample. He proposed an interpretation of deep convolutional network as learning the invariant representation, and a more direct approach when the invariance
of interest is known, which is to use group invariant scattering (hierarchical wavelet decomposition). Scattering is contractive, preserves norm, and stable under deformation, hence generates a good
representation for the final discriminative layer. He hypothesized that the stable parts (which lacks theoretical invariance) can be learned in deep convolutional network through sparsity.
Spectral learning of linear dynamics from generalised-linear observations with application to neural population data
L. Buesing, J. Macke, M. Sahani
Ho-Kalman algorithm is applied to Poisson observation with canonical link function, then the parameters are estimated through moment matching. This is a simple and great initializer for EM which
tends to be slow and prone to local optima.
Spectral learning of general weighted automata via constrained matrix completion
B. Balle, M. Mohri
A parametric function from strings to reals known as rational power series, or equivalently weighted finite automata, is estimated with a spectral method. Since the Hankel matrix for prefix-suffix
values has a structure, a constrained optimization is applied for its completion from data. How to choose rows and columns of Hankel matrix remains a difficult problem.
Discriminative learning of Sum-Product Networks
R. Gens, P. Domingos
Sum-product network (SPN) is a nice abstraction of a hierarchical mixture model, and it provides simple and tractable inference rules. In an SPN, all marginals are computable in linear time. In this
case, discriminative learning algorithms for SPN inference are given. The hard inference variant takes the most probable state, and can overcome gradient dilution.
Perfect dimensionality recovery by Variational Bayesian PCA
S. Nakajima, R. Tomioka, M. Sugiyama, S. Babacan
Previous Bayesian PCA algorithm utilizes the empirical Bayes procedure for sparsification, however, this may not be an exact inference for recovering the dimensionality. They provide a condition for
which the recovered dimension is exact for a variational Bayesian inference using random matrix theory.
Fully bayesian inference for neural models with negative-binomial spiking
J. Pillow, J. Scott
Graphical models via generalized linear models
Eunho Yang, Genevera I. Allen, Pradeep Ravikumar, Zhandong Liu
Eunho introduced a family of graphical models with GLM marginals and Ising model style pairwise interaction. He said the Poisson-Markov-Random-Fields version must have negative coupling, otherwise
the log partition function blows up. He showed conditions for which the graph structure can be recovered with high probability in this family.
No voodoo here! learning discrete graphical models via inverse covariance estimation
Po-Ling Loh, Martin Wainwright
I think Po-Ling did the best oral presentation. For any graph with no loop, zeros in the inverse covariance matrix correspond to conditional independence. In general, theoretically by
triangulating the graph, conditional dependencies could be recovered, but the practical cost is high. In practice, graphical lasso is a pretty good way of recovering the graph structure, especially
for certain discrete distributions (e.g. Ising model).
Augment-and-Conquer Negative Binomial Processes
M. Zhou, L. Carin
Poisson process over gamma process measure is related to Dirichlet process (DP) and Chinese restaurant process (CRP). Negative binomial (NB) distribution has an alternative (i.e., not gamma-Poisson)
augmented representation as Poisson number of logarithmic random variables, which can be used to constructing Gamma-NB process. I do not fully understand the math, but it seems like this paper
contains gems.
Optimal Neural Tuning Curves for Arbitrary Stimulus Distributions: Discrimax, Infomax and Minimum Lp Loss
Zhuo Wang, Alan A. Stocker, Daniel D. Lee
Assuming different loss functions in the Lp family, optimal tuning curves of a rate limited Poisson neuron changes. Zhuo showed that as p goes to zero, the optimal tuning curve converges to that of
the maximum information. The derivations assume no input noise, and a single neuron. [edit: we did a lab meeting about this paper]
Bayesian nonparametric models for ranked data
F. Caron, Y. Teh
Assuming observed partially ranked objects (e.g., top 10 books) have positive real-valued hidden strength, and assuming a size-biased ranking, they derive a simple inference scheme by introducing an
auxiliary exponential variable.
Efficient and direct estimation of a neural subunit model for sensory coding
Brett Vintch, Andrew D. Zaharia, J. Anthony Movshon, Eero P. Simoncelli
We already discussed this nice paper in our journal club. They fit a special LNLN model that assumes a single (per channel) convolutional kernel shifted (and weighted) in space. Brett said the
convolutional STC initialization described in the paper works well even when the STC itself looks like noise.
Efficient Spike-Coding with Multiplicative Adaptation in a Spike Response Model
Sander M. Bohte
A multiplicative spike response model is proposed and fit with a fixed post-spike filter shape, LNP based receptive filed, and grid search over the parameter space (3D?). This model reproduces the
experimentally observed adaptation due to amplitude modulation and the variance modulation. The multiplicative dynamics must have a power-law decay that is close to 1/t, and it somehow restricts the
firing rate of the neuron (Fig 2b).
Dropout: A simple and effective way to improve neural networks (invited, replacement)
Geoffrey Hinton, George Dahl
Dropout is a technique to randomly omit units in an artificial neural network to reduce overfitting. Hinton says dropout method is an efficient way of model averaging exponentially many models. It
reduces overfitting because hidden units can’t depend on each other reliably. Related paper is on the arXiv.
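A minimal sketch (mine, not from the talk) of dropout on one layer of activations. This is the "inverted" variant that rescales the kept units by 1/p at training time; the version described in the talk instead halves the weights at test time:

```python
import random

random.seed(0)
p_keep = 0.5
hidden = [0.8, -0.3, 1.2, 0.5, -0.7, 0.9]  # activations of one hidden layer

def dropout(acts, p_keep):
    """Zero each unit independently with prob 1 - p_keep; rescale survivors."""
    return [a / p_keep if random.random() < p_keep else 0.0 for a in acts]

dropped = dropout(hidden, p_keep)
print(dropped)  # about half the units zeroed, the survivors doubled
```

Each forward pass during training samples a different thinned network; in expectation, the rescaled activations match the full network's.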
Compressive neural representation of sparse, high-dimensional probabilities
Xaq Pitkow
Naively representing probability distributions are inefficient since it takes exponentially growing resource. Using ideas from compressed sensing, Xaq shows that random perceptron units can be used
to represent a sparse high dimensional probability distribution efficiently. The question is what kind of operations on this representation biologically plausible and useful.
The topographic unsupervised learning of natural sounds in the auditory cortex
Hiroki Terashima, Masato Okada
The visual cortex is much more retinotopic than the auditory cortex is tonotopic. Unlike natural images, natural auditory stimuli have harmonics that give rise to correlations in the frequency domain. Could
both primary sensory cortices share the same principle for topographic learning rules but form different patterns because of differences in the input statistics? The authors' model is consistent with the
hypothesis, and moreover captures the nonlinear response to pitch perception problem.
This concludes my 3rd NIPS (NIPS 2011, NIPS 2010)!
Suppose you mix two Gaussian random variables $\mathcal{N}(-1, 1)$ and $\mathcal{N}(1, 1)$ equally, that is, if one samples from the mixture, with probability 1/2, it comes from the first Gaussian
and vice versa. It is evident that the mixture of Gaussians is not a Gaussian. (Do not confuse with adding two Gaussian random variables which produces another Gaussian random variable.)
Similarly, mixture of inhomogeneous Poisson processes results in a non-Poisson point process. The figure below illustrates the difference between a mixture of two Poisson processes (B) and a Poisson
process with the same marginal intensity (rate) function (A). The colored bars indicates the rate over the real line (e.g. time); in this case they are constant rate over a fixed interval. The 4
realizations from each process A and B are represented by rows of vertical ticks.
Several special cases of mixed Poisson processes have been studied [1]; however, they are mostly limited to modeling over-dispersed homogeneous processes. In theoretical neuroscience, it is necessary to
mix arbitrary (inhomogeneous) point processes. For example, to maximize the mutual information between the input spike trains and the output spike train of a neuron model, the entropy of a mixture of
point processes is needed.
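A quick simulation sketch (my addition) of the figure's point: counts from the mixture of two Poisson processes (B) are over-dispersed relative to a single Poisson process with the same marginal rate (A), even though the mean counts match:

```python
import math
import random

random.seed(0)

def poisson(lam):
    """Knuth's method for one Poisson draw."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

rate1, rate2, T, n = 2.0, 10.0, 1.0, 20000

# (B) mixture: pick a component each trial, then count events in [0, T].
mix = [poisson((rate1 if random.random() < 0.5 else rate2) * T) for _ in range(n)]
# (A) single Poisson process with the same marginal rate.
poi = [poisson(0.5 * (rate1 + rate2) * T) for _ in range(n)]

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

print(mean_var(poi))  # variance ~ mean: Fano factor near 1
print(mean_var(mix))  # variance well above mean: over-dispersed, so not Poisson
```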
In general, a regular point process on the real line can be completely described by the conditional intensity function $\lambda(t|\mathcal{H}_t)$ where $\mathcal{H}_t$ is the full spiking history up
to time $t$ [2]. Let us take the discrete limit to form regular point processes. Let $\rho_k$ be the probability of a spike (an event) at the $k$-th bin of size $\Delta$, that is,
$\rho_k \simeq \lambda(k \Delta|y_{1:k-1}) \Delta,$
where $y_{1:k-1}$ are the 0-1 responses in all the previous bins. The likelihood of observing $y_k = 0$ or $y_k = 1$, given the history is simply,
$P(y_k|y_{1:k-1}, \lambda) = {\rho_k}^{y_k} \left(1 - \rho_k\right)^{1 - y_k}.$
In the limit of small $\Delta$, this approximation converges to a regular point process. A fun fact is that a mixture of Bernoulli random variables is Bernoulli again, since it’s the only
distribution for 0-1-valued random variables. Specifically, for a family of Bernoulli random variables with probability of 1 being $\rho_z$ indexed by $z$, and a mixing distribution $P(z)$, the
probability of observing one symbol $y=0$ or $y=1$ is
$P(y) = \int P(y|z)P(z) \mathrm{d}z = \int {\rho_z}^{y} \left(1 - \rho_z\right)^{1 - y} P(z) \mathrm{d}z = {\bar\rho}^{y} \left(1 - \bar\rho\right)^{1 - y}$
where $\bar\rho = \int \rho_z P(z) \mathrm{d}z$ is the average probability.
Suppose we mix $\lambda(t|\mathcal{H}_t, z)$ with $P(z)$, and write $\rho_k(z)$ for the corresponding binned spike probability. Then, similarly, for the binned point process representation, the above implies that,

$P(y_k|y_{1:k-1}) = \int P(y_k|y_{1:k-1}, z) P(z) \mathrm{d}z = {\bar\rho}_k^{y_k} \left(1 - \bar\rho_k \right)^{1 - y_k}$

where $\bar\rho_k = \int \rho_k(z) P(z) \mathrm{d}z$ is the marginal rate. Moreover, due to causal dependence between $y_k$'s, we can chain the expansion and get the marginal probability of observing $y_{1:k}$,

$P(y_{1:k}) = P(y_k|y_{1:k-1}) P(y_{1:k-1}) = P(y_k|y_{1:k-1}) P(y_{k-1}|y_{1:k-2}) \cdots P(y_1)$
$= \prod_{i=1}^k {\bar\rho}_i^{y_i} \left(1-\bar\rho_i\right)^{1-y_i}.$
Therefore, in the limit the mixture point process is represented by the conditional intensity function,
$\lambda(t|\mathcal{H}_t) = \int \lambda(t|\mathcal{H}_t, z) P(z) \mathrm{d}z$.
Conclusion: The conditional intensity function of a mixture of point processes is given by the expected conditional intensity function over the mixing distribution.
1. Grandell. Mixed Poisson processes. Chapman & Hall / CRC Press 1997
2. Daley, Vere-Jones. An Introduction to the Theory of Point Processes. Springer.
3. Taro Toyoizumi, Jean-Pascal Pfister, Kazuyuki Aihara, Wulfram Gerstner. Generalized Bienenstock–Cooper–Munro rule for spiking neurons that maximizes information transmission. PNAS, 2005.
This was my first time at CNS (computational neuroscience conference, not to be confused with the cognitive neuroscience one with the same acronym). I was invited to give a talk at the “Examining the
dynamic nature of neural representations with the olfactory system" workshop, organized by Chris Buckley, Thomas Nowotny, and Taro Toyoizumi. I presented my story of how bursting olfactory receptor neurons can form an instantaneous memory of the temporal structure of odor plume encounters, plus a bit of a related calcium imaging study. Below is my summary of the workshop talks I attended (the system identification and information theory workshops on the first day, and the olfactory workshop on the second day).
Garrett Stanley talked about system identification of the rat barrel cortex response from whisker deflection. He started by criticizing the white-noise Volterra series approach; it requires too much
data. Instead, by designing a sequence of parametric stimuli that will directly show 2nd order and 3rd order interactions, he could fit a parametric form of firing rate response with good predictive
powers [1]. As far as I can tell, it seemed like a rank-1 approximation of the 3rd order Volterra kernel. However, this model lacked the fine temporal latency, as well as stimulus-intensity-dependent bimodal responses, which were later fixed by a better model with feedback [2].
Vladimir Brezina talked about modeling of feedback from muscle contractions onto a rhythmic central pattern generator in the crab heart. He used LNL and LN models to fit the response of 9 neurons and
muscles in the crab heart. For the LNL system, he used a bilinear optimization of the squared error. However, for the spiking response of the LN model, instead of using the Bernoulli or Poisson
likelihood (the GLM model), he used least squares to fit the parameters.
Matthieu Louis gave a talk about optogenetically controlling drosophila larva’s olfactory sensory neurons. They built an impressive closed loop system that can control the larva’s behavior as if it
were in an odor gradient. They modeled the system as a black box with odor input and behavior as output, skipping the model of the nervous system, and successfully predicted the behavior and control
it [3].
Daniel Coca talked about how fly photoreceptors can act as a nonlinear temporal filter that is optimized for detecting edges. He fit a NARMAX (nonlinear ARMA-X) model and analyzed it in the frequency
domain and found that the phase response is consistent with phase congruency detection model for edge detection. Also, he explained how the system “linearizes” when stimulated with white Gaussian
noise, although I couldn’t follow the details due to my lack of knowledge in nonlinear frequency domain analysis.
Tatyana Sharpee talked about sphere packing in the context of receptive fields of retina, and conditional population firing rates of song birds. For the receptive fields, she showed that to maximize
the mutual information per unit lattice between a point source of light and the (binary) neural response of ganglion cells, if the lattice is not-perfect, elliptical shapes of receptive fields can
help. For the song bird case, she showed that the noise correlation can change with training to improve separation (classification performance) of the conditional distributions while the irrelevant
stimuli became less separable.
Rava Azeredo da Silveira talked about how finely tuned correlation structure can immensely increase performance. Given two populations of neurons, each tuned to a class weakly (slightly higher firing
rate for the preferred class), if cross-population correlation is slightly higher than otherwise, the population response as a whole can be very certain about the class identity. He also talked about
many other related things such as asymptotics on required population size vs noise.
Shy Shoham talked about Linear-Nonlinear-Poisson (LNP) and Linear-Nonlinear-Hawkes (LNH) models, and how to relate spike train (output) correlations to gaussian (input) correlation [4,5]. LNH has a
similar form to GLM but the feedback is added outside the nonlinearity. He referred to the procedure of inferring the underlying latent AR process as correlation-distortion, and proposed to use it
for studying neural point processes as AR models, and hence apply Granger causality and other signal processing tools. He also talked about semi-blind system identification, where the goal is to infer the linear kernel of the model given the autocorrelation of the input and the autocorrelation of the population spike trains (the phase ambiguity of the filter is resolved by choosing the minimum phase filter).
Maxim Bazhenov talked about modeling the transient synchronization in the locust olfactory system as a network phenomenon (interaction between projection neurons (PNs) and local inter-neurons (LNs)).
The pattern of synchronization of PNs over multiple LFP cycles is repeatable, and his model reproduces it. He showed an interesting illustration of the connectivity between LNs posed as the graph
coloring problem [6]. Each cluster of LNs targets everybody outside their cluster, enabling synchrony within. The connectivity matrix is effectively a block diagonal of zeros, and the off-diagonals
are ones, because they are inhibitory neurons.
Nitin Gupta gave a talk on lateral horn (LH) cells. The normative model has been that the inhibitory neurons in LH act as feed-forward inhibition to limit the integration time within the Kenyon
cells (KCs). He identified a heterogeneous population of neurons in LH (see [7] for beautifully filled neurons). Among the ones that project to mushroom body (where KCs are), he found no evidence of
GABA co-location, suggesting that there is no feed-forward inhibition through LH. He proposed an alternative model for limiting integration time in KCs, namely the feedback inhibition through
(non-spiking) GGNs.
Thomas Nowotny talked about how odor plume structure can help in separating mixtures of different sources, based on the results of [8]. He proposed a simple model of a lateral inhibition circuit
among the glomeruli. The model showed counter-intuitive results for temporal mixtures of odor when linear decoding is used.
Kevin C. Daly gave a data-packed talk on the Manduca sexta (moth) olfactory system [9]. The oscillation he observed had a frequency modulation: it starts at a high frequency and quickly falls, and it is odor dependent. He criticized the use of continuous odor application, which may result in pathological responses (my wording), and instead showed responses to odor puffs. (Interestingly, the blank
puffs decreased the response.) He also emphasized the importance of not cutting the head of the animal, which preserves a pair of histamine neurons.
Aurel A. Lazar talked about a precise odor delivery system using laminar flows that can produce diverse temporal patterns of odor concentration with around 1% error. Using this system, they showed that the firing responses of the first two stages of the Drosophila olfactory system (receptor neurons and projection neurons) are both temporally differentiating. These were not simultaneously recorded, but thanks to the repeatable stimuli and responses, the conclusion is well supported.
1. R. M. Webber and G. B. Stanley. Transient and steady-state dynamics of cortical adaptation, J. Neurophys., 95:2923-2932, 2006.
2. A. S. Boloori, R. A. Jenks, Gaelle Desbordes, and G. B. Stanley. Encoding and decoding cortical representations of tactile features in the vibrissa system, J. Neurosci., 30(30):9990-10005, 2010.
3. Gomez-Marin A, Stephens GJ, Louis M. Active sampling and decision making in Drosophila chemotaxis. Nature Communications 2:441. doi: 10.1038/ncomms1455 (2011).
4. Michael Krumin, Shy Shoham. Generation of Spike Trains with Controlled Auto- and Cross-Correlation Functions. Neural Computation. June 2009, Vol. 21, No. 6, Pages 1642-1664
5. Michael Krumin, Inna Reutsky, Shy Shoham. Correlation-Based Analysis and Generation of Multiple Spike Trains Using Hawkes Models with an Exogenous Input. Front Comput Neurosci. 2010; 4: 147
6. Assisi C, Stopfer M, Bazhenov M. Using the structure of inhibitory networks to unravel mechanisms of spatiotemporal patterning. Neuron. 2011 Jan 27;69(2):373-86.
7. Nitin Gupta, Mark Stopfer. Functional Analysis of a Higher Olfactory Center, the Lateral Horn. Journal of Neuroscience, 13 June 2012, 32(24): 8138-8148; doi: 10.1523/JNEUROSCI.1066-12.2012
8. Paul Szyszka, Jacob S. Stierle, Stephanie Biergans, C. Giovanni Galizia. The Speed of Smell: Odor-Object Segregation within Milliseconds. PLoS ONE, Vol. 7, No. 4. (27 April 2012), e36096,
9. Daly KC, Galán RF, Peters OJ and Staudacher EM (2011) Detailed characterization of local field potential oscillations and their relationship to spike timing in the antennal lobe of the moth
Manduca sexta. Front. Neuroeng. 4:12. doi: 10.3389/fneng.2011.00012
Last Sunday (April 29th) was the Black board day (BBD), which is a small informal workshop I organize every year. It started 7 years ago on Kurt Gödel's 100th birthday. We discuss logic, computation, math, and beyond. This year happens to be Alan Turing's 100th birth year, so we had a theme that combines Turing machines and logic. It was a huge success thanks to special guest speakers.
Il Memming Park: On halting problem route to incompleteness
I was trying to give an overview of how certain problems in mathematics that deal with natural numbers are very difficult, and why a mechanized theorem prover was a dream of Hilbert's. Then I introduced Cantor's devilish diagonal argument in the context of binary strings and languages. Basically, there are more languages (defined as sets of finite binary strings) than there are natural numbers. I introduced Turing machines and their 3 possible outcomes (accept, reject, and infinite loop) as well as the concept of universal Turing machines. Then, I constructed the halting problem and showed that the diagonal argument prevents us from having a Turing machine that can tell whether another Turing machine will stop or not in finite time. Unfortunately, I didn't have enough time to elaborate on how the halting problem has a similar structure to the proof of the incompleteness theorem, and how they could be connected.
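For the curious, the heart of the diagonal construction can be sketched in a few lines of Python. This is purely illustrative (there is, of course, no real halting oracle): given any claimed halting decider, we build a program whose actual behavior is, by construction, the opposite of the decider's prediction, so no decider can be correct on all inputs.

```python
def diagonal_defeats(claimed_halts):
    """Given any claimed halting decider, construct a program it misjudges."""
    def paradox():
        if claimed_halts(paradox):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so fall through and halt

    # We never actually run paradox; we only compare the decider's
    # prediction with what paradox would do, which is the opposite
    # of that prediction by construction.
    prediction = claimed_halts(paradox)
    actual = not prediction
    return prediction, actual

# Any fixed guess is defeated:
print(diagonal_defeats(lambda prog: True))   # (True, False)
print(diagonal_defeats(lambda prog: False))  # (False, True)
```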
Kenneth Latimer: On Roger Penrose’s Emperor’s new mind
The controversial book Emperor's New Mind (1989) is famous for extending Lucas' idea that since Turing machines can't know that the Gödel statement is true, while humans do, human computability must be greater. He further linked that idea to physics and the brain. During our discussion, we agreed that the Gödel statement is true, but its truth can only be judged outside of the system, and humans certainly are not using the same set of axioms as the system on which the Gödel statement is constructed. Also, the fact that we do not understand certain physics doesn't imply that it is not computable. It was interesting that two people (Memming and Jonathan) were initially drawn into neuroscience because of this book.
Michael Buice: Algebra of Probable Inference
Michael started by talking about adding oracles to Turing machines, and the hierarchy of such oracle-equipped Turing machines, as well as the Kleene hierarchy of logical statements, but quickly jumped into a new topic. Instead of considering only True or False statements, if we allow things in between, with reasonable assumptions we can derive the axioms of probability theory. Heuristically speaking, Gödel's incompleteness theorem would imply that there are statements for which, even with infinite observations, the posterior probability does not converge to 0 or 1 and always stays in between. The derivation is given in Richard Cox's papers, and the theory was expanded by Jaynes.
Ryan Usher: An Incomplete, Inconsistent, Undecidable and Unsatisfiable Look at the Colloquial Identity and Aesthetic Possibilities of Math or Logic
Ryan started by stating how he finds beauty in mathematical proofs, especially in Henkin's completeness theorem. But he was unsatisfied with how often beautiful results such as Gödel's incompleteness theorem are abused in completely irrelevant contexts such as economics and the social sciences. He had numerous quotes and examples showing the current state of sad abuse. He claimed that this is partly because terms like "consistency" and "completeness" have very rigorous meanings in the mathematical context, but people often associate them with their commonsensical meanings.
Jonathan Pillow: Do we live inside a Turing machine?
Jonathan summarized the argument by Bostrom (2003) that it is very probable that we are living inside a simulation. Under the assumptions that
1. A simulated human brain brings consciousness ("substrate independence")
2. Large scale simulation of human brains plus the physical world around them is possible
then, assuming a high probability of technological advancement for such simulation, and some grad student in the future wishing to run an "ancestor simulation", a simple counting argument over all humans, simulated and not, shows that we are probably living in a simulation. (The photo below is Jonathan's writing. It was a white board, but in the spirit of black board day, I inverted the photo.)
• Alan Turing. (1936) On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42: 230-265
• Michael Sipser. Introduction to the Theory of Computation (Memming's halting problem proof followed this one)
• Roger Penrose. Emperor’s new mind
• Torkel Franzén. Godel’s Theorem: An Incomplete Guide to Its Use and Abuse (recommended by Ryan)
• Richard T. Cox. Algebra of Probable Inference
• Cox, R. (1946). Probability, frequency and reasonable expectation. American Journal of Physics, 14(1), 1–13.
• E.T. Jaynes. Probability Theory: The Logic of Science
• Martin Davis. The Undecidable (Collection of papers) The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions (Dover Books on Mathematics)
• Martin Davis, Computability and Unsolvability (Michael Buice: one of the most beautiful books written by humankind; an introduction to recursive function theory, computability, and Turing machines. One of the few books which does so in a complete and rigorous manner; it also covers logic and Gödel's theorem.)
• Bostrom, N. (2003). Are You Living in a Computer Simulation? Philosophical Quarterly, Vol. 53, No. 211, pp. 243-255.
Primary olfactory receptor neurons (ORNs) bind odor molecules in the medium and send action potentials to the brain. This signaling is not simply ON and OFF; each ORN has delicate sensitivity to various odors and shows diverse temporal activation patterns. Using both electrophysiology and calcium-sensitive dye imaging, my collaborators Yuriy V. Bobkov and Kirill Y. Ukhanov studied the temporal aspect of lobster ORNs. The heterogeneous response patterns are well presented in a recent paper published in PLoS One. I was particularly interested in a special type of ORN called bursting ORNs. Bursting ORNs are spontaneously oscillating, and the calcium imaging data allow population analysis. I was involved in the analysis to see if there is any sign of synchrony, using a resampling-based burst-triggered averaging technique. It turns out that they rarely interact, if at all. Moreover, they have a wide range of oscillation periods. Since they are coupled through the environment (a filament of odor molecules in the medium), in natural environments or under controlled odor stimulation they sometimes synchronize, which is the subject of another paper under review.
Note: the publication actually has my first name as Ill instead of Il which is silly and sick. I asked for a correction, but it seems PLoS One will only publish a note for the correction and not
correct the actual article (because of the inconsistency it will cause for other indexing systems [1][2]). This could have been fixed in the proof, if PLoS did proofs before final publications, but
they don’t (presumably to lower costs). In my opinion, this is a flaw of PLoS journals. EDIT: there’s a note saying that my name is misspelled now.
Accepting the null hypothesis?
April 29th 2012, 10:02 AM #1
Oct 2009
Accepting the null hypothesis?
Is it possible to even accept the null hypothesis? In my introductory stats course I was taught that the null is never accepted, but rather that we fail to reject it (this was done for tests based only on p-values). However, in a Mathematical Statistics course I took (we used Introduction to Mathematical Statistics by Hogg, McKean, Craig) the text often talked about accepting the null. In online resources, I see differing opinions. I've read that the null is never accepted, and I've also read that it can be accepted if you "accrue" enough evidence in support of it, either through confidence intervals or power functions - source: http://w3.sista.arizona.edu/~cohen/P...ohenIEEE96.pdf
Re: Accepting the null hypothesis?
"It is important to understand that the null hypothesis can never be proven. A set of data can only reject a null hypothesis or fail to reject it. For example, if comparison of two groups (e.g.:
treatment, no treatment) reveals no statistically significant difference between the two, it does not mean that there is no difference in reality. It only means that there is not enough evidence
to reject the null hypothesis (in other words, the experiment fails to reject the null hypothesis)."
Taken from Null hypothesis - Wikipedia, the free encyclopedia
Re: Accepting the null hypothesis?
Yes, I am aware of the Wikipedia definition of the null hypothesis. I agree that in tests based solely on the p-value, one can never accept the null hypothesis. However, I've read several papers/books which discuss conditions under which the null hypothesis can indeed be accepted. One of those sources was linked in the OP; others include 'Intro to Mathematical Statistics' by Hogg, McKean, Craig, and 'Statistical Power Analysis for the Behavioral Sciences' by Cohen.
Re: Accepting the null hypothesis?
I've always assumed that you can deduce that the null hypothesis is true if you can identify every possible alternative and reject it; but that is based on no official source whatsoever.
E.g., suppose X is normal with unknown mean, but the mean is definitely 0 or 100. If there is enough evidence to reject 100, I would "accept" 0. But in practice this hardly ever happens...
Last edited by SpringFan25; April 29th 2012 at 03:04 PM.
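SpringFan25's two-point scenario is easy to play with numerically. Below is a minimal sketch (made-up data, an assumed known unit variance, and a two-sided z-test at the 5% level) in which rejecting mean = 100 leaves mean = 0 as the only remaining candidate:

```python
import math

# Hypothetical sample, assumed drawn from N(mu, 1) with mu either 0 or 100
data = [0.2, -0.5, 1.1, -0.8, 0.3, -0.1, 0.6, -0.9, 0.4, -0.2]
n = len(data)
mean = sum(data) / n  # 0.01

def reject(mu0, sigma=1.0, z_crit=1.96):
    """Two-sided z-test of H0: mean == mu0, assuming known sigma."""
    z = (mean - mu0) / (sigma / math.sqrt(n))
    return abs(z) > z_crit

print(reject(100))  # True: overwhelming evidence against mean = 100
print(reject(0))    # False: we fail to reject mean = 0
```

Since the mean was stipulated to be one of only two values, rejecting 100 leaves 0; without that stipulation, failing to reject 0 would not license accepting it.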
Re: Accepting the null hypothesis?
I think it's just the terminology used. When you choose a null hypothesis you also have the opposite hypothesis - the alternative or research hypothesis. When you reject the null hypothesis this implies that you accept the alternative. Or when you fail to reject the null hypothesis then one can speak of accepting the null hypothesis - or rejecting the research hypothesis.
I'll say it again - I think it's just the terminology used. Fail to reject = accept?! Still, I might be wrong, it's just that I don't see how.
Re: Accepting the null hypothesis?
The only way you can accept the null hypothesis is if you perform a census of the population about which the null hypothesis is concerned. E.g. if the null hypothesis claims that "All swans are
white" you would have to investigate all past, present and future swans and note their colour; of course, in this case it is impossible to accept the null hypothesis.
MathGroup Archive: October 2004 [00569]
Re: Plot of Elliptic Curve with Grid
• To: mathgroup at smc.vnet.net
• Subject: [mg51537] Re: [mg51512] Plot of Elliptic Curve with Grid
• From: "David Park" <djmp at earthlink.net>
• Date: Thu, 21 Oct 2004 22:21:02 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
I made your plot using DrawGraphics as follows.
I don't think your integer points corresponded to the curve you specified.
So I tried to find some and came up with the following list.
integerpoints = {{0, 1}, {0, -1}, {1, 2}, {1, -2}, {8, 23}, {8, -23}};
xpoints = Union@(First /@ integerpoints);
ypoints = Union@(Last /@ integerpoints);
I then made the plot with the following statement. The grid lines and tick
marks and labels match the values for the integer points.
Draw2D[
{ImplicitDraw[y^2 == x^3 + 2x + 1, {x, -1, 9}],
CirclePoint[#, 3, Black, Yellow] & /@ integerpoints},
AspectRatio -> 1.5,
Frame -> True,
FrameTicks ->
{CustomTicks[Identity, databased[xpoints]],
CustomTicks[Identity, databased[ypoints]],
CustomTicks[Identity, databased[xpoints], CTNumberFunction -> ("" &)],
CustomTicks[Identity, databased[ypoints],
CTNumberFunction -> ("" &)]},
GridLines ->
{CustomGridLines[Identity, databased[xpoints]],
CustomGridLines[Identity, databased[ypoints]]},
PlotLabel -> SequenceForm["Elliptic Curve ", y^2 == x^3 + 2x + 1],
Background -> Linen,
ImageSize -> 450];
I was going to send you privately the notebook and a gif image of the plot,
but taking a quick look at your email address I have no idea how it is
supposed to be decrypted. If you want to have a usable email account without
being bothered by spam or virus email subscribe to SpamArrest or some
similar service. It works.
David Park
djmp at earthlink.net
From: flip [mailto:flip_alpha at safebunch.com]
To: mathgroup at smc.vnet.net
I would like to plot an elliptic curve over Fp of the form:
y^2 = x^3 + ax + b (1)
I would then like to plot the list of points that satisfy (1). (Note: I
have a way to generate that list.)
I would like the continuous plot (like using implicit plot over reals) of
(1) with a grid having points of intersection over Fp (the integer points)
shown on the plot (over a grid).
y^2 = x^3 + 2x + 1 over F5
This curve has 7 points (counting the point at infinity).
The list of points is: S = {{0,1},{0,4},{1,2},{1,3},{3,2},{3,3}}
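As an aside, the list S is easy to generate or double-check by brute force; here is a small Python sketch (illustrative only, since the question itself is about Mathematica plotting):

```python
def curve_points(a, b, p):
    """Affine points of y^2 = x^3 + a*x + b over F_p, by brute force."""
    return [(x, y) for x in range(p) for y in range(p)
            if (y * y - (x**3 + a * x + b)) % p == 0]

S = curve_points(2, 1, 5)
print(S)            # [(0, 1), (0, 4), (1, 2), (1, 3), (3, 2), (3, 3)]
print(len(S) + 1)   # 7 points, counting the point at infinity
```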
I would like to show a grid plot with the elliptic curve (continuous over
reals) superimposed over the discrete points given above (with points of
intersection (a dot of some sort shown for each point above)).
I would like to be able to pass in the "a, b, S" and have this automatically
generate the plot.
Is this easy?
Thanks for any input, Flip
****email**** flip %%%% @ %%%%%
Sorry for the crypto in my email, but spam is a killer
Mathematical English Usage - a Dictionary
by Jerzy Trzeciak
We must now bring dependence on d into the arguments of [9].
The proof will be divided into a sequence of lemmas.
We can factor g into a product of irreducible elements.
Other types fit into this pattern as well.
This norm makes X into a Banach space.
We regard (1) as a mapping of S^2 into S^2, with the obvious conventions concerning the point ∞.
We can partition [0,1] into n intervals by taking......
The map F can be put <brought> into this form by setting......
The problem one runs into, however, is that f need not be......
But if we argue as in (5), we run into the integral......, which is meaningless as it stands.
Thus N separates M into two disjoint parts.
Now (1) splits into the pair of equations......
Substitute this value of z into <in> (7) to obtain......
Replacement of z by 1/z transforms (4) into (5).
This can be translated into the language of differential forms.
Implementation is the task of turning an algorithm into a computer program.
Math Forum Discussions - Re: most famous codiscoverer gets credit (Matthew Effect) [was: This Week's Finds in Mathematical Physics (Week 112)]
Date: Nov 29, 1997 12:19 AM
Author: Jim Balter
Subject: Re: most famous codiscoverer gets credit (Matthew Effect) [was: This Week's Finds in Mathematical Physics (Week 112)]
Gerry Myerson wrote:
> In article <347eb06a.81621900@news.zippo.com>, quentin@inhb.co.nz wrote:
> > There are occasions when the anti-Matthews effect occurs.
> Pell's equation is an example of this, named after Pell because there was
> already too much stuff named after Fermat...
And then there's the Berry Paradox, discovered by Russell but named
after his librarian because "Russell's Paradox" was already taken.
The smallest number that cannot be uniquely described in fewer than a
million words has just been uniquely described in far less than a
million words; Gregory Chaitin's Algorithmic Complexity Theory is
based upon using the Berry Paradox to establish information-theoretic
incompleteness much as Gödel used the Liar Paradox to establish
arithmetic incompleteness.
<J Q B>
Accessible Formats
Math Review Accessible Formats
Downloadable Large Print (18 point) Figure Supplements
Downloadable Large Print (18 point)
Downloadable Accessible Electronic Documents
Transforming Secant and Cosecant - Problem 1
Let's graph the transformation of the secant function y equals secant of theta minus pi over 2; this is a pretty easy one. Let's start with the key points of secant. The key points are negative pi over 2 and pi over 2; these are places where secant is undefined, because cosine is 0 there. Then right in between, secant of 0 is 1; at pi over 3, secant is 2; and at negative pi over 3, since secant is an even function, it is also 2.
So what happens when we transform to secant of theta minus pi over 2? Let's make the substitution u equals theta minus pi over 2, which means theta equals u plus pi over 2. So to get the theta values I add pi over 2 to all these u values: negative pi over 2 becomes 0, negative pi over 3 becomes pi over 6, 0 becomes pi over 2, pi over 3 becomes 5 pi over 6, and pi over 2 becomes pi. Now what values do I put at these points? These are just secant of u, the same as before, so I copy those values down: undefined at 0 and pi, 2 at pi over 6 and 5 pi over 6, and 1 at pi over 2.
I'm ready to graph. Let me start by plotting the vertical asymptotes x equals 0 and x equals pi; remember, these bound a half period of the secant graph. We start with the point (pi over 2, 1), right here, then (pi over 6, 2) and (5 pi over 6, 2). Pi over 6 is a third of the way from 0 to pi over 2, and 5 pi over 6 is two thirds of the way from pi over 2 to pi. Graphing these, I get that familiar U shape of the secant function. Recall that to get the second half period, you take this half period, flip it across the x axis, and shift it to the right by half a period. In this case the period is 2 pi, so half a period is pi. So I shift to the right by pi: the point (pi over 2, 1), flipped across and shifted, becomes (3 pi over 2, -1), and the point (pi over 6, 2), flipped and shifted, lands a third of the way between pi and 3 pi over 2, at height -2.
Likewise, this point flipped across and shifted pi to the right gives me a point here, so I can plot that. And if you take the middle asymptote and shift it to the right by pi, you get another asymptote at x equals 2 pi. So you should remember that the secant function, or the cosecant function, has 3 asymptotes for every period: one on the left, one on the right, and one in the middle. This graph is also going to have an asymptote at x equals negative pi.
Well, we only have room for another half period, so let me graph that. Remember, once you have a full period, all you have to do to get more is shift to the right or left by a full period, which here is 2 pi. So to get points over here, I have to shift these points 2 pi to the left. This point, for example, at (3 pi over 2, -1), goes to (negative pi over 2, -1). This point goes two thirds of the way toward negative pi over 2, right here, and this point goes over here. So now I have one and a half periods of y equals secant of theta minus pi over 2.
And you might recognize this function: it is the same as y equals cosecant theta. It's really important to know that the secant function and the cosecant function have exactly the same shape, and to get cosecant, all you have to do is take the secant graph and shift it to the right by pi over 2. And that's what this is: secant shifted to the right by pi over 2.
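As a quick numerical check of this last identity: cos(theta - pi/2) = sin(theta), so secant of (theta - pi/2) agrees with cosecant of theta wherever both are defined. A short Python sketch (sample angles chosen away from the asymptotes):

```python
import math

for theta in [0.3, 1.0, 2.2, 4.0, 5.5]:
    sec_shifted = 1 / math.cos(theta - math.pi / 2)  # sec(theta - pi/2)
    csc = 1 / math.sin(theta)                        # csc(theta)
    assert abs(sec_shifted - csc) < 1e-9

# Key point from the graph: at theta = pi/2 the shifted secant equals 1
print(1 / math.cos(math.pi / 2 - math.pi / 2))  # 1.0
```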
How to Find the Domain of fog and gof
Composition of functions is a form of merging functions in which input to one function is another function. Let us consider two functions f and g. Let us see how to find the domain of fog and gof.
For this, let us consider an example as shown below:
Suppose we have two functions f(a) = $\frac{1}{a + 2}$ and g(a) = $\frac{a}{a - 3}$.
The domain of g(a) cannot include a = 3, because at this value of "a" the function is not defined. Similarly, the domain of f(a) can be any real number except a = -2.
Let us first consider the function fog. According to fog, we get f(g(a)) =
$\frac{1}{g(a) + 2}$
Domain of this function will contain the values from domain of the function g(a) which means, solutions generated by these values from function g(a) must be chosen by function f(a).
We know that the function g(a) cannot possess the value a = 3 and same is true in case of fog also.
Also, solutions that arise from g(a) are obtained in the form $\frac{a}{a - 3}$.
Next, we find the value of “a” for which g(a) = -2, since f(a) is not defined at -2.
So, by solving $\frac{a}{a - 3} = -2$, we get a = 2.
So, from the domain of fog we also remove a = 2, and the final domain of the function fog is “a” belonging to the real numbers with a $\neq$ 2, 3.
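The fog result can be double-checked mechanically (a Python sketch added for illustration; both excluded points surface as division by zero):

```python
# Verify the fog domain found above: fog is undefined exactly at a = 3 and a = 2.
def f(a): return 1 / (a + 2)
def g(a): return a / (a - 3)
def fog(a): return f(g(a))

for a in (3, 2):
    try:
        fog(a)
    except ZeroDivisionError:
        print("fog undefined at a =", a)   # a = 3 breaks g; a = 2 makes g(a) = -2

print(fog(0))   # g(0) = 0, f(0) = 1/2 -> 0.5
```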
Similarly, the domain of gof can be found by checking the same kinds of points: g(f(a)) is undefined when a = -2 (where f itself is undefined) and when f(a) = 3, i.e. when $\frac{1}{a + 2} = 3$, which gives a = $-\frac{5}{3}$.
Domain of gof comes out to be all real numbers with -2 and $-\frac{5}{3}$ excluded. | {"url":"http://math.tutorcircle.com/precalculus/how-to-find-the-domain-of-fog-and-gof.html","timestamp":"2014-04-19T11:57:02Z","content_type":null,"content_length":"17435","record_id":"<urn:uuid:3d80b8f1-3d39-494f-9f28-23774de81e7f>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00317-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nicholas Sze of Yahoo Finds Two-Quadrillionth Digit of Pi
Posted by
from the I'm-a-good-guesser-in-binary-too dept.
gregg writes
"A researcher has calculated the 2,000,000,000,000,000th digit of pi — and a few digits either side of it. Nicholas Sze, of technology firm Yahoo, determined that the digit — when expressed in binary
— is 0."
• by Nemesisghost (1720424) on Thursday September 16, 2010 @07:01PM (#33605586)
The interesting thing about this article is how they calculated the digits. They broke the problem up into small pieces and had them calculated in parallel. This approach isn't something that's
new or all that unique, but what it is applied to is. Most mathematical calculations are done in a near linear fashion, not in parallel. So for them to be able to do this is a big step forward in
how we approach these types of problems in the future.
Of course I'm very interested in this since it seems I'll be doing something like it in the near future as part of getting my master's degree.
• A serious question (Score:4, Interesting)
by $RANDOMLUSER (804576) on Thursday September 16, 2010 @07:05PM (#33605648)
I've always wondered about these ridiculously precise values of pi - doesn't that imply a measurement (of circumference or diameter) smaller than the Planck length? What's the point of 2 quadrillion
decimals of precision?
• Bailey–Borwein–Plouffe formula (Score:3, Interesting)
by Utopia (149375) on Thursday September 16, 2010 @07:05PM (#33605650)
Bailey–Borwein–Plouffe formula [wikimedia.org] lets you calculate the n-th digit of pi without calculating the n-1 digits.
I wonder what formula was used to calculate the digit here.
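To make the digit-extraction idea concrete, here is a small Python sketch of BBP-style extraction (my own illustration, not the code used for the record computation): the fractional part of 16^n · π is assembled with modular exponentiation, so the first n−1 digits are never produced.

```python
def pi_hex_digit(n):
    """Hex digit of pi at (0-indexed) fractional position n, via the
    Bailey-Borwein-Plouffe formula; no earlier digits are computed."""
    def frac_series(j):
        # fractional part of sum_{k>=0} 16^(n-k) / (8k + j)
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = n + 1, 1.0
        while term > 1e-17:                 # rapidly convergent tail, k > n
            term = 16.0 ** (n - k) / (8 * k + j)
            s += term
            k += 1
        return s % 1.0

    x = (4 * frac_series(1) - 2 * frac_series(4)
         - frac_series(5) - frac_series(6)) % 1.0
    return int(16 * x)

# pi = 3.243F6A88... in hex, so the first fractional digits are 2, 4, 3, F
print([pi_hex_digit(i) for i in range(4)])   # [2, 4, 3, 15]
```

The digits come out in hexadecimal, i.e. groups of four binary digits — which is why record computations of this kind report a binary/hex digit rather than a decimal one.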
• Re:A serious question (Score:3, Interesting)
by Black Gold Alchemist (1747136) on Thursday September 16, 2010 @07:18PM (#33605752)
Well, the radius of the visible universe is roughly 7.6 * 10^61 Planck lengths [google.com]. That means the volume is on the order of 10^183 cubic Planck lengths. So, if you can calculate PI to
200 digits or so, you're really accurate. At some point, more accurate than spacetime itself.
• Re:A serious question (Score:3, Interesting)
by Surt (22457) on Thursday September 16, 2010 @07:36PM (#33605908) Homepage Journal
So obviously, 640 digits of pi should be enough for anybody.
And here they are:
http://www.eveandersson.com/pi/digits/pi-digits?n_decimals_to_display=640&breakpoint=100 [eveandersson.com]
| {"url":"http://science.slashdot.org/story/10/09/16/2155227/Nicholas-Sze-of-Yahoo-Finds-Two-Quadrillionth-Digit-of-Pi/interesting-comments","timestamp":"2014-04-21T10:28:28Z","content_type":null,"content_length":"82560","record_id":"<urn:uuid:d404ea02-494c-44e8-83e3-b239451bd636>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
Low-energy effective field theory investigation of lightly doped antiferromagnets
Florian Kämpfer MIT
Abstract: In this seminar I will try to give you some flavor of what I have been doing during my PhD work. I will concentrate on the analytic part where, based on Hubbard-type models, we have
constructed a low-energy effective field theory for lightly doped antiferromagnets. This effective theory is a condensed matter analog of Baryon Chiral Perturbation theory for QCD which is extremely
successful in describing the low-energy physics of QCD. In the first part of the talk I will explain the symmetry based construction of the theory for electron or hole doped antiferromagnets. In the
second part I will apply the effective theory to the problem of two isolated holes (or electrons) in an otherwise undoped system and we will examine magnon-mediated binding between two fermions. At
the end I will discuss the case of a finite but small density of fermions doped in the antiferromagnet. Our analysis is restricted to homogeneous fermion densities and we will examine possible ground
state configurations of the staggered magnetization vector. | {"url":"http://web.mit.edu/physics/cmt/informalseminar_abstracts/florian.html","timestamp":"2014-04-21T04:36:04Z","content_type":null,"content_length":"1932","record_id":"<urn:uuid:8e24be50-3bbd-41c9-aa24-35eb90ff9453>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00323-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem 15
January 15th 2007, 05:35 PM #1
Global Moderator
Nov 2005
New York City
Problem 15
Behold! A plain simple isosceles triangle appears beside you. Frozen in a white boring background this triangle is full of surprises. You must locate the angle by the red lines.
The angle is not 30, it is 20, sorry..
Last edited by ThePerfectHacker; January 18th 2007 at 07:49 PM.
I only got this far...
Hmmm... would it help if values for lengths could be assigned to the side? Being an isosceles triangle, I don't see a problem with that...
Then it could be solved with a bit of trigonometry, am I close to the solution? Sine rule, cosine rule, etc.
You are not close to the solution, but that approach works. My solution to the problem involved trigonometry, but the "official" answer does not.
Sorry, the angle should be 20 on top. Not that it cannot be solved otherwise; the solution just looks cleaner.
Indeed, that makes it far, far easier. Well, still, this problem took a lot of time and is definitely far harder than it looks.
Any way,
Don't read on if you're still planning on solving this.
Going around the triangle, starting at the bottom left going clockwise, I labelled the triangle A, B, C, where the line going from C intersects AB at point D, and the final point I labelled as E. I
initially started out by finding all the corresponding angles, just as anthmoo had done. Obviously, they have now changed as a result of the angle change at angle B. I created a perpendicular
from AB to point E, and labelled this new point F. Create a bisector from angle A to angle E; parallel transport DE to a point G on BC. Extend a line from angle B to H, and mark H as the
intersection of the parallel transported line and the extension of EF. Fill in the corresponding angles by knowing the triangle is an isosceles triangle, and thus the bottom two angles, angles A
and C have to be 80 each. The other angles are self-evident by supplementary angles and knowing there is 180 degrees in a triangle. We have a kite formed, and we know this as a result of
the Angle-Side-Angle postulate comparing triangle BDG and triangle BHG. Connect point D with point G, and thus we have created an isosceles triangle. In fact, we know that triangle ABG is an
isosceles triangle, too, since point G is equidistant from both angle A and angle B. Now, looking back at the triangle formed in the kite, we know that those two triangles are congruent because
BC is perpendicular to DG, as they are the diagonals of the kite we formed. We now fill in the corresponding angles from the right triangles formed. And finally, we know the other two triangles
formed in the kite are congruent by using ASA. After filling in all the corresponding angles and using the fact that there are 180 degrees in a triangle, we now have enough information to
determine that angle E must equal 30 degrees.
A diagram would have made this far easier to explain. Nevertheless, very nice problem.
I wish this problem would have been mine, but it is a famous problem from the 1920's I believe. There is an "official" solution involving a construction but I do not know of it. Thus, I will use
my solution.
Let $r=AB=BC$.
By the law of cosines,
$AC^2=r^2+r^2-2r^2\cos 20^o=2r^2(1-\cos 20^o)=4r^2\sin^2 10^o$.
$AC=2r\sin 10^o$.
Since $AB=BC$, $<A=<C$.
But $<A+<B+<C=180^o$, so $<A=<C=80^o$.
In triangle $AEC$, $<AEC=50^o$.
Thus, $AEC$ is isosceles with $AC=AE$, and so $AE=2r\sin 10^o$.
Consider $ADC$, $<ADC=40^o$.
By the law of sines,
$\frac{AC}{\sin 40^o}=\frac{AD}{\sin 80^o}$.
$\frac{2r\sin 10^o}{\sin 40^o}=\frac{AD}{2\sin 40^o\cos 40^o}$.
$AD=4r\sin 10^o\cos 40^o=4r\sin 10^o\sin 50^o$.
Consider $DEA$, $<EDA=x$ thus, $<DEA=160^o -x$.
By the law of sines,
$\frac{AE}{\sin x}=\frac{AD}{\sin (160^o-x)}$
$\frac{2r\sin 10^o}{\sin x}=\frac{4r\sin 10^o\sin 50^o}{\sin (x+20^o)}$
Now we just get to the trigonometry solving.
$2r\sin 10^o \sin (x+20^o)=4r\sin x \sin 10^o \sin 50^o$ ( $r \neq 0$)
$\sin (x+20^o)=2\sin x\sin 50^o$
$\sin x\cos 20^o + \cos x\sin 20^o=2\sin x\sin 50^o$
$\sin x\cos 20^o +\cos x\sin 20^o - 2\sin x\sin 50^o=0$
$\cos x\sin 20^o +\sin x (\cos 20^o-2\sin 50^o)=0$
$\cos x \sin 20^o=\sin x (2\sin 50^o - \cos 20^o)$ ( $\cos x \neq 0$).
$\sin 20^o = \tan x (2\sin 50^o - \cos 20^o)$
$\tan x = \frac{\sin 20^o}{2\sin 50^o-\cos 20^o}$
$\tan x = \frac{\sin 20^o}{2\sin (20^o+30^o)-\cos 20^o}$
$\tan x = \frac{\sin 20^o}{2\sin 20^o\cos 30^o+2\sin 30^o\cos 20^o - \cos 20^o}$
$\tan x = \frac{\sin 20^o}{\sin 20^o \sqrt{3}+\cos 20^o - \cos 20^o}=\frac{\sin 20^o}{\sin 20^o \sqrt{3}}$
$\tan x= \frac{1}{\sqrt{3}}=\frac{\sqrt{3}}{3}$, and therefore $x=30^o$.
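As a numerical sanity check of the closed form (a Python sketch, not part of the original thread):

```python
import math

deg = math.pi / 180
# Closed form derived above: tan x = sin 20 / (2 sin 50 - cos 20)
tan_x = math.sin(20 * deg) / (2 * math.sin(50 * deg) - math.cos(20 * deg))
x = math.degrees(math.atan(tan_x))
print(x)   # ~30 degrees
```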
Very nice. I considered using Law of Cosines, but there is far too much trig., and I haven't done trig in years to recall some of the identities.
Good job.
Here is a beautiful site.
You can learn a lot from this site.
And it gives the solution without trigonometry.
Though I think my solution is simpler.
| {"url":"http://mathhelpforum.com/math-challenge-problems/10072-problem-15-a.html","timestamp":"2014-04-17T04:35:35Z","content_type":null,"content_length":"72209","record_id":"<urn:uuid:17945fc7-2f8f-44f3-bdab-2cdff4ce5ccd>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00433-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: st: Factor variables
From "M.H.Hussein" <mhh5@kentforlife.net>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject RE: st: Factor variables
Date Sun, 17 Feb 2013 22:08:10 +0000
Thanks Richard.
In the command line, I have three variables and the interaction terms between each and a dummy (gt287). Using this command line I am getting the estimates for the coefficients for the variables when i.gt287=0 and the coefficients for the interaction terms.
To calculate the coefficients for the variables when i.gt287=1, I am adding the coefficients for each variable (when i.gt287=0) and the coefficient for the respective interaction term (i.e. the difference).
My question is how I can get the standard errors for the calculated coefficients, i.e. when i.gt287=1?
Hope this is clearer.
Thanks again,
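[As background for the question — an added note, not from the thread: the standard error of a sum b1 + b2 needs the covariance of the two estimates, not just their individual standard errors, since Var(b1 + b2) = Var(b1) + Var(b2) + 2 Cov(b1, b2). A stdlib Python sketch with made-up numbers:

```python
import math

# Hypothetical numbers (not from the thread): b1 = a variable's coefficient
# when gt287 == 0, b2 = the corresponding interaction coefficient.
b1, b2 = 0.40, 0.15
var_b1, var_b2, cov_b1b2 = 0.010, 0.009, -0.004  # from the model's covariance matrix

estimate = b1 + b2                               # coefficient when gt287 == 1
se = math.sqrt(var_b1 + var_b2 + 2 * cov_b1b2)   # sqrt(0.011), about 0.105
print(estimate, se)
```

After estimation, Stata's `lincom` postestimation command computes such linear combinations with the correct standard errors directly, reading the covariance from the fitted model.]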
From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] on behalf of Richard Goldstein [richgold@ix.netcom.com]
Sent: 17 February 2013 21:47
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Factor variables
I do not find this entirely clear; however, if you look at the help file
(-h fvvarlist-), under the "base levels" section, you will see that
"ibn." means "no base level" and this might be what you want
do realize, however, that you are changing the null hypotheses for your
On 2/17/13 4:39 PM, M.H.Hussein wrote:
> Hello Statalisters
> I am running the regression command line below which contains factor variables with interactions. The command line returns coefficients for all variables in the model when the indicator gt287=0 and interaction terms.
> xtreg TCOST i.gt287##c.P_O i.gt287##c.Y_TCOST10 i.gt287##c.enforcement10, fe
> I would to obtain the estimates for coefficients when gt287=1 alongside above estimates, so that I can get standard errors and t statistics for all coefficients.
> Could you tell me if this possible and if so how I can write the command to get all estimates in a single command line?
> Thanks,
> Mohamud
> -- Kentforlife.net - the email service for alumni of the University of Kent
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
| {"url":"http://www.stata.com/statalist/archive/2013-02/msg00666.html","timestamp":"2014-04-16T19:19:39Z","content_type":null,"content_length":"10270","record_id":"<urn:uuid:ec7713dd-f48f-42b5-ab8e-a44edba90bf0>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
statistics help needed
Posted by Jordana on Sunday, February 22, 2009 at 2:20am.
In the game of roulette, a steel ball is rolled onto a wheel that contains 18 red, 18 black, and 2 green slots. If the ball is rolled 24 times, find the probability of the following events.
A. The ball falls into the green slots 4 or more times.
Probability =
B. The ball does not fall into any green slots.
Probability =
C. The ball falls into black slots 11 or more times.
Probability =
D. The ball falls into red slots 12 or fewer times.
Probability =
• statistics help needed - drwls, Sunday, February 22, 2009 at 2:31am
We will be glad to critique your thinking.
• statistics help needed - Jordana, Sunday, February 22, 2009 at 2:41am
I don't know if I use the poisson function in excel. I just don't know how to start the problem.
I can find the probability of hitting a green slot in one time but not multiple times.
• statistics help needed - drwls, Sunday, February 22, 2009 at 3:36am
Poisson statistics is not the only way to do these problems, but it can provide an approximate result for some.
For B, the probability is that of no-green 24 times in a row. (36/38)^24 = (18/19)^24 = 0.273. That was easy.
For A, add the probabilities of getting green 4,5,6...24 etc times in 24 attempts. The sum will rapidly converge.
Probability of 4 green:
(1/19)^4*(18/19)^20*C(24,4)= 0.02765
Probability of 5:
(1/19)^5*(18/19)^19*C(24,5) = 0.00614
Probability of 6:
(1/19)^6*(18/19)^18*C(24,6) = 0.00108
Probability of 7:
(1/19)^7*(18/19)^17*C(24,7) = 0.00015
Probability of 4 or more: 0.0350
If a Poisson distribution is used, for n = 24 spins with p = 1/19 probability of green each time, a = np = 1.26316
P(4) = a^4*e^-1.263/4! = 0.0300
You still have to add up P(5), P(6) etc.
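For completeness, all four binomial answers can be computed exactly (a Python sketch added here; it is not part of the original exchange):

```python
from math import comb

def binom_pmf(k, n, p):
    # probability of exactly k successes in n trials with success probability p
    return comb(n, k) * p**k * (1 - p)**(n - k)

n = 24
p_green, p_black, p_red = 2/38, 18/38, 18/38

pA = sum(binom_pmf(k, n, p_green) for k in range(4, n + 1))   # 4 or more greens
pB = binom_pmf(0, n, p_green)                                 # no greens
pC = sum(binom_pmf(k, n, p_black) for k in range(11, n + 1))  # 11 or more blacks
pD = sum(binom_pmf(k, n, p_red) for k in range(13))           # 12 or fewer reds
print(pA, pB, pC, pD)   # pA ~ 0.0351, pB ~ 0.2732
```

Parts A and B match the hand computation above; the exact tail in A adds only about 2·10^-5 to the truncated 0.0350.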
| {"url":"http://www.jiskha.com/display.cgi?id=1235287202","timestamp":"2014-04-20T15:54:28Z","content_type":null,"content_length":"10138","record_id":"<urn:uuid:8feb5602-331b-41a4-bb57-3ce82ceecc78>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating Resistor Size and Wattage [Archive] - Parallax Forums
12-12-2011, 11:20 PM
I am having a little difficulty understanding some mathematical equations for determining resistor size and wattage. I understand that :
Resistor = (Input Voltage - Needed Voltage) / Needed Current
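For illustration (an added sketch, not from the forum thread), applying this formula to a 12 V rail dropped to 4 V at 100 mA:

```python
v_in, v_out, i_load = 12.0, 4.0, 0.100   # rail volts, needed volts, needed amps

r = (v_in - v_out) / i_load              # series resistance: 8 V / 0.1 A = 80 ohms
p = (v_in - v_out) * i_load              # dissipated power: 8 V * 0.1 A = 0.8 W
print(r, p)                              # ~80 ohms, ~0.8 W
```

The rail's 10 A figure never enters the formula: it is only the maximum the supply can deliver, and the load determines the actual current. Note also that a plain series resistor holds the output at 4 V only while the load draws exactly 100 mA; for a varying load a regulator is the usual choice.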
But.... If I have a 12 volt rail with 10 Amps current and need 4 volts, 100mA, how would I plug in the input amperage? Does that make a difference? I figure it would since 10A to 100mA is a HUGE drop
and would cause heat. | {"url":"http://forums.parallax.com/archive/index.php/t-136609.html","timestamp":"2014-04-21T07:54:15Z","content_type":null,"content_length":"31029","record_id":"<urn:uuid:e8273718-1078-46aa-8d64-91a7aec95b61>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Primer in Combinatorics
I had the great good fortune to spend my undergraduate years at UCLA in the 1970s, when their roster of number theorists included a cadre of scholars who were also exceptionally active in
combinatorics. For instance, there were Bruce Rothschild (properly a Ramsey theorist, I guess), Basil Gordon (to this day the clearest lecturer I have ever seen), and the late E. G. Straus, from whom
I took my very first number theory course, in my first year: I was immediately hooked.
As time went on, however, I gravitated rather emphatically toward algebraic number theory, still often under the tutelage of Straus and Gordon, but now with V. S. Varadarajan taking charge of me (for
which I will always be thankful) and instilling in me a passion for everything from class field theory to the theory of automorphic forms, and all this with something of a French flavor: a book by
Weil, a book by Serre, a lecture series featuring adèlic and idèlic methods, and so on. So combinatorics, and much else, was largely eclipsed by any and all things to do with algebraic number fields:
Varadarajan is both a deep and a broad mathematician, but he is not a combinatorialist.
Thus, despite wonderful and abundant opportunities I never properly pursued this “art of counting” and have over the years, happily more off than on, found this gap to be a burden. For one thing,
generating functions are beautiful and useful things and I have experienced at least two occasions where they might have been of value in a research context. I had to find another way, however,
possibly less elegant and more opaque.
Another example from my personal experience along these lines is found in the subject of graph theory. In fact, something very close to my heart, the vast strategy of analytic methods in number
theory, i.e. the exploitation of the foibles and idiosyncrasies of special functions to study algebraic number fields — typically with something like Fourier analysis packed along in one’s toolkit —
can sometimes also benefit rather dramatically from a dose of graph theory: UCSD’s Audrey Terras (my advisor, now emerita) has been playing with zeta functions of graphs for quite a while now, for
Well, that’s a lot of convincing propaganda for the cause, is it not? Should not analytic and algebraic number theorists, e.g. automorphic formers (for instance), learn some combinatorics in earnest,
even if it be somewhat off their beaten track? There is a strong case to be made here, I believe (and propose to follow suit myself).
Happily, along comes Alexander Kheyfits’ book. It is good news, I think, at least for some one like me, that Kheyfits is, as far as his stated specialty is concerned, a complex analyst and potential
theorist, as opposed to a combinatorialist down to the bone marrow: it makes his presentation of the subject, in this aptly titled Primer in Combinatorics, more user-friendly and pain-free. This
certainly jives with the relatively easy pace of Kheyfits’ presentation and the care he takes in developing his themes, making appropriate use of (many) examples and illustrations. And there are,
equally appropriately, scads of exercises: perhaps more so than any other subject, combinatorics is a learn-by-doing affair: Fingerspitzenkunde and all that…
Qua specifications, the book is proposed as a one-semester text for “a course in combinatorics with elements of graph theory,” and is pitched at the level of undergraduates and (particularly with
Ch’s 4, 5 in the mix) beginning graduate students.
A Primer in Combinatorics is split into two parts, the first being “Introductory Combinatorics and Graph Theory,” the second consisting of Ch’s. 4 and 5, “Combinatorial Analysis.” Ch. 1 deals with
basic counting: permutations, combinations, sum and product rules, &c.; Ch’s. 2 and 3 do graph theory — with élan. After this, in Ch. 4, the extremely important topics of “inclusion-exclusion” and
generating functions are covered, while Ch. 5 introduces Ramsey Theory, Hall’s (marriage) theorem, block designs, and “the proof, due to Hilton, of the necessary and sufficient conditions [for] the
existence of Steiner triple systems.” (Wow!).
This said, A Primer in Combinatorics looks to be considerably more than what Kheyfits describes it to be: it is a primer, yes, but there’s a lot more to it than that. The book not only serves to lay
a good foundation in the art of counting for any one interested (and in need of the skill), it will kindle and foster a genuine enthusiasm for the artistry that comes with its practice.
Michael Berg is Professor of Mathematics at Loyola Marymount University in Los Angeles, CA. | {"url":"http://www.maa.org/publications/maa-reviews/a-primer-in-combinatorics","timestamp":"2014-04-17T17:05:14Z","content_type":null,"content_length":"98936","record_id":"<urn:uuid:9080a9b1-b619-44f2-829b-c437ab977b5a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00259-ip-10-147-4-33.ec2.internal.warc.gz"} |
Acyclic lists and innumerable trees
Is it possible to express infinite and even uncountable sets in a programming language? If it is, what good can it do?
Infinite strings, trees and sequences seem like useless things. It takes enormously long time to compute them, or to print or compare them. However, in many situations a particular application
examines only a finite subset of a potentially infinite structure. Alas, often one can't tell in advance how big this subset is: therefore, one can't simply precompute it. It doesn't mean though that
we have to compute the whole infinite structure and then hand it over to the application. We only need to make a "next" element available whenever the application asks for one. Lazy computation is
such a remarkable invention indeed. Furthermore, sometimes we can manipulate infinities "symbolically" by operating on their generating functions.
This article initially aimed to give an example of a list that is neither proper nor improper nor cyclic. The example was later extended to the construction of an infinite tree. This stirred up a
long discussion: whether the number of leaves in the tree is countable, whether constructive reals can be effectively enumerated, and if it makes sense to talk about (uncountable) infinities in the
constructive context of computer programming. Incidentally, the infinite tree bb that spawned the thread turned out to be a model of surreal numbers.
Cantor would have had fun with Scheme. I wonder if someone uses Scheme to teach or research in set theory or modern algebra.
The present page is a compilation of several articles posted on comp.lang.scheme, comp.lang.functional newsgroups Sep 22 through Oct 6, 1999.
Pathological lists
Lists are commonly classified [SRFI-1] into three disjoint sets:
• Proper lists: finite, nil-terminated lists
• Improper: finite, non-nil-terminated lists
• Circular lists. "A circular list is a value such that for every n >= 0, cdr^n(x) is a pair."
I'm afraid I have stumbled on a pathological case, which does not appear to fit either category.
As R5RS, Ch 6.4 specifies, "Some implementations may implement 'implicit forcing,' where the value of a promise is forced by primitive procedures like cdr and +". Let us assume such an implementation
and consider:
(define (succ n)
(cons n (delay (succ (+ 1 n)))))
; this is, btw, the n-th Church numeral
(define (n-times n f x)
(let loop ((n n) (x x))
(if (positive? n) (loop (- n 1) (f x)) x)))
(define b (succ 1))
Note that (n-times n cdr b) is always a pair, for any n. Thus b cannot be a dotted list. It can't be a proper list either as (n-times n cdr b) is never '(). Finally, it is easy to see that
(car (n-times n cdr b)) ==> n+1
for any n. Therefore, (equal? (list-ref b n) (list-ref b m)) is #f for any n not equal m. It seems inappropriate to call list b cyclic as it has no cycles.
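For readers without a Scheme with implicit forcing at hand, the same list can be mimicked in Python (an analogue added for illustration: zero-argument closures stand in for promises, with the forcing made explicit — note that, unlike force, these thunks do not memoize):

```python
def succ(n):
    # (cons n (delay (succ (+ 1 n)))) with a thunk as the "promise"
    return (n, lambda: succ(n + 1))

def list_ref(lst, n):
    for _ in range(n):
        lst = lst[1]()      # force the tail, one cdr at a time
    return lst[0]

b = succ(1)
print([list_ref(b, n) for n in range(5)])   # [1, 2, 3, 4, 5]
```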
An infinite spreading tree bb
As even more interesting case, consider
(define (succsucc n)
(cons (delay (succsucc (* 2 n)))
(delay (succsucc (+ 1 (* 2 n))))))
(define bb (succsucc 0))
It represents an infinite full binary tree spreading out in both dimensions.
Why the tree bb represents an uncountable set (continuum)
Isomorphism to a set of all binary strings
By construction, the tree bb is isomorphic to a set of all binary strings (including the infinite ones), modulo disregarding trailing zeros as usual. This set obviously has the cardinality of 2^ℵ₀,
and is uncountable.
Reply to a counter-argument: "there are never actual elements"
One article in a discussion thread for the tree bb posed a counter-argument:
That is, it's an infinite loop. Well, of sorts. One does get a tree of promises, but each promise only makes more promises. There are never any actual value elements.
Nevertheless, one can define an isomorphism between the tree bb and a set of all binary strings in the following way: Consider a path in the tree starting at the root and never ending. It corresponds
to one particular binary string. If the string is finite, the path ends with an infinite sequence of "zeros" (read: left edges or nodes, or cars). Infinite binary strings correspond to other paths.
It appears that there is a path for each binary string, and there is a binary string for any path. Hence the isomorphism. The set of all binary strings is known to be uncountable.
In the above derivation, the fact that tree nodes (cons cells) have no values is irrelevant. What's important is that every node has the left and the right child, and they are different (not eq?).
Can all the paths be indexed? No, by a diagonal slash
The discussion thread presented arguments concerning the possibility of indexing all paths in the tree bb. For example:
Perhaps you mean:
(define (succsucc1 n)
(cons (if (odd? n) 1 0)
(cons (delay (succsucc1 (* 2 n)))
(delay (succsucc1 (+ 1 (* 2 n)))))))
(define bb1 (succsucc1 0))
It is countable, however. Each implied binary string can be thought of as a binary integer, which is its index.
Let's consider this tree truncated at depth d. Obviously there are 2^d distinct paths in this tree that start at the root and end at a leaf -- just as there are 2^d leaves. Each path (leaf) has a
binary string associated with it. To enumerate all paths/leaves, we need 2^d numbers. If we increase the depth by 1, we have to double the quantity of numbers to enumerate paths/leaves. If d
increases to infinity (c[0]), we "run out" of numbers to assign to every path...
Every path in trees bb or bb1 of a finite length can be enumerated. The question is: can we count all the infinite paths from the root? A diagonal slash argument makes it clear that one cannot assign
a natural number to every infinite path in bb.
Confusion between nodes and paths
The conclusion of the innumerability of paths in the tree bb may be puzzling and frequently leads to a confusion because the number of nodes in the tree is indeed countable.
It is important however to draw a distinction between nodes and paths. In a finite tree, these two are equivalent. The set of paths is isomorphic to the set of nodes (this can be taken as the
definition of a tree as a prefix-complete set of paths). Indeed, in a finite tree, each path from a root ends at some node, and for each node there is a unique path from the root (or any other node).
An infinite tree, such as the tree bb, admits infinite paths, for example, left, right, left, right, ..... Each node in bb has the left neighbor and the right neighbor. There is always a possibility
to go left or right, from any place. The tree admits many other infinite paths, e.g., left, left, right, left, left, right, ... or the paths with the sequence of 'left' and 'right' induced by the
binary representation of PI. In an infinite tree, the isomorphism of nodes and paths no longer holds: there are infinite paths in the tree that do not end at all. Consider a set of numbers in [0,1]
expressed as infinite decimal fractional sequences. If we limit the decimal sequences to some length, their set is countable. Indeed, if we truncate the decimal representation of a number we get a
rational number, and the set of rational numbers is countable. If we allow arbitrary infinite decimal expansions, the set of such numbers become uncountable. The extra size comes from the numbers
that correspond to the expansion that continue forever without any regular pattern.
It is easy to see and prove that for each (finite or infinite) binary string, the tree in question contains the corresponding path. Because the set of all binary strings is uncountable, so is the set
of the paths admitted by the tree. The tree bb is not only infinite, it is irregular.
Surreal Numbers and Many-Worlds
Last updated October 1, 2006
This site's top page is http://pobox.com/~oleg/ftp/
oleg-at-pobox.com or oleg-at-okmij.org
Your comments, problem reports, questions are very welcome! | {"url":"http://www.okmij.org/ftp/Computation/uncountable-sets.html","timestamp":"2014-04-20T23:27:57Z","content_type":null,"content_length":"11390","record_id":"<urn:uuid:59f6e292-054a-4edd-91e9-7a9a71054300>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prove: Interior is an open set
February 20th 2010, 12:45 AM
Prove: Interior is an open set
Let (X,d) be a metric space and let F be a subset of X. B(r,x) is the open ball of radius r about x.
Definition: The interior of F, int(F), is the set of all x E F such that there is an r > 0 such that B(r,x) is contained in F.
Definition: Let D be a subset of X. By definition, D is open iff for all a E D, there exists r>0 such that B(r,a) is contained in D.
Using these definitions, prove that int(F) is an open set.
Let x E int(F), then there exists r>0 such that B(r,x) is contained in F.
We need to prove that there exists some r' >0 s.t. B(r',x) is contained in int(F). How can we prove this?
Any help is greatly appreciated!
February 20th 2010, 02:08 AM
As a very brief hint, try taking r' = r/2 and using the triangle inequality.
February 20th 2010, 06:00 AM
That is a strange definition for the interior. The way I have most often seen it defined is as the union of all open subsets of the set.
February 20th 2010, 01:02 PM
If $y\in B(r/2,x)$ then $B(r/2,y)\subseteq B(r,x)\subseteq F$, because $d(z,y)<r/2$ and $d(y,x)<r/2$ together imply that $d(z,x)<r$, by the triangle inequality. Therefore $B(r/2,x)\subseteq \text{int}(F)$, which shows that $\text{int}(F)$ is open.
Is time a vector quantity?
Well, how would you do vector addition for time?
My point is that if time is a scalar, then how does it move in a direction, which is from present to the future?
I also heard that spacetime is a vector quantity.
Time is a scalar quantity... time is not defined by having a direction. It is a bit hard to explain, but if you study Einstein's theory of special relativity in detail you might give a better explanation than me.
What about the argument I have written above, that it should have a direction?
Well, you kinda say it yourself... it always goes from present to the future... if the vectors of time always have the same direction, then it would be useless to talk about a vector.
@Frostbite So you agree it goes in a certain direction, from present to future? Maybe time travel may be possible soon, you cannot say anything :/
Well, time travel you can argue against easily, because if you can time travel you violate the law of conservation of energy.
@Frostbite Well, I asked my teacher "Is time a vector quantity?" and he was like "stop asking me such stupid questions" lol :P As we progress, many things are being proved wrong. The latest research says that we can go into the future, I guess, but not into the past.
Moreover, if time is a scalar quantity, how do we talk about present, past and future? We are actually travelling forward in time at a certain rate.
@naveenbabbar What is your idea on this matter?
If we talk about Einstein's theory of special relativity, then according to that, spacetime is a vector.
Will you call current a vector? It has both direction and magnitude. Hmmm?
I would rather say it transforms as a contravariant of vectors...
Of course not. The reason being, current does not follow the vector laws of addition and subtraction! A vector is a quantity having both direction and magnitude that also follows the vector laws of addition and subtraction. Hence time is not a vector.
But time moves in a certain direction?
CURRENT ALSO MOVES. Why don't you compare with that?
But a vector is defined as something which has a certain direction and magnitude.
AND FOLLOWS THE VECTOR LAWS OF ADDITION... that is very important! Current is not a vector, since it does not follow that addition law.
So a vector should follow the laws of addition? Otherwise, it is not a vector.
Ok, thanks :) How is spacetime a vector quantity then?
Is it?
Because you think of it as a dimension... however, that is wrong... spacetime is not a vector quantity either.
Okay thanks a lot @Frostbite @shubhamsrg
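The current example in the thread can be checked with numbers: two currents of 3 A and 4 A feeding the same node combine to 7 A by Kirchhoff's current law, not the 5 A that a parallelogram-law "vector sum" at a 90-degree junction would give. A toy illustration (the junction geometry is invented purely for the comparison):

```python
import math

i1, i2 = 3.0, 4.0          # currents (A) in two wires meeting at a node
angle = math.pi / 2        # the wires meet at 90 degrees

# Currents obey Kirchhoff's current law: they add as scalars.
scalar_sum = i1 + i2       # 7 A actually flows out of the node

# If current were a true vector, a 90-degree junction would give the
# parallelogram-law resultant instead (about 5 A here).
vector_sum = math.sqrt(i1**2 + i2**2 + 2 * i1 * i2 * math.cos(angle))
```

This is exactly the "does not follow the vector law of addition" test from the thread: having a direction attached is not enough to make a quantity a vector unless it also composes like one.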
One Step Equations Worksheets
To solve a one-step equation, we isolate the variable by undoing the operation applied to it. That is, if a number is added to the variable, subtract that number from both sides to isolate the variable. Similarly, use addition to undo a subtraction equation, division to undo a multiplication equation, and multiplication to undo a division equation.
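The inverse-operation rule can be written down as a tiny solver; the (operation, constant, right-hand side) representation of an equation is invented here purely for illustration:

```python
def solve_one_step(op, b, c):
    """Solve a one-step equation in x of the form 'x op b = c'.
    op is one of '+', '-', '*', '/'; the inverse operation is applied
    to both sides to isolate x."""
    inverse = {
        '+': lambda: c - b,   # x + b = c  ->  x = c - b
        '-': lambda: c + b,   # x - b = c  ->  x = c + b
        '*': lambda: c / b,   # x * b = c  ->  x = c / b
        '/': lambda: c * b,   # x / b = c  ->  x = c * b
    }
    return inverse[op]()

print(solve_one_step('+', 7, 12))   # x + 7 = 12  ->  5
print(solve_one_step('*', 4, 10))   # 4x = 10     ->  2.5
```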
MathWorksheets4Kids.com offers free worksheets on one-step equations at different levels, for beginners and more advanced students alike. This page provides addition, subtraction, multiplication and division worksheets based on integers, fractions and decimals, along with mixed worksheets combining the above categories for extra practice. You can pick questions by category (integers, fractions or decimals; addition, subtraction, multiplication or division) and by level of difficulty. These worksheets are suitable for grade 5 and up.
st: element wise division of matrices
st: element wise division of matrices
From "Victor, Jennifer Nicoll" <jnvictor@pitt.edu>
To "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject st: element wise division of matrices
Date Wed, 30 Jul 2008 15:53:10 -0400
Hello list:
Thanks to Nick for recommending a running sum of matrices for my problem yesterday. It worked like a dream. Now I have a new problem.
I have created two matrices in Stata using the matrix commands. One is called "rawsum," a 438x1010 matrix. The other is "bothvoting," also a 438x1010 matrix.
I need to create matrix agreecount=(rawsum+bothvoting)/2. No problem.
Now I need to create matrix agreerate=agreecount/bothvoting, where each i,j of 'agreecount' is divided by each i,j of 'bothvoting'. I cannot find a command in Stata's matrix commands to do this type of element-wise division.
So, I've moved into Mata. Mata has a simple element-wise division command: X=A:/B.
But, I am a novice Mata-user and I cannot figure out how to get my 438x438 matrix into Mata. So, I've converted it back to a dataset using the 'svmat' command and I'm trying to get the data into Mata using the 'st_view' command. My data has 438 observations and 438 variables. Essentially, I want to use the command:
st_view(x., ., "agreecount1",..."agreecount438")
...then do the same with the 'bothvoting' data, then execute the e-w division command.
I've tried using the 'for' command but cannot figure out the successful execution of a loop in Mata because the syntax is different than Stata's.
How can I do this? Or can you provide a more elegant solution?
Thank you.
Jennifer Nicoll Victor
Assistant Professor
Department of Political Science
University of Pittsburgh
4600 Wesley W. Posvar Hall
(412) 624-7204
E-mail: jnvictor@pitt.edu
Homepage: http://www.pitt.edu/~jnvictor
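One more direct route, not suggested in the message above but using only standard Mata functions (sketched here, untested against the poster's data): skip -svmat- and st_view() entirely, since st_matrix() can read a named Stata matrix into Mata and write one back.

```stata
mata:
    // Pull the Stata matrices into Mata by name.
    A = st_matrix("agreecount")
    B = st_matrix("bothvoting")
    // Element-wise division, then store the result as a Stata matrix.
    st_matrix("agreerate", A :/ B)
end
```

This avoids the round trip through a dataset and the 438-variable st_view() altogether.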
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Mount Laurel ACT Tutor
...I tutored elementary math on a daily basis for eight years. I have experience with the following programs: Developmental Math, Miquon Math, and Teaching Textbooks. Study skills are imperative
in becoming an independent student.
23 Subjects: including ACT Math, reading, writing, geometry
...As a certified teacher and a professional SAT teacher, I have had a lot of experience re-structuring information for students, especially in mathematics. I graduated from TCNJ with a bachelor's
degree in Biology and am also a certified Biology teacher. While working on my Masters in Science Edu...
37 Subjects: including ACT Math, chemistry, reading, writing
I graduated from West Point with a Bachelor of Science degree in Engineering Management, and I currently teach mathematics, physics and engineering at an independent school in the Philadelphia
suburbs. I have tutored middle and high school students in the areas of PSAT/SAT/ACT preparation, math (Al...
19 Subjects: including ACT Math, English, physics, calculus
...We then used the scale that the car was built with to find how fast the rubber band car would go if it were large enough to fit a human driver. The students laughed when they realized most of
them are able to walk faster than a life-sized rubber band car. I strive to be a facilitator in my students' learning rather than imposing math on them.
9 Subjects: including ACT Math, geometry, algebra 1, algebra 2
...Aside from teaching in public high schools, I have served as a writing tutor and Teaching Fellow at Seton Hall University. I began tutoring students preparing for the SAT in 2006. I have a BA
and MA in English and I work as a high school English teacher, so I am very familiar with not only the test but also the material.
16 Subjects: including ACT Math, reading, English, writing
Irvington, NJ Math Tutor
Find an Irvington, NJ Math Tutor
...I completed both high school and college level General Chemistry. I am self taught. I have been using Microsoft programs for years for science projects, research project and lab reports to
analyze data.
18 Subjects: including algebra 1, algebra 2, biology, calculus
...Experienced tutoring precalculus and (mainly) calculus, starting from the bottom to build a rock solid foundation. I was born in Spain, lifetime bilingual. Education at a Spanish university.
17 Subjects: including calculus, prealgebra, trigonometry, statistics
...I am certified in New York and familiar with the logistics and scoring of the SAT math exam. I taught high school math (Algebra 1 through Calculus) for 8 years, and I am expert in all math
concepts tested on the SAT exam. I taught high school math (Algebra 1 through Calculus) for 8 years and am certified in New York.
10 Subjects: including algebra 1, algebra 2, calculus, prealgebra
...I have over four years of experience in tutoring students for the SAT Critical Reading and Writing sections. I have found great success by emphasizing both testing strategies and fundamental
reading and vocabulary skills to help students not only perform better on the test, but become overall be...
15 Subjects: including SAT math, English, reading, writing
...I have also spent many years participating in computer programming competitions, and I competed almost exclusively in Pascal/Delphi (or Kylix, the unix version of pascal). If you are just
starting to learn computer programming, Pascal is a great first language. It is a bit outdated, though. I ...
32 Subjects: including algebra 2, statistics, discrete math, logic
For more up-to-date notes see http://www.cs.yale.edu/homes/aspnes/classes/465/notes.pdf.
Failure detectors were proposed by Chandra and Toueg ("Unreliable failure detectors for reliable distributed systems," JACM 43(2):225–267, 1996) as a mechanism for solving consensus in an asynchronous message-passing system with crash failures by distinguishing between slow processes and dead processes. The basic idea is that each process has attached to it a failure detector module that continuously outputs an estimate of which processes in the system have failed. The output need not be correct; indeed, the main contribution of Chandra and Toueg's paper (and a companion paper by Chandra, Hadzilacos, and Toueg, "The weakest failure detector for solving consensus," PODC 1992, pp. 147–158) is characterizing just how bogus the output of a failure detector can be and still be useful.
We will mostly follow Chandra and Toueg in these notes; see the paper for the full technical details.
To emphasize that the output of a failure detector is merely a hint at the actual state of the world, a failure detector (or the process it's attached to) is said to suspect a process at time t if it reports that process as failed at that time. Failure detectors can then be classified based on when their suspicions are correct.
We use the usual AsynchronousMessagePassing model, and in particular assume that non-faulty processes execute infinitely often, get all their messages delivered, etc. From time to time we will need
to talk about time, and unless we are clearly talking about real time this just means any steadily increasing count (e.g., of total events), and will be used only to describe the ordering of events.
How to build a failure detector
Failure detectors are only interesting if you can actually build them. In a fully asynchronous system, you can't (this follows from the FischerLynchPaterson result and the existence of
failure-detector-based consensus protocols). But with timeouts, it's not hard: have each process ping each other process from time to time, and suspect the other process if it doesn't respond to the
ping within twice the maximum round-trip time for any previous ping. Assuming that ping packets are never lost and there is an (unknown) upper bound on message delay, this gives what is known as an
eventually perfect failure detector: once the max round-trip times rise enough and enough time has elapsed for the live processes to give up on the dead ones, all and only dead processes are suspected.
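A minimal sketch of the ping/timeout rule using logical time (the Detector class and the simulated pong schedule are invented for illustration):

```python
class Detector:
    """Per-peer eventually-perfect failure detector: suspect the peer
    if no pong has arrived within twice the largest round-trip time
    observed so far."""
    def __init__(self):
        self.max_rtt = 1          # initial round-trip estimate
        self.last_pong = 0        # logical time of the last pong

    def on_pong(self, sent_at, now):
        self.max_rtt = max(self.max_rtt, now - sent_at)
        self.last_pong = now

    def suspects(self, now):
        return now - self.last_pong > 2 * self.max_rtt

d = Detector()
# A live peer answers pings; round trips take up to 3 time units.
for t in range(0, 30, 5):
    d.on_pong(sent_at=t, now=t + 3)
    assert not d.suspects(now=t + 3)

# The peer crashes after its pong at time 28: no more pongs arrive,
# and once 2 * max_rtt elapses the detector permanently suspects it.
assert not d.suspects(now=33)   # within the timeout, still trusted
assert d.suspects(now=40)       # 40 - 28 > 2 * 3
```

As long as pongs keep arriving, max_rtt only grows, so a slow-but-live peer is eventually trusted again; a dead peer is eventually suspected forever.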
Classification of failure detectors
Chandra and Toueg define eight classes of failure detectors, based on when they suspect faulty processes and non-faulty processes. Suspicion of faulty processes comes under the heading of
completeness; of non-faulty processes, accuracy.
Degrees of completeness
Strong completeness
Every faulty process is eventually permanently suspected by every non-faulty process.
Weak completeness
Every faulty process is eventually permanently suspected by some non-faulty process.
There are two temporal logic operators embedded in these statements: "eventually permanently" means that there is some time t[0] such that for all times t ≥ t[0], the process is suspected. Note that
completeness says nothing about suspecting non-faulty processes: a paranoid failure detector that permanently suspects everybody has strong completeness.
Degrees of accuracy
These describe what happens with non-faulty processes, and with faulty processes that haven't crashed yet.
Strong accuracy
No process is suspected (by anybody) before it crashes.
Weak accuracy
Some non-faulty process is never suspected.
Eventual strong accuracy
After some initial period of confusion, no process is suspected before it crashes. This can be simplified to say that no non-faulty process is suspected after some time, since we can take the end of the initial period of chaos to be the time at which the last crash occurs.
Eventual weak accuracy
After some initial period of confusion, some non-faulty process is never suspected.
Note that "strong" and "weak" mean different things for accuracy vs completeness: for accuracy, we are quantifying over suspects, and for completeness, we are quantifying over suspectors. Even a
weakly-accurate failure detector guarantees that all processes trust the one visibly good process.
Failure detector classes
Two degrees of completeness times four degrees of accuracy gives eight classes of failure detectors, each of which gets its own name. Fortunately, the distinction between strong and weak completeness
turns out to be spurious; a weakly-complete failure detector can simulate a strongly-complete one (but this requires a proof). We can use this as an excuse to consider only the strongly-complete
P (perfect)
Strongly complete and strongly accurate: non-faulty processes are never suspected; faulty processes are eventually suspected by everybody. Easily achieved in synchronous systems.
S (strong)
Strongly complete and weakly accurate. The name is misleading if we've already forgotten about weak completeness, but the corresponding W (weak) class is only weakly complete and weakly accurate,
so it's the strong completeness that the S is referring to.
⋄P (eventually perfect)
Strongly complete and eventually strongly accurate.
⋄S (eventually strong)
Strongly complete and eventually weakly accurate.
Jumping to the punch line: P can simulate any of the others, S and ⋄P can both simulate ⋄S but can't simulate P or each other, and ⋄S can't simulate any of the others. Thus ⋄S is the weakest class of
failure detectors in this list. However, ⋄S is strong enough to solve consensus, and in fact any failure detector (whatever its properties) that can solve consensus is strong enough to simulate ⋄S
(this is the result in the Chandra-Hadzilacos-Toueg paper)—this makes ⋄S the "weakest failure detector for solving consensus" as advertised. Continuing our tour through Chandra and Toueg, we'll show
the simulation results and that ⋄S can solve consensus, but we'll skip the rather involved proof of ⋄S's special role from Chandra-Hadzilacos-Toueg.
Boosting completeness
Recall that the difference between weak completeness and strong completeness is that with weak completeness, somebody suspects a dead process, while with strong completeness, everybody suspects it.
So to boost completeness we need to spread the suspicion around a bit. On the other hand, we don't want to break accuracy in the process, so there needs to be some way to undo a premature rumor of
somebody's death. The simplest way to do this is to let the alleged corpse speak for itself: I will suspect you from the moment somebody else reports you dead until the moment you tell me otherwise.
Formally, this looks like:
initially suspects = ∅

do forever:
    for each process p:
        if my weak-detector suspects p, then send p to all processes

upon receiving p from some process q:
    suspects := suspects + p - q
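As a sanity check, the booster can be run in a toy round-based simulation (the synchronous message model and the particular weak-detector outputs below are invented for illustration; the real setting is asynchronous):

```python
from collections import defaultdict

alive = [0, 1, 2]                      # process 3 has crashed
weak_suspects = {                      # outputs of the weak detectors:
    0: {3, 1},                         #   0 suspects dead 3 and (falsely) live 1
    1: {3},                            #   1 also suspects dead 3
    2: set(),                          #   2 suspects nobody (weak accuracy holds for 2)
}

suspects = {p: set() for p in alive}

for _ in range(2):                     # a couple of gossip rounds
    inbox = defaultdict(list)
    for p in alive:                    # dead processes send nothing
        for s in sorted(weak_suspects[p]):
            for q in alive:
                inbox[q].append((s, p))
    for q in alive:
        for s, sender in inbox[q]:
            suspects[q].add(s)           # suspects := suspects + s ...
            suspects[q].discard(sender)  # ... - sender (the sender is clearly alive)

assert all(3 in suspects[p] for p in alive)      # strong completeness
assert all(1 not in suspects[p] for p in alive)  # 1 cleared itself by speaking
```

Note how the false rumor about process 1 is undone exactly as in the argument above: 1 sends its own report, and every recipient drops 1 from its suspect set on receipt.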
It's not hard to see that this boosts completeness: if p crashes, somebody's weak-detector eventually suspects it; that process tells everybody else, and p never contradicts the report. So eventually
everybody suspects p.
What is slightly trickier is showing that it preserves accuracy. The essential idea is this: if there is some good-guy process p that everybody trusts forever (as in weak accuracy), then nobody ever
reports p as suspect—this also covers strong accuracy since the only difference is that now every non-faulty process falls into this category. For eventual weak accuracy, wait for everybody to stop
suspecting p, wait for every message ratting out p to be delivered, and then wait for p to send a message to everybody. Now everybody trusts p, and nobody ever suspects p again. Eventual strong
accuracy is again similar.
This justifies our ignoring the weakly-complete classes.
Consensus with S
Here the failure detectors attached to most processes are completely useless. However, there is some non-faulty process c that nobody ever suspects, and this is enough to solve consensus with as
many as n-1 failures.
Basic idea of the protocol: There are three phases. In the first phase, the processes gossip about input values for n-1 asynchronous rounds. In the second, they exchange all the values they've seen
and prune out any that are not universally known. In the third, each process decides on the lowest-id input that hasn't been pruned (min input also works since at this point everybody has the same
view of the inputs).
In more detail, in phase 1 each process p maintains two partial functions V[p] and Δ[p], where V[p] lists all the input values (q,v[q]) that p has ever seen and Δ[p] lists only those input values
seen in the most recent of the n-1 asynchronous rounds. V[p] and Δ[p] are both initialized to {(p, v[p])}. In round i, p sends (i,Δ[p]) to all processes. It then collects (i,Δ[q]) from each q that it
doesn't suspect and sets Δ[p] to ∪[q](Δ[q]) - V[p] (where q ranges over the processes from which p received a message in round i) and sets V[p] to V[p]∪Δ[p]. In the next round, it repeats the
process. Note that each pair (q,v[q]) is only sent by a particular process p the first round after p learns it: so any value that is still kicking around in round n-1 had to go through n-1 processes.
In phase 2, each process p sends (n,V[p]), waits to receive (n,V[q]) from every process it does not suspect, and sets V[p] to the intersection of V[p] and all received V[q]. At the end of this phase
all V[p] values will in fact be equal, as we will show.
In phase 3, everybody picks some input from their V[p] vector according to a consistent rule.
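A failure-free toy run of the three phases (the synchronous round loop, the absence of suspicions, and the particular inputs are simplifications for illustration; the real protocol tolerates up to n-1 crashes and only waits for unsuspected processes):

```python
n = 4
inputs = {p: 10 + p for p in range(n)}        # invented inputs v_p

V = {p: {(p, inputs[p])} for p in range(n)}   # everything p has ever seen
D = {p: {(p, inputs[p])} for p in range(n)}   # learned in the latest round

# Phase 1: n-1 rounds of gossip. With no failures and no suspicions,
# every process hears from every process each round.
for _ in range(n - 1):
    received = {p: set().union(*(D[q] for q in range(n))) for p in range(n)}
    for p in range(n):
        D[p] = received[p] - V[p]     # only genuinely new pairs travel on
        V[p] |= D[p]

# Phase 2: exchange full views and intersect them.
common = set.intersection(*(V[p] for p in range(n)))
for p in range(n):
    V[p] = common

# Phase 3: a consistent rule -- take the input of the lowest process id.
decisions = {p: min(V[p])[1] for p in range(n)}
assert len(set(decisions.values())) == 1      # agreement
assert decisions[0] in inputs.values()        # validity
```

With failures and a never-suspected process c in the mix, the argument below shows the same intersection step still forces every V[p] down to V[c].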
Proof of correctness
Let c be a non-faulty process that nobody ever suspects.
The first observation is that the protocol satisfies validity, since every V[p] contains v[c] after round 1 and each V[p] can only contain input values by examination of the protocol. Whatever it may
do to the other values, taking intersections in phase 2 still leaves v[c], so all processes pick some input value from a nonempty list of same in phase 3.
To get termination we have to prove that nobody ever waits forever for a message it wants; this basically comes down to showing that the first non-faulty process that gets stuck eventually is
informed by the S-detector that the process it is waiting for is dead.
For agreement, we must show that in phase 3, every V[p] is equal; in particular, we'll show that every V[p] = V[c]. First it is necessary to show that at the end of phase 1, V[c] ⊆ V[p] for all p.
This is done by considering two cases:
1. If (q,v[q]) ∈ V[c] and c learns (q,v[q]) before round n-1, then c sends (q,v[q]) to p no later than round n-1, p waits for it (since nobody ever suspects c), and adds it to V[p].
2. If (q,v[q]) ∈ V[c] and c learns (q,v[q]) only in round n-1, then (q,v[q]) was previously sent through n-1 other processes, i.e. all of them. Each process p ≠ c thus added (q,v[q]) to V[p] before
sending it and again (q,v[q]) is in V[p].
(The missing case where (q,v[q]) isn't in V[c] we don't care about.)
But now phase 2 knocks out any extra elements in V[p], since V[p] gets set to V[p]∩V[c]∩(some other V[q]'s that are supersets of V[c]). It follows that at the end of phase 2 V[p] = V[c] for all p.
Finally in phase 3 everybody applies the same selection rule to these identical sets and we get agreement.
Consensus with ⋄S and f < n/2
The consensus protocol for S depends on some process c never being suspected; if c is suspected during the entire (finite) execution of the protocol—as can happen with ⋄S—then it is possible that no
process will wait to hear from c (or anybody else) and the processes will all decide their own inputs. So to solve consensus with ⋄S we will need to assume less than n/2 failures, allowing any
process to wait to hear from a majority no matter what lies its failure detector is telling it.
The resulting protocol, known as the Chandra-Toueg consensus protocol, is structurally similar to the consensus protocol in Paxos. The difference is that instead of initiators blindly showing up, the
protocol is divided into rounds with a rotating coordinator p[i] in each round r with r = i (mod n). The termination proof is based on showing that in any round where the coordinator is not faulty
and nobody suspects it, the protocol finishes.
Here's the essence of the protocol. It uses as a subroutine a protocol for ReliableBroadcast, which guarantees that any message that is sent is either received by no processes or exactly once by all
non-faulty processes.
• Each process keeps track of a preference (initially its own input) and a timestamp, the round number in which it last updated its preference.
• The processes go through a sequence of asynchronous rounds, each divided into four phases:
1. All processes send (round, preference, timestamp) to the coordinator for the round.
2. The coordinator waits to hear from a majority of the processes (possibly including itself). The coordinator sets its own estimate to some estimate with the largest timestamp of those it
receives and sends (round, estimate) to all processes.
3. Each process waits for the new proposal from the coordinator or for the failure detector to suspect the coordinator. If it receives a new estimate, it adopts it as its own, sets timestamp ←
round, and sends (round, ack) to the coordinator. Otherwise, it sends (round, nack) to the coordinator.
4. The coordinator waits to receive ack or nack from a majority of processes. If it receives ack from a majority, it announces the current estimate as the protocol decision value using ReliableBroadcast.
• Any process that receives a value in a ReliableBroadcast decides on it immediately.
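The data flow of a single successful round, seen from the coordinator's side, can be sketched as follows (all values are invented, the message layer and ReliableBroadcast are stubbed out, and we assume a round in which nobody suspects the coordinator):

```python
n = 5
majority = n // 2 + 1

# Phase 1: every process reports (preference, timestamp) to the coordinator.
reports = {0: (7, 0), 1: (9, 3), 2: (7, 0), 3: (9, 3), 4: (8, 1)}

# Phase 2: the coordinator waits for a majority and adopts an estimate
# carrying the largest timestamp among those it received.
heard = dict(list(reports.items())[:majority])   # any majority will do
estimate = max(heard.values(), key=lambda pt: pt[1])[0]

# Phase 3: nobody suspects the coordinator this round, so all ack.
acks = majority

# Phase 4: with a majority of acks, the estimate is (reliably) broadcast
# and becomes the decision.
if acks >= majority:
    decision = estimate

assert decision == 9   # the highest-timestamp preference wins
```

Picking the highest-timestamp estimate from a majority is what makes the Paxos-style induction in the agreement proof below go through: any later coordinator's majority overlaps the deciding one.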
Proof of correctness
For validity, observe that the decision value is an estimate and all estimates start out as inputs.
For termination, observe that no process gets stuck in phase 1, 2, or 4, because either it isn't waiting or it is waiting for a majority of non-faulty processes who all sent messages unless they have
already decided (this is why we need the nacks in phase 3). The loophole here is that processes that decide stop participating in the protocol; but because any non-faulty process retransmits the
decision value in the ReliableBroadcast, if a process is waiting for a response from a non-faulty process that already terminated, eventually it will get the ReliableBroadcast instead and terminate
itself. In phase 3, a process might get stuck waiting for a dead coordinator, but the strong completeness of ⋄S means that it suspects the dead coordinator eventually and escapes. So at worst we do
infinitely many rounds.
Now suppose that after some time t there is a process c that is never suspected by any process. Then in the next round in which c is the coordinator, in phase 3 all surviving processes wait for c and
respond with ack, c decides on the current estimate, and triggers the ReliableBroadcast protocol to ensure everybody else decides on the same value. Since ReliableBroadcast guarantees that everybody
receives the message, everybody decides this value or some value previously broadcast—but in either case everybody decides.
Agreement is the tricky part. It's possible that two coordinators both initiate a ReliableBroadcast and some processes choose the value from the first and some the value from the second. But in this
case the first coordinator collected acks from a majority of processes in some round r, and all subsequent coordinators collected estimates from an overlapping majority of processes in some round r'
> r. By applying the same induction argument as for Paxos we get that all subsequent coordinators choose the same estimate as the first coordinator, and so we get agreement.
f < n/2 is still required even with ⋄P
We can show that with a majority of failures, we're in trouble with just ⋄P (and thus with ⋄S, which is trivially simulated by ⋄P). The reason is that ⋄P can lie to us for some long initial interval
of the protocol, and consensus is required to terminate eventually despite these lies. So the usual partition argument works: start half of the processes with input 0, half with 1, and run both
halves independently with ⋄P suspecting the other half until the processes in both halves decide on their common inputs. We can now make ⋄P happy by letting it stop suspecting the processes, but it's
too late.
Relationships among the classes
It's easy to see that P simulates S and ⋄P simulates ⋄S without modification. It's also immediate that P simulates ⋄P and S simulates ⋄S (make "eventually" be "now"), which gives a diamond-shaped
lattice structure between the classes. What is trickier is to show that this structure doesn't collapse: there is no simulation from ⋄P to S, S to ⋄P, or from ⋄S to any of the other classes.
First let's observe that there is no simulation of S by ⋄P: if there were, we would get a consensus protocol for f ≥ n/2 failures, which we can't do. It follows that ⋄P can't simulate P (which can
simulate S).
To show that S can't simulate ⋄P, choose some non-faulty victim process v and consider an execution in which S periodically suspects v (which it is allowed to do as long as there is some other
non-faulty process it never suspects). If the ⋄P-simulator ever responds to this by refusing to suspect v, there is an execution in which v really is dead, and the simulator violates strong
completeness. But if not, we violate eventual strong accuracy. Note that this also implies S can't simulate P, since P can simulate ⋄P. It also shows that ⋄S can't simulate either of ⋄P or P.
We are left with showing ⋄S can't simulate S. Consider a system where p's ⋄S detector suspects q but not r from the start of the execution, and similarly r's ⋄S detector also suspects q but not p.
Run p and r in isolation until they give up and decide that q is in fact dead (which they must do eventually by strong completeness, since this run is indistinguishable from one in which q is
faulty). Then wake up q and crash p and r. Since q is the only non-faulty process, we've violated weak accuracy.
Chandra and Toueg give as an example of a natural problem that can be solved only with P the problem of Terminating Reliable Broadcast, in which a single leader process attempts to send a message and
all other processes eventually agree on the message if the leader is non-faulty but must terminate after finite time with a default "no message" return value if the leader is faulty.^1 The process is
solvable using P by just having each process either wait for the message or for P to suspect the leader, which can only occur if the leader does in fact crash. If the leader is dead, the processes
must eventually decide on no message; this separates P from ⋄S and ⋄P since we can then wake up the leader and let it send its message. But it also separates P from S, since we can have the
S-detector only be accurate for non-leaders. For other similar problems see the paper.
1. This is a slight weakening of the problem, which however still separates P from the other classes. For the real problem see Chandra and Toueg.
Cryptology ePrint Archive: Report 2012/042

Key Length Estimation of Pairing-based Cryptosystems using $\eta_T$ Pairing

Naoyuki Shinohara, Takeshi Shimoyama, Takuya Hayashi and Tsuyoshi Takagi

Abstract: The security of pairing-based cryptosystems depends on the difficulty of the discrete logarithm problem (DLP) over certain types of finite fields. One of the most efficient algorithms for computing a pairing is the $\eta_T$ pairing over supersingular curves on finite fields whose characteristic is $3$. Indeed, many high-speed implementations of this pairing have been reported, and it is an attractive candidate for practical deployment of pairing-based cryptosystems. The embedding degree of the $\eta_T$ pairing is 6, so we deal with the difficulty of a DLP over the finite field $GF(3^{6n})$, where the function field sieve (FFS) is known as the asymptotically fastest algorithm for solving it. Moreover, several efficient algorithms are employed for implementation of the FFS, such as the large prime variation. In this paper, we estimate the time complexity of solving the DLP for the extension degrees $n = 97, 163, 193, 239, 313, 353, 509$, when we use the improved FFS. To accomplish our aim, we present several new computable estimation formulas to compute the explicit number of special polynomials used in the improved FFS. Our estimation contributes to the evaluation of the key length of pairing-based cryptosystems using the $\eta_T$ pairing.

Category / Keywords: public-key cryptography / pairing-based cryptosystems, discrete logarithm problem, finite field, key length, suitable values
Publication Info: This is a full version of an ISPEC 2012 paper.
Date: received 25 Jan 2012, last revised 18 Jun 2012
Contact author: shnhr at nict go jp
Note: Tables 1 and 3 are edited.
Version: 20120619:054704
Journal of the Optical Society of America B
A vectorial finite-difference time-domain (FDTD) method is used to present a numerical study of very narrow spatial solitons interacting with the surface of what has become known as a left-handed
medium. After a comprehensive discussion of the background and the family of surface modes to be expected on a left-handed material, bounded by dispersion-free right-handed material, it is
demonstrated that robust outcomes of the FDTD approach yield dramatic confirmation of these waves. The FDTD results show how the linear and nonlinear surface modes are created and can be tracked in
time as they develop. It is shown how they can move backward or forward, depending on either a critical value of the local nonlinear conditions at the interface or the ambient linear conditions.
Several examples are given to demonstrate the power and versatility of the method and the sensitivity to the launching conditions.
© 2005 Optical Society of America
OCIS Codes
(160.0160) Materials : Materials
(190.0190) Nonlinear optics : Nonlinear optics
(190.5530) Nonlinear optics : Pulse propagation and temporal solitons
(240.0240) Optics at surfaces : Optics at surfaces
(240.5420) Optics at surfaces : Polaritons
(240.6680) Optics at surfaces : Surface plasmons
(240.6690) Optics at surfaces : Surface waves
Allan D. Boardman, Larry Velasco, Neil King, and Yuriy Rapoport, "Ultra-narrow bright spatial solitons interacting with left-handed surfaces," J. Opt. Soc. Am. B 22, 1443-1452 (2005)
Characteristic of an integral domain
December 1st 2012, 07:22 PM #1
Let A be a finite integral domain. Prove that if there is a nonzero a in A such that 256 * a = 0, then A has characteristic 2.
I'm not at all sure how to do this. Any advice is appreciated. Thanks for your time!
Re: Characteristic of an integral domain
First, we know that the characteristic of an ID is p, where p is prime (since the ID is finite).
Second, notice that the additive order of every nonzero element of the integral domain equals the characteristic p.
Take $x \in D$ such that $x \neq 0$. Let the additive order of x be n. Then since $x \neq 0$, n > 1. Since the characteristic is p, we know that $px = 0$, so n must divide p.
Since the only things that divide p are 1 and p, and we said n > 1, it must be that n = p.
Now we are given a nonzero a with 256 * a = 0, so p (the additive order of a) must divide 256 = 2^8.
Since 2 is the only prime which divides 256, p = 2. So A has char 2.
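The argument above can be sanity-checked by brute force on the fields Z/pZ, which are the prototypical finite integral domains (a quick sketch; the function names are mine, not from the thread):

```python
# Brute-force check on Z/pZ: every nonzero element has additive order exactly p,
# and a nonzero a with 256*a = 0 exists only when p divides 256, i.e. p = 2.

def additive_order(a, p):
    """Smallest n >= 1 with n*a == 0 (mod p)."""
    n = 1
    while (n * a) % p != 0:
        n += 1
    return n

for p in (2, 3, 5, 7, 11, 13, 251):
    # (1) additive order of every nonzero element is the characteristic p
    assert all(additive_order(a, p) == p for a in range(1, p))
    # (2) a "256-killer" nonzero element exists iff p divides 256
    killer_exists = any((256 * a) % p == 0 for a in range(1, p))
    assert killer_exists == (p == 2)

print("confirmed: 256*a = 0 with a != 0 forces characteristic 2")
```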
Last edited by jakncoke; December 1st 2012 at 07:47 PM.
December 1st 2012, 07:44 PM #2
[Beowulf] Teaching Scientific Computation (looking for the perfect text)
Joe Landman landman at scalableinformatics.com
Tue Nov 20 15:17:12 PST 2007
Robert G. Brown wrote:
>>> So, I'm thinking about reworking the class to favor C, and fearing 3
>>> weeks of pointer and addressing hell. For those of you who teach
>> It's actually not that painful. I was able to walk through it in one
>> session, explaining what the symbols mean, how to read them, and then
>> showing them in use for a heat diffusion problem in 2D (breaking up a
>> large array into panels).
> Indeed, if you combine pointers with a "here's how the computer REALLY
> works, kids" lecture so that you help them visualize memory and maybe
My first two lectures were on that. Bytes? We don't need no steenkeen
bytes! Cache lines is where its at, yessiree.
One of my favorite examples of showing them that memory is not a "big
honking chunk O ram" is creating an array, and using two loops to walk
through it, then interchanging the order of the loops. Same memory
access, one just takes, well, a little bit longer than the other.
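A quick stand-in for that demo (Python here rather than the C used in class, so the interpreter overhead mutes the effect; the array size and all names are just illustrative):

```python
import array
import time

# The flat array stands in for an n x n matrix stored row-major; the two
# functions touch exactly the same elements, only in different orders.
n = 2000
a = array.array('d', bytes(8 * n * n))   # n*n doubles, all zero

def row_order():
    s = 0.0
    for i in range(n):
        for j in range(n):
            s += a[i * n + j]            # consecutive addresses: cache-friendly
    return s

def col_order():
    s = 0.0
    for j in range(n):
        for i in range(n):
            s += a[i * n + j]            # stride of n doubles: poor locality
    return s

t0 = time.perf_counter(); r = row_order(); t1 = time.perf_counter()
c = col_order(); t2 = time.perf_counter()
print(f"row order {t1 - t0:.2f}s, column order {t2 - t1:.2f}s, sums equal: {r == c}")
```

On typical hardware the column-order walk tends to come out measurably slower, and in C, where loop overhead doesn't dominate, the gap is dramatic.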
I sort of hid the pointer stuff in the matrix setup anyway, and then I
roll it out. It's not too painful, and as long as you talk about it
carefully, they seem to get it (or at least not struggle with it).
> give them a simplified overview of "assembloid" -- a ten instruction
> assember that does nothing but load, store,
> add/subtract/multiply/divide, compare (x3), and branch that helps them
> understand what compilers are actually doing in there, it will doubtless
> help them with both why pointers are astoundingly useful and how/when to
> use them.
I preferred the "pragmatic" examples: those focused upon a particular
problem, and used that problem as the template to discuss how to access
particular things. Once you get them mentally breaking down the
problem, they naturally ask the "how do I" questions, which are a bit
easier to answer than the "why" questions.
>> Sum reductions look the same in all languages. And they can be done
>> incorrectly in all languages :(
> Which is the one reason that they DO need at least the first couple of
> chapters of real numerical methods somewhere in there. Fortran
> encourages you to think that there is a standard FORmula TRANslation for
> any equation based algorithm, and of course in one sense there is. If
> you fail to ever learn the risks of summing e.g. alternating series
> with terms that mostly cancel, one day you are bound to e.g. sum up your
> own spherical bessel functions from the ascending recursion relation and
> wonder why all your answers end up wrong... especially when it WORKS for
> the first three or four values of \ell.
The example I showed in the MPI class was this:
      program sum
      real*4 sum_x,sum_y
      integer i,N
!     N was not in the original post; any large value works
      N = 100000000
      sum_x = 0.0
      sum_y = 0.0
      do i=1,N
         sum_x = sum_x + 1.0/float(i)
      enddo
      print *,' sum = ',sum_x
      do i=N,1,-1
         sum_y = sum_y + 1.0/float(i)
      enddo
      print *,' sum = ',sum_y
      print *,' difference = ',sum_x-sum_y
      end
Compiling and running this usually gets people's attention
landman at lightning:~$ gfortran sumfs.f
landman at lightning:~$ ./a.out
sum = 15.40368
sum = 18.80792
difference = -3.404236
Same sum, just done in different orders. Yes, it is a divergent series in general, but that isn't the reason you get this difference. It's all about roundoff error accumulation. You see this in every language.
The issue is that even good languages can "hide" bad algorithms. It's the algorithms that matter. FWIW: the second sum is much closer to what the double-precision version will report.
> rgb
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax : +1 866 888 3112
cell : +1 734 612 4615
The Black Vault Message Forums
My son in law is right: you are removing the division sign improperly on your last step.
48÷2(12) (Parentheses) Divide either the product (24 into 48), or you can divide them both one at a time (2 into 48 = 24, followed by 12 into 24)
24(12) (Division) The division sign is not removed at this point, the parentheses are--The answer is 2
You are wrong, my son in law is right.
Division and multiplication are calculated with the same priority from left to right. That is because division is multiplication by the reciprocal.
48÷2(12) parentheses first
Now there is a division and a multiplication. By the definition of division, dividing by 2 is the same as multiplying by the reciprocal of 2 which is 1/2 or .5.
Then we have
48 * (1/2) * (9 + 3).
These two multiplications can be carried out in either order, as (48 * 1/2) * 12 = 24 * 12 or as 48 * (1/2 * 12) = 48 * 6, to get
288 either way.
How does your son in law define division by 2?
"it is easy to grow crazy"
Sounds like one of your trick questions At1
You don't have to work it out from left to right; it depends on what rules you are willing to accept and what the context is in math (algebra, arithmetic, etc.). If you accept left to right after the work in parentheses, then he is correct; if you don't, then my answer is. We do not teach what was stated by your opponent; we say "multiplication or division, left to right". There are other properties that are taught that conflict with his premise; he must know these or have an awareness of them. What is associated with the parentheses is usually worked on next, rather than going from left to right.
There's no correct determination in this exercise given multiple competing understandings; its sole purpose is bait, for him to get attention and hold you and others hostage to changing and conflicting contexts.
I don't have a master's in math; I have one in educational management, and I'm a course shy of a second one in political science/public administration. My bachelor of science degree is in criminology. I have enough math courses for a minor in math and to qualify for a teaching credential in math (which is enough units to qualify in most schools for a major in math).
We have done this same problem, as a problem of the week in 6-8th math for the past 16 years, and before I became a teacher as well. It’s a classic, but has more than one interpretation.
The PEMDAS convention, coupled with the definition of division and subtraction, gives one unique evaluation for every well-formed expression. Using them, the result is 288.
"it is easy to grow crazy"
Pretty obvious the question was going to present more than one answer, or why post it at all, other than to do exactly what my Son in Law could see? Fooling someone like me is as easy as the shell and pea game. Like I said, a kind of trick problem. As my Son in Law stated, it depends on what premise you are sticking to, and he also stated you would know that, so let's be honest, you know what he is saying. You also know what he is saying about how students are taught the order in which they would solve this problem in school: the parentheses first, followed by what pertains to them.
As he said, with your thinking you are correct, but with his thinking 2 is correct, and given the way it would be taught, it seems like 2 is not only right, but most proper.
Yes, I have stated that the "answer" depends on what convention you use. However, 2 is not the result of the standard PEMDAS convention. The PEMDAS convention will uniquely evaluate every well-formed
expression. Your son in law is interpreting 48÷2(9+3) as 48÷(2(9+3)) which is a distinctly different expression. You should ask him whether or not division by two is, in his book, the same as
multiplying by 1/2 (by definition).
It's a little scary if people are teaching students that PEMDAS will not give a unique answer for every well-formed expression. A well-formed expression is by definition unequivocal. Given any
well-formed expression, PEMDAS, like any other convention, leads to one result.
If PEMDAS did NOT lead to a unique answer, then computers would not be able to evaluate expressions.
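For what it's worth, this is exactly what a computer does with the two readings (a quick check; Python shown, but any mainstream language behaves the same way):

```python
# Division and multiplication have equal precedence and associate left to
# right, so the disputed expression parses as (48 / 2) * (9 + 3):
left_to_right = 48 / 2 * (9 + 3)
print(left_to_right)                 # → 288.0

# The "answer is 2" camp is really evaluating a different expression, with
# an extra pair of parentheses binding the 2 to the (9 + 3):
bound_tighter = 48 / (2 * (9 + 3))
print(bound_tighter)                 # → 2.0
```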
"it is easy to grow crazy"
I posted exactly what he wrote me back, I am far from a Math expert, you both are light years ahead of me. I'm sure you are right, I do not even know what the convention is you are talking about, I'm
sure he knows, as he intimated to. As he stated if you work the problem as you say, 288 is right, if you work it the other way, the answer is 2. To my thinking I thought the answer of 2 was correct,
and that 288 resulted from making an error. Even my son in law states that is not so, if there are 2 approaches. When I said 2 may be the proper answer, that was me talking, not my son in law, so I
do not know if they are not teaching as you ask; he probably does. As you know as a teacher, he has to abide by the school district's agreed way to teach each subject.
What I do know is that if that was your bank calculating your bank fees of $288 from that equation, you would argue to the death that it should be $2.
Graph Coloring
Less is more.
Modern RISC architectures have quite large register sets, typically 32 general purpose integer registers and an equivalent number of floating-point registers, or sometimes more (IA64 has 128 of each
type). Due to the long latencies of memory accesses in modern processors – even assuming primary data cache hits – the effective use of these large register sets for placement of variables and other
values is a critical factor in achieving high performance. Register allocation is especially important for RISC architectures, because all operations occur between registers.
Register allocation is the process of determining which values should be placed into which registers and at what times during the execution of the program. Note that register allocation is not
concerned specifically with variables, only values – distinct uses of the same variable can be assigned to different registers without affecting the logic of the program.
There have been a number of techniques developed to perform register allocation at a variety of different levels – local register allocation refers to allocation within a very small piece of code,
typically a basic block; global register allocation assigns registers within an entire function; and interprocedural register allocation works across function calls and module boundaries. The first
two techniques are commonplace, with global register allocation implemented in virtually every production compiler, while the latter – interprocedural register allocation – is rarely performed by
today's mainstream compilers.
It has been shown experimentally that modern RISC architectures provide large enough register sets to accommodate all of the important values in most pieces of real-world code. Unfortunately,
however, there will always be some pieces of code that require more registers than actually exist. In such cases, the register allocator must insert spill code to store some values back into memory
for part of their lifetime.
Minimizing the runtime cost of spill code is a crucial consideration in register allocation. This issue becomes even more interesting (and challenging) if we consider the simple way in which many
values are created, because many simple expressions are only worth keeping if registers are available in which to keep them – loading a common subexpression from memory may well be slower than
recomputing it. This is called the rematerialization problem.
Interference Graphs
Register allocation shows close correspondence to the mathematical problem of graph coloring. The recognition of this correspondence is originally due to John Cocke, who first proposed this approach
as far back as 1971. Almost a decade later, it was first implemented by G. J. Chaitin and his colleagues in the PL.8 compiler for IBM's 801 RISC prototype. The fundamental approach is described in
their famous 1981 research paper [ChaitinEtc1981]. Today, almost every production compiler uses a graph coloring global register allocator.
When formulated as a coloring problem, each node in the graph represents the live range of a particular value. A live range is defined as a write to a register followed by all the uses of that
register until the next write. An edge between two nodes indicates that those two live ranges interfere with each other because their lifetimes overlap. In other words, they are both simultaneously
active at some point in time, so they must be assigned to different registers. The resulting graph is thus called an interference graph.
Although theoretically an arbitrary graph can arise from a code sequence, in practice interference graphs are relatively sparse. Experiments have shown that the number of edges in a graph is normally
about 20 times the number of nodes. As a result, a large graph of a few hundred nodes will only have a few thousand edges, rather than the many tens of thousands it theoretically could contain. In
addition, these graphs are not structurally random but tend to contain "clumps" of connected areas.
Given an interference graph, the register allocation problem becomes equivalent to coloring the graph using as few colors (registers) as possible, and no more than the total number provided by the
target architecture. When coloring a graph, two nodes connected by an edge must be colored differently, which corresponds to assigning a different register to each value...
Determining the minimum number of colors required by a particular interference graph (the graph's chromatic number), or indeed determining if a graph can be colored using a given number of colors (a
k-coloring), has been shown to be an intractable problem – only a strategy of brute force enumeration of every combination can guarantee an optimal result. Naturally, this approach is completely out
of the question for large graphs, which are actually quite common due to optimizations such as function inlining.
Luckily, heuristics have been found that give good results in a time approximately proportional to the size of the graph, if coloring is actually possible [Chaitin1982]. Over the past two decades,
these heuristics have been refined to the point where today's production compilers use quite sophisticated techniques which take into account issues like rematerialization [BriggsEtc1992] and code
structure [CallahanKoblenz1991]. A very bold person might even say that global register allocation is now a solved problem :-)
Graph Coloring
This introductory paper describes the most widely used variant of these coloring heuristics – the optimistic coloring method first proposed by Preston Briggs and his associates at Rice University,
which is based on the general structure of the original IBM "Yorktown" allocator described in Chaitin's original research papers. The optimistic "Chaitin/Briggs" graph coloring algorithm is clearly
the technique most widely used by production compilers, and arguably the most effective.
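In outline, the simplify/select loop with Briggs's optimistic twist looks something like the following toy sketch; it is a reconstruction for illustration only, and the names and the crude spill metric (pick the highest-degree node) are mine, not from any production allocator:

```python
# Toy Chaitin/Briggs-style coloring: repeatedly remove a node with fewer than
# k neighbors; if none exists, optimistically remove a spill candidate anyway.
# Then pop nodes back, assigning colors; a node that finds no free color at
# select time becomes an actual spill.

def color(adj, k):
    """adj: {node: set(neighbors)}; returns (coloring, spilled)."""
    adj = {n: set(ns) for n, ns in adj.items()}   # private, mutable copy
    stack = []
    while adj:
        # Simplify: any node with fewer than k live neighbors is trivially colorable.
        node = next((n for n in adj if len(adj[n]) < k), None)
        if node is None:
            # Optimistic phase: push a spill candidate anyway; it may still
            # get a color during select (this is Briggs's improvement).
            node = max(adj, key=lambda n: len(adj[n]))
        stack.append((node, adj.pop(node)))        # remember neighbors at removal
        for ns in adj.values():
            ns.discard(node)
    coloring, spilled = {}, []
    for node, neighbors in reversed(stack):        # select, in reverse order
        used = {coloring[n] for n in neighbors if n in coloring}
        free = [c for c in range(k) if c not in used]
        if free:
            coloring[node] = free[0]
        else:
            spilled.append(node)
    return coloring, spilled

# A 4-cycle: with k = 2 every node has degree 2, so pure Chaitin would spill,
# yet the graph is 2-colorable and optimism finds the coloring.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
colors, spills = color(cycle, 2)
print(colors, spills)
```

The 4-cycle example is exactly where the optimism pays off: no node ever has degree below k, yet every node still receives a color during select, so nothing is spilled.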
There are, of course, many other subtle variants of this technique, as well as a significantly different technique called priority-based coloring proposed by Fred Chow and John Hennessy, which uses a
coarser and less accurate interference graph but can more easily handle live range splitting [ChowHennessy1984], [ChowHennessy1990].
The basic "Chaitin/Briggs" approach is as follows...
First, the live ranges must be determined. This is not as trivial as it seems – determining the exact live ranges of values requires tricky dataflow analysis to be carried out, carefully following
the def-use chains, sometimes called webs, through the control flow graph. This is necessary because a value's live range is not necessarily a contiguous range of instructions. Even a simple split in
the flow of control can cause a live range to become segmented...
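To make the dataflow step concrete, here is a minimal backward liveness computation. The CFG encoding, block names, and the tiny example program are all invented for illustration; real allocators track liveness per instruction and then stitch def-use chains into live ranges, but the fixed-point iteration shown here is the essence of the analysis.

```python
# Minimal backward liveness analysis over a control-flow graph, showing why a
# value's live range need not be contiguous once control flow splits.

def liveness(blocks, succs):
    """blocks: {name: [(defs, uses), ...]} per instruction, in order.
    succs: {name: [successor block names]}.
    Iterates to a fixed point; returns (live_in, live_out) dicts of sets."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            out = set()
            for s in succs.get(b, []):
                out |= live_in[s]
            inn = set(out)
            for defs, uses in reversed(blocks[b]):  # walk instructions backward
                inn -= set(defs)
                inn |= set(uses)
            if inn != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b] = inn, out
                changed = True
    return live_in, live_out

# 'x' is defined before a branch but used on only one side, so its live
# range covers 'entry' and 'then' but not 'else' -- a segmented range.
blocks = {
    "entry": [(["x"], []), (["c"], [])],
    "then":  [([], ["x"])],
    "else":  [(["y"], [])],
    "exit":  [([], [])],
}
succs = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"]}
li, lo = liveness(blocks, succs)
print(li["then"], li["else"])  # x is live into 'then' but not into 'else'
```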
Constructing the interference graph is the next step. Usually, the graph is created in two forms simultaneously – a bit matrix and a set of adjacency lists. This makes working with the graph more
efficient, allowing quick tests for the existence of an edge using the bit matrix and quick calculation of the set of neighbors which interfere with any particular node by examining the adjacency lists.
In addition to the nodes for live ranges, the graph also contains nodes for each machine register (normally). This allows the modeling of other types of interferences such as function calling
conventions – values live across function calls can simply be made to interfere with all of the caller-saved machine registers, and live range splitting will take care of the rest (see the spilling
section below).
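The dual representation described above (a bit matrix for constant-time edge tests plus adjacency lists for fast neighbor walks) can be sketched as follows. The node numbering, the Python bitmask encoding, and the sample edges are illustrative choices, not taken from any particular allocator.

```python
# Sketch of the two-form interference graph: the bit matrix answers "do these
# two nodes interfere?" in O(1), while the adjacency lists enumerate a node's
# neighbors without scanning a whole matrix row of mostly-empty bits.

class InterferenceGraph:
    def __init__(self, n):
        self.bits = [0] * n               # row i: bitmask of i's neighbors
        self.adj = [[] for _ in range(n)]

    def interferes(self, a, b):           # quick edge test via the matrix
        return bool((self.bits[a] >> b) & 1)

    def neighbors(self, a):               # quick enumeration via the lists
        return self.adj[a]

    def add_edge(self, a, b):
        if a != b and not self.interferes(a, b):
            self.bits[a] |= 1 << b
            self.bits[b] |= 1 << a
            self.adj[a].append(b)         # lists stay duplicate-free because
            self.adj[b].append(a)         # the matrix is consulted first

g = InterferenceGraph(4)
g.add_edge(0, 1); g.add_edge(0, 2); g.add_edge(0, 1)   # duplicate is ignored
print(g.interferes(0, 1), g.interferes(1, 2), g.neighbors(0))  # True False [1, 2]
```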
After building the graph, unneeded copies (register-to-register move instructions) are eliminated in the coalesce phase. This might sound unnecessary, but it is actually an extremely important step,
partially because copies are often introduced during optimization, but more importantly because coalescing allows the allocator to achieve targeting – where specific registers are used for specific
purposes such as argument passing. With luck, those values can be computed directly into the appropriate registers in the final code. Coalescing also allows better handling of self-updating
instructions, such as auto-increment addressing, as well as the two-address instructions commonly found in older CISC architectures.
After coalescing, the whole graph must then be rebuilt because eliminating the copies might have changed the interference relationships. A conservative approximation can be used incrementally, to
allow multiple copies to be coalesced in a single pass; however, a proper rebuild of the graph must be done afterwards for complete accuracy.
Once the graph is constructed, spill costs are calculated for each node (live range) using a variety of heuristics to estimate the runtime cost of storing each value and reloading it around its uses.
Typical cost heuristics include loop nesting, reference counting and so on. This is where a degree of "cleverness" is required on behalf of the allocator.
The core of the coloring process itself starts with the simplify phase, sometimes called pruning. Here, the graph is repeatedly examined and nodes with fewer than k neighbors are removed (where k is
the number of colors we have to offer). As each node is removed it is placed on a stack and its edges are removed from the graph, thereby decreasing the degree of interference of its neighbors. If a
point is reached where every node has k neighbors (or more), a node is chosen as a possible spill candidate (just because a node has k neighbors doesn't necessarily mean it will spill – the neighbors
may not all be different colors). Choosing which node to spill is, of course, the hard part – often a variant of the ratio of spill cost to degree of interference is used, or sometimes a combination
of several different metrics [BernsteinEtc1989]. Once a spill candidate is chosen, the node is then removed from the graph and pushed onto the stack, just like the others, and the algorithm continues
on to the next node.
Finally, the select phase actually selects a color (register) for each node. This is done by repeatedly popping a node off the stack, re-inserting it into the graph, and assigning it a color
different from all of its neighbors. In effect, simplify chooses the order of assignment and select makes the actual assignments themselves. During the select phase, the allocator might also make
assignment decisions to accommodate hardware restrictions such as register pairing (the handling of register pairs can also be done in other ways).
At any time during the select phase, if a situation occurs where no color is available (ie: all k neighbors have different colors), the node is marked for spilling and remains uncolored while the
algorithm continues on to the next node. If nodes have been left uncolored after this process, the allocator must then generate the necessary spill code (or rematerialization code) and start the
whole register allocation process all over again.
An example of the core simplify/select process is shown below...
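The simplify/select core can also be sketched in a few lines of Python. This is a deliberately stripped-down illustration: spill-cost heuristics, coalescing and spill-code generation are omitted, the graph encoding is invented, and the spill candidate is simply the lowest-degree node rather than a cost/degree choice.

```python
# Optimistic (Chaitin/Briggs-style) simplify/select sketch. Nodes of degree
# < k are removed and stacked; when none qualify, a spill *candidate* is
# stacked anyway -- it may still receive a color during select.

def color(adj, k):
    """adj: {node: set(neighbors)}. Returns (coloring, set of actual spills)."""
    work = {u: set(vs) for u, vs in adj.items()}
    stack = []
    while work:
        trivial = [u for u in work if len(work[u]) < k]
        u = trivial[0] if trivial else min(work, key=lambda v: len(work[v]))
        stack.append(u)
        for v in work.pop(u):             # removing u lowers neighbors' degree
            work[v].discard(u)
    colors, spilled = {}, set()
    while stack:                          # select: reinsert in reverse order
        u = stack.pop()
        used = {colors[v] for v in adj[u] if v in colors}
        free = [c for c in range(k) if c not in used]
        if free:
            colors[u] = free[0]
        else:
            spilled.add(u)                # truly uncolorable: must spill
    return colors, spilled

# A 4-cycle: every node has k = 2 neighbors, yet optimism pays off and
# two colors suffice with nothing spilled.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
colors, spilled = color(adj, 2)
print(colors, spilled)
```

The 4-cycle illustrates the remark above: a node with k neighbors need not actually spill, because its neighbors may end up sharing colors.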
If spill code is necessary, the spilling can be done in many different ways. One approach is to simply spill the value everywhere and insert loads and stores around every use. There are some
advantages to spilling a value for its entire lifetime – it is straightforward to implement and tends to reduce the number of coloring iterations required before a solution is found. Unfortunately,
there are also major disadvantages to this simple approach. Spilling a node everywhere does not quite correspond to completely removing it from the graph, but rather to splitting it into several
small live ranges around its uses. Not all of these small live ranges may be causing the problem – it might only have been necessary to spill the value for part of its lifetime.
To account for this, a more sophisticated approach is to split the problematic live range into several subranges, only some of which might need to actually be spilled (think about a value entering an
if statement where one side has high register pressure but the other only has low register pressure). The coloring process will probably require more iterations to find a solution, but the result
should be a better final allocation. Some area-based heuristics have been suggested to help the allocator decide where to split live ranges, based on the well-known relationship of dominance
represented by the dominator tree and control dependence graph [NorrisPollock1994].
Hierarchical coloring is essentially a more structured approach to this type of live range splitting [CallahanKoblenz1991]. With this scheme, the code is first divided into tiles which are grouped
into a tree representing the hierarchical structure of the code...
Registers are allocated for each tile using standard graph coloring, and the conflicts left after each local allocation are handled by the next higher level of tiling. The allocator controls the
spilling process by determining which tiles will have spill code added around them. Overall, this approach results in colorings which are more sensitive to the local register requirements within each
tile. It can also take advantage of execution profiling information, allowing the allocator to give priority to the most important pieces of code.
Clique Separators
Since graph coloring is a relatively slow optimization, anything that can be done to make it faster is worth investigation. The chief determining factor is, of course, the size of the interference
graph. The asymptotic efficiency of graph coloring is somewhat worse than linear in the size of the interference graph – in practice, graph coloring register allocation is something like O(n log n) –
so coloring two smaller graphs is faster than coloring a single large graph. The bulk of the running time is actually the build-coalesce loop, not the coloring phases; however, this too is dependent
on the size of the graph (and non-linear).
An allocator can take advantage of this fact by breaking large graphs into parts, as long as this is done carefully. Specifically, good places must be chosen to split the graph so that the resulting
two parts are smaller, yet recombining them still gives valid results. Splitting at clique separators achieves this goal. A clique separator is a strongly connected subgraph, which when removed from
the original graph, completely disconnects it (dividing it into disjoint subgraphs)...
Since the disjoint subgraphs can use the same set of colors (registers), these subgraphs can be colored separately. The original graph can then be reassembled to form the final allocation. The size
of each individual coloring problem is now smaller than the original graph, so this approach significantly reduces both the time and space requirements of the graph coloring process [
GuptaSoffaSteele1989], [GuptaSoffaOmbres1994].
Fortunately, the basic approach of clique separators can be used almost implicitly, by letting the control flow graph itself guide the subdivision of the "large" graph for each function. In effect,
this is a natural consequence of the tiling allocators mentioned above. In a sense, every compiler also uses clique separators at function call boundaries – each function is colored independently and
they are all knitted back together by the assignments required by the argument passing conventions.
Linear-Scan Allocation
As a special case, local register allocation within basic blocks can be considerably accelerated by taking advantage of the structure of the particular interference graphs involved. For straight-line
code sequences, such as basic blocks or software-pipelined loops, the interference graphs are always interval graphs – the types of graphs formed by the interference of segments along a line. It is
known that such graphs can be easily colored, optimally, in linear time. In practice, an interference graph need not even be constructed – all that is required is a simple scan through the code. This
approach can even be extended to the global case, by using approximate live ranges which are the easy-to-identify linear supersets of the actual segmented live ranges.
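A sketch of the idea in Python follows. The intervals, register count and spill policy below are invented for illustration, and production linear-scan allocators (such as the one described in [TraubEtc1998]) are considerably more careful about spill choice and interval holes.

```python
# Linear-scan allocation over approximate linear live intervals: one pass in
# order of interval start, expiring dead intervals and reusing their registers.

def linear_scan(intervals, k):
    """intervals: [(name, start, end)] with inclusive endpoints.
    Returns ({name: register}, [spilled names])."""
    intervals = sorted(intervals, key=lambda iv: iv[1])   # by start point
    active = []                        # (end, reg) pairs currently live
    free = list(range(k))
    assign, spills = {}, []
    for name, start, end in intervals:
        for e, r in [a for a in active if a[0] < start]:  # expire old ones
            active.remove((e, r))
            free.append(r)
        if free:
            r = free.pop(0)
            active.append((end, r))
            assign[name] = r
        else:
            spills.append(name)        # naive policy: spill the newcomer
    return assign, spills

ivs = [("a", 0, 4), ("b", 1, 3), ("c", 5, 8), ("d", 6, 7)]
assign, spills = linear_scan(ivs, 2)
print(assign, spills)   # a and b overlap; c and d reuse their registers
```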
The linear-scan approach to register allocation is common in two real-world situations. First, this approach is useful in dynamic code generation situations, such as Java virtual machines, where
basic blocks are often the unit of translation – and even if they are not, a more complex and time-consuming algorithm such as graph coloring would be too costly. Second, linear-scan allocation can
be applied as a fast alternative to graph coloring when compilation speed is more important than execution speed, such as during development situations. Rather than turning off register allocation
altogether, linear-scan register allocation is fast enough that it goes virtually unnoticed during compilation, yet produces results that are often not dramatically worse than true graph coloring [
TraubEtc1998]. Many compilers for RISC architectures default to having local register allocation always turned on, even during debugging, implemented at the basic-block level using simple linear-scan allocation.
Linear-scan register allocation can also be extended in an attempt to take into account the "holes" in the not-really-linear live ranges. This approach corresponds to the mathematical problem of bin
packing. Perhaps surprisingly, one production compiler actually used this approach – DEC's GEM compilers for the Alpha architecture (the 'GEM' name used for the Alpha compilers actually relates to
the RISC architecture which preceded Alpha at DEC, the PRISM architecture (PRISM and GEM, get it?) – it is often said that the AXP from the name Alpha AXP unofficially stands for Almost eXactly PRISM).
More Information?
To learn more about graph coloring register allocation, Preston Briggs' PhD thesis is definitely the best place to start [Briggs1992]. Briggs' dissertation gives detailed coverage of the entire graph
coloring process, including many important real-world issues such as dealing with register pairs, the rematerialization problem, representations for the required data structures and so on (in his own
words, it "basically beat the subject to death"). For anyone actually implementing a graph coloring register allocator, Briggs' thesis is an absolute must-read.
Register allocation is also covered in some detail in the book "Advanced Compiler Design & Implementation" by Steven Muchnick [Muchnick1997].
Re: st: command for non linear decomposition using a multinomial logit
From Austin Nichols <austinnichols@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: command for non linear decomposition using a multinomial logit
Date Wed, 22 Feb 2012 10:08:26 -0500
arlette simo fotso <fotsoarlette@gmail.com>
See also the SJ:
The latter says:
regress, logit, probit, ologit, oprobit, tobit, intreg, truncreg, poisson, nbreg, zip, zinb, ztp, and ztnb are supported.
Can you reframe your multinomial logit as -ologit- or as several -logit- models, one for each category in the -mlogit- outcome?
On Wed, Feb 22, 2012 at 9:46 AM, Stas Kolenikov <skolenik@gmail.com> wrote:
> On Wed, Feb 22, 2012 at 5:45 AM, arlette simo fotso
> <fotsoarlette@gmail.com> wrote:
>> Can any one please tell me if it exists a Stata commands for doing an
>> Oaxaca-Blinder decomposition using a multinomial logit (or multinomial
>> probit) model?
> As Maarten said, this is not at all straightforward. One reason is
> that the binary dependent variable models estimate the ratio beta/sigma,
> where sigma has to take all of the unaccounted variability, and they
> do so by arbitrarily fixing sigma. Thus if one of the groups has a
> greater variability in the error term, the estimated coefficients will
> not be comparable between two groups. Also, the scale of the
> coefficients may change as you add regressors with good explanatory
> power (which would in turn decrease the residual variance).
> Some additional Stata-related materials on O-B decomposition can be
> found at http://ideas.repec.org/p/ets/wpaper/5.html and
> http://econpapers.repec.org/paper/bocdsug07/04.htm.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Let's say Alvin will catch the flu with probability of 1/10 during any given month. Let's also assume that Alvin can catch the flu only once per month, and that if he has caught the flu, the flu virus will die by the end of the month. What is the probability of the following events?
1) He catches the flu in September, October and November.
2) He catches the flu in September and then again in November, but not in October.
3) He catches the flu exactly once in the three months from September through November.
4) He catches the flu in two or more of the three months from September through November.
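Assuming the flu events in different months are independent, each with p = 1/10 (the problem statement suggests but does not spell out independence), the four probabilities can be computed directly:

```python
# Per-month flu probability; months treated as independent trials.
p = 1 / 10
q = 1 - p

p_all_three = p ** 3                     # (1) Sep, Oct and Nov
p_sep_nov_not_oct = p * q * p            # (2) Sep and Nov but not Oct
p_exactly_once = 3 * p * q ** 2          # (3) exactly one of three months
p_two_or_more = 3 * p ** 2 * q + p ** 3  # (4) exactly two, plus all three

print(p_all_three, p_sep_nov_not_oct, p_exactly_once, p_two_or_more)
```

That is, 0.001, 0.009, 0.243 and 0.028 respectively.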
Nonlinear Oscillation via Volterra Series
- Proc. Workshop on Integrated Nonlinear Microwave and Millimeter-wave Circuits, 1992
Cited by 2 (0 self)
The current state-of-the-art of oscillator simulation techniques is presented. Candidate approaches for the next generation of oscillator simulation techniques are reviewed. The method is presented which uses an efficient and robust convolution-based procedure to integrate frequency-domain modeling of a distributed linear network in transient simulation. The impulse response of the entire linear distributed network is obtained and the algorithm presented herein ensures that aliasing effects are minimized by introducing a procedure that ensures that the interconnect network response is both
time-limited and band-limited. In particular, artificial filtering to bandlimit the response is not required. I. Introduction Large signal simulation of microwave oscillators is necessary to provide
steady-state characterization of oscillator performance. Such quantities as power and harmonic content information are then readily available. This is particularly important in achieving first pass
INTRODUCTION The Volterra series technique has been used extensively in various applications in the area of nonlinear circuit analysis and optimization (see e.g. references [1]--[28]). Examples are
in the (i) analysis of intermodulation in small signal amplifiers [6]--[12], (ii) determination of oscillation frequency and amplitude in near sinusoidal oscillators [3]--[5], (iii) analysis of
mixers with moderate local oscillator levels [13, 14], (iv) analysis of communication systems [14]--[18], and (v) analysis of noise in nonlinear networks [24]--[28]. The use of the Volterra series
technique basically involves two steps: (i) first, from specified input signal frequencies to determine all relevant Volterra transfer functions of the network, and (ii) next, to determine the output
response from the non-linear network based on specified amplitudes of the input signals. One limitation in the use of Volterra series is that the determination of Volterra transfer functions is
usually limi
, 1993
LUNSFORD II, PHILIP J. The Frequency Domain Behavioral Modeling and Simulation of Nonlinear Analog Circuits and Systems. (Under the direction of Michael B. Steer.) A new technique for the
frequency-domain behavioral modeling and simulation of nonautonomous nonlinear analog subsystems is presented. This technique extracts values of the Volterra nonlinear transfer functions and stores
these values in binary files. Using these files, the modeled subsystem can be simulated for an arbitrary periodic input expressed as a finite sum of sines and cosines. Furthermore, the extraction can
be based on any circuit simulator that is capable of steady state simulation. Thus a large system can be divided into smaller subsystems, each of which is characterized by circuit level simulations
or lab measurements. The total system can then be simulated using the subsystem characterization stored as tables in binary files. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1735032","timestamp":"2014-04-20T21:33:09Z","content_type":null,"content_length":"18425","record_id":"<urn:uuid:9611f9ec-5da7-4c7d-9497-ea29782b9643>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |
What are the k-rational points of k[[t]]?
Let $k$ be a field. What are the $k$-rational points of the affine $k$-scheme $\mathrm{Spec}(k[[t]])$, where $k[[t]]$ is the power series ring over $k$ (equivalently, what are the $k$-algebra
morphisms $k[[t]] \rightarrow k$?)
I'm only sure about one point, namely the map $t \mapsto 0$. Do I have to assume some sort of completeness of $k$ to get more points?
Is there a nice presentation of $k[[t]]$, i.e. a quotient of some polynomial ring that is isomorphic to $k[[t]]$?
Okay, a question into another direction: Why do people consider deformations parametrized by $k[[t]]$? Parametrizing over $k[t]$ makes perfect sense to me; I have a fiber over any $\alpha \in k$.
But in case of $k[[t]]$? – Georg S. Jul 31 '10 at 13:24
Perhaps you should post this comment as a separate question. I know there are times when you can embed a complete DVR with residue field $k$ into a field $K$ with nicer properties than $k$ (e.g.,
characteristic zero rather than characteristic $p$), but there are probably much better answers out there. – Charles Staats Jul 31 '10 at 15:19
Well, it is easier to give deformations over $k[[t]]$ because its spectrum is small compared to $k[t]$. The ring $k[[t]]$ can be written as projective limit over $k[t]/(t^n)$ and those are local artin algebras, i.e. their spectrum are just points with some tangent vectors attached. Now, you can use deformation theory (in the sense of Schlessinger) to produce deformations over artin algebras. If you have a system of deformations (say over each $k[t]/(t^n)$) then there are techniques (like Grothendieck's existence theorem) which sometimes allow you to pass to a family over $k[[t]]$. – Holger Partsch Jul 31 '10 at 15:20
Dear Georg, Regarding your question about deformations: the topic you are (implicitly) asking about is whether deformations can be algebraized. It would be easier to answer if you posted it as a separate question (and there are several people on MO who could give you good answers about it). Here I will just say that if C is any smooth curve over k, and you had a family over C, then looking at the formal n.h. of a point will give you something over k[[t]]. In other words, k[t] is not the unique way of algebraizing k[[t]]; any smooth curve will do. Thus you shouldn't prejudge the ... – Emerton Aug 1 '10 at 4:24
... situation and expect to have a family over k[t], just because there is one over k[[t]]. For example, if you look at families of elliptic curves with an 11-torsion point, there is no interesting such family over an affine line, but there is an interesting such family over a (several times punctured) elliptic curve. In any event, people are very often interested in algebraic families of the type you are wondering about (this is the study of moduli problems), but computing formal deformations is typically much easier, and an important first step even if the moduli space is your goal. – Emerton Aug 1 '10 at 4:28
2 Answers
$k[[t]]$ is a local ring with maximal ideal $(t)$, and the kernel of every $k$-homomorphism $k[[t]] \to k$ is a maximal ideal, thus the maximal ideal. Thus it factors as $k[[t]] \to k[[t]]/(t) = k \to k$, and $t \mapsto 0$ is the unique $k$-rational point.
Why is the kernel maximal? (This is probably obvious...) – Georg S. Jul 31 '10 at 13:26
Every $k$-homomorphism to $k$ is surjective. This also shows: Every $k$-rational point of a $k$-scheme is closed. – Martin Brandenburg Jul 31 '10 at 13:31
Ah, of course! Thanks. Can you also give me a hint concerning my deformation question above? – Georg S. Jul 31 '10 at 13:34
Perhaps one answer to your question about deformations is something like the following. A deformation over a complete local ring A (such as k[[t]]) is just a family X $\to$ Spec(A). Suppose that the fibers belong to some sort of moduli space M, such as the moduli space of curves. In the functorial point of view of moduli spaces, the family X $\to$ Spec(A) corresponds to a morphism Spec(A) $\to$ M that assigns to a point of Spec(A) the moduli of the fiber over this point. So, one parameter formal deformations (by this I just mean that A = k[[t]]) correspond precisely to the morphisms Spec(k[[t]]) $\to$ M. The scheme Hom(k[[t]], M) is called the space of arcs in M. If we fix the central fiber of the deformation then we get the space of arcs in M at the point corresponding to the central fiber. The space of arcs is a subtle and important invariant of a singularity. One can think of an arc (that is, a morphism Spec(k[[t]]) $\to$ M) as follows: if we had a curve in M then the arc would be the collection of jets this curve determines, ie all the derivatives of all orders of the curve (think of the way morphisms Spec$(k[t]/(t^2)) \to M$ determine the tangent vectors at the image of the closed point). So these deformations are telling us something significant about the local structure of the moduli space.
The construction of these one parameter formal deformations works regardless of the existence of any moduli space. It tells us what the space of arcs on the moduli space of whatever it is you are deforming should be.
Combinations and permutations
March 18th 2008, 11:37 AM #1
The 7 rooms of a new home are to be painted. Each room is to be painted one color from a selection of 4 shades of blue (B1,B2,B3,B4) and 3 shades of green (G1,G2,G3) and 2 shades of red (R1,R2).
More than one room may have the same color.
Count the number of ways each of the following steps can be completed to determine the number of different ways to paint the 7 rooms.
a.) If 5 are to be painted a shade of blue and 2 a shade of green.
b.) 3 rooms some shade of red, 3 rooms some shade of blue and one room some shade of green, how many different ways can the seven rooms be painted?
c.)How many ways can the rooms be painted?
d.) How many different ways can all the rooms be painted red?
e.) How many different ways can the rooms be painted if atleast one room is green or blue?
Any help would be tremendously appreciated!!!
Hello, digitalis77!
This is a tricky (but immensely satisfying) problem.
The 7 rooms of a new home are to be painted.
Each room is to be painted one color from a selection of:
. . 4 shades of blue (B1,B2,B3,B4),
. . 3 shades of green (G1,G2,G3),
. . 2 shades of red (R1,R2).
More than one room may have the same color.
Count the number of ways each of the following steps can be completed
to determine the number of different ways to paint the 7 rooms.
a) If 5 are to be painted a shade of blue and 2 a shade of green.
First, select the 5 rooms to be blue and the 2 to be green..
. . There are: . ${7\choose5,2} \:=\:21$ choices.
For each of the 5 blue rooms, there are 4 choices of blue.
. . There are: . $4^5\,=\,1,024$ choices.
For each of 2 green rooms, there are 3 choices of green.
. . There are: . $3^2\,=\,9$ choices.
Therefore, there arte: . $21 \times 1,024 \times 9 \:=\:\boxed{193,536\text{ ways}}$
b) 3 rooms some shade of red, 3 rooms some shade of blue
and one room some shade of green.
How many different ways can the seven rooms be painted?
First, select the 3 rooms to be red, 3 rooms to be blue, and 1 to be green.
. . There are: . ${7\choose3,3,1} \:=\:140$ choices.
For each of the 3 red rooms, there are 2 choices of red.
. . There are: . $2^3 \,=\, 8$ choices.
For each of 3 blue rooms, there are 4 choices of blue.
. . There are: . $4^3 \,=\,64$ choices.
For the one green room, there are 3 choices of green: . $3$ choices.
Therefore, there are: . $140 \times 8 \times 64 \times 3 \:=\:\boxed{215,040\text{ ways}}$
c) How many ways can the rooms be painted?
For each of the seven rooms, there are nine choices of colors.
. . There are: . $9^7 \:=\:\boxed{4,782,969\text{ ways}}$
d) How many different ways can all the rooms be painted red?
For each of the 7 rooms, there is a choice of 2 reds.
. . There are: . $2^7 \,=\,\boxed{128\text{ ways}}$
e) How many different ways can the rooms be painted
if at least one room is green or blue?
The opposite of "at least one green or blue" is "no green or blue" ... all red.
From part (c), there are: . $4,782,969$ ways to paint the rooms
. . of which (d) 128 ways are all red.
Therefore, there are: . $4,782,969 - 128 \:=\:\boxed{4,782,841\text{ ways}}$
. . in which there is at least one green or one blue.
Last edited by Soroban; March 18th 2008 at 01:34 PM.
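Soroban's counts can be double-checked in a couple of lines, using Python's math.comb for the binomial and multinomial coefficients:

```python
from math import comb

a = comb(7, 5) * 4**5 * 3**2                    # (a) 5 blue rooms, 2 green
b = comb(7, 3) * comb(4, 3) * 2**3 * 4**3 * 3   # (b) 3 red, 3 blue, 1 green
c = 9 ** 7                                      # (c) any of 9 shades per room
d = 2 ** 7                                      # (d) every room some red
e = c - d                                       # (e) complement of "all red"
print(a, b, c, d, e)   # 193536 215040 4782969 128 4782841
```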
Very helpful
Thank you so much. I had been working on that problem for about 2 hours and just wasn't getting it. I completely understand now. Thank you so much!!!!
Science in Christian Perspective
Letter to the Editor
On World and U.S. Population Growth: Or Is It Growth?
J.C. Keister
Department of Physics
Covenant College
Lookout Mountain, Tennessee 37350
From: JASA 29 (September 1977): 143-144.
This communication is the outgrowth of a lecture delivered at Covenant College in the spring semester, 1975, in a course dealing with the problems of population, world starvation, ecology and energy.
This lecture dealt only with world population numbers.
1. Consider a square 12 miles by 12 miles-the area of a good-sized city. The area of this square is about 3.6 billion square feet. If there was one person standing on every square foot, the entire
world population would fit into this 12 x 12 mile square. Furthermore, it has been estimated that the world population is doubling at the rate of once every 30 years; ^1 if such a rate were to
continue, it would take over 300 years for 0.1% of the earth's surface to be occupied by standing people. These numbers are offered as "counter-rhetoric" to those who insist^2 that there will not be
any room on the planet in another 500 years or so.
2. There is a rightful concern about starvation, ecology, energy, etc. Unfortunately, there appears to be a tendency to lump all these problems together, and to call this lumped aggregate "the
overpopulation problem." The difficulty here is that the label "overpopulation" presupposes that the answer to each of the three individual problems (starvation, ecology and energy) lies in the
active control of the world population by one means or another, whereas the real answers to these problems may lie elsewhere. For example, if a man is found starving in the street, one could take him
into one's home and feed him, thus solving this particular problem. However, if overpopulation is the problem, then the obvious answer is simply to pull out a gun and to eliminate the man.
3. Not all countries have an increasing population. In Ireland, for example, the population apparently increased drastically around the early 1800's to a peak of over 8 million. Then, a potato blight struck, and about a million starved. The population continued to dwindle somewhat, even after the blight, so that the population in 1960 was about 4 million (about half the 1835 peak). Ireland currently has one of the oldest marriage ages, one of the lowest marriage rates,^3 and a relatively stable population. No doubt there are other countries whose populations are quite stable.
4. There is serious question as to the accuracy of population estimates of countries such as Mainland China. "Both the Chinese admission that they have no knowledge of the previous growth rates, and the round progression from 1.9 to 2.0 to 2.1 percent per year, suggest some rather arbitrary estimates."^4 Therefore, it is difficult to tell whether or not China's population is even growing, let alone how fast it is growing.
Having made the above points about the world population in general, let us consider an analysis of the U.S. population in particular (which is measured to a high degree of accuracy) and evaluate the
demographers' estimates of what the population is doing as a function of time. The procedure is to fit a mathematical curve to the census data from 1790 to 1970. Short-term predictions then are made
by simply extrapolating the mathematical curve. In making such an extrapolation, it is assumed that the past and the present are the keys to the future, at least on the short-term basis. Assumed also
is that there will be no drastic deviation in population growth unless a catastrophic or other significant event takes place, affecting everyone.
The curves are plotted on semi-log paper to show any deviation from exponential growth. The actual data (taken from the 1974 Statistical Abstracts of the U.S.) and the mathematical fit are shown in
Figure 1. The data are represented by circled dots, while the mathematical fit is represented by the solid line.
Note several things:
(1) The population increases exponentially (doubling about every 25 years) until about 1860.
(2) The projection of the population level at 1970 (based on extrapolation of data from 1790 to 1860) is 900 million! (What would have happened had we worried about our population "explosion" back in 1860?)
(3) The U.S. population started to deviate smoothly from exponential behavior at about 1860, without any government edicts controlling the population.
(4) Note the smoothness of the curve, even through depressions and wars.
(5) The net population effect of the 1930 depression and the post-World War II boom was to effect a cancellation and to put the population trend back to where it was in the 1920's, as shown by the curved dotted line in Figure 1.
How about predictions of things to come? Demographers have calculated what are called A, B, C, D, E, F, and X curves, based on fertility rates (assumed in all cases to reach a constant level), and a
constant rate of immigration. The A curve has the highest fertility rate, and the F curve has the lowest rate. Reference 5 (a 1971 pamphlet) pointed out that in 1971, curve A had been dropped and
curve E had been added. Then, in the 1974 Statistical Abstracts,^6 curve B was dropped and curves F & X were added. Curve C assumes a leveling off birth rate of 2.8 children/woman; D has a final rate
of 2.5, E is 2.1 (so-called replacement rate) and F is 1.8 (below replacement rate). Curve X has a birth rate of 2.1 with no immigration, while curves C-F assume an immigration rate of 400,000 per
year. The projections for curves C-F and X for 1972 are quite good, but by 1975 the C and D projections start to show considerable deviation from the actual data, while curves E, F and X, together
with the mathematical fit of Figure 1, seem to predict the 1975 population the best.^14
Why can't demographers come up with a good model? Why must they keep adding and dropping curves? The primary reason (as they themselves have stated) is that they are trying to second-guess the birth
rate of people free to make their own decisions about their families, a Congress and an Executive branch capable of regulating immigration, and a Supreme Court capable of legalizing abortion. Let us
consider each of these three aspects separately.
The demographers in each of their separate graphs are assuming a leveling-out process for the birth rate. Does past history justify this assumption? The rate dropped drastically in the 1930's, rose
by almost a factor of 2 from 1935 to 1960, and then fell again by close to a factor of 2 from 1960 to 1970.^8 Therefore, assuming constant (or nearly constant) fertility rates over decades is a very
risky business, based on past history.
How about the second assumption - constant immigration? A drastic plunge in immigration rate from close to 1 million/year in 1900-1910 to less than 0.1 million/year in 1931-1940 has occurred within
the time span of 30 years.^9 Clearly, past history shows that the assumption of constant immigration rates is not a good one to make. The laws affecting these immigration rates are outlined in
Reference 10.
Abortion is an issue not directly incorporated in any of the assumptions involved in curves C-F and X. How much of an effect is the abortion ruling recently made by the Supreme Court? According to the New York Times Index,^11 legal abortions are estimated to be about 900,000 in 1974. It is estimated that 1/3 of these would not have been performed if the Supreme Court ruling had been unfavorable toward abortions. This suggests, therefore, that there were 300,000 fewer people in the U.S. in 1974 as the result of this Supreme Court ruling. Since this reduction is close to the immigration rate (about 400,000/year), it would seem that abortion ought to be considered by the demographers. Furthermore, abortions have been increasing at a rate of more than 25%/year since 1972.^11 An
extrapolation shows an abortion rate of 100 million/year by 1996, a figure no more ridiculous than some of the current world population extrapolation figures,^12 in the author's opinion.
The demographers themselves are at variance with one another. Estimates of the increase in the U.S. population by the year 2000 range from 20 million to 100 million, or a variation of a factor of 5,
depending on the demographer.^13 If in a situation in which the data are well known, demographers vary in their predictions of U.S. population growth by a factor of 5, over a 30 year period, what
about their predictions of world population growth, where the data are not well known?
One should be very cautious about advocating control of world population. One cannot adequately control what one does not understand. The solution to starvation, ecology and energy may lie elsewhere.
The author gratefully acknowledges discussion and comments from Drs. Nicholas Barker, James Hurley, and John Muller, all of whom are professors at Covenant College.
^1Associated Press article, Chattanooga News Free Press, Sept. 19, 1971.
^2Isaac Asimov, "The End," Penthouse Magazine, Vol. 2, No. 5, Jan. 1971, pp. 26-28.
^3Marston Bates, Expanding Population in a Shrinking World, p. 16f; cited in R.J. Rushdoony, The Myth of Over Population, Craig Press, 1969, p. 41.
^4China: Population in the People's Republic, Population Reference Bureau Bulletin, Vol. 27, No. 6, Dec. 1971, pp. 9-10.
^5The Future Population of the United States, Population Reference Bureau Bulletin, Vol. 27, No. 1, Feb. 1971, p. 15.
^61974 Statistical Abstracts of the U.S., p. 6.
^7The Future Population of the United States, op. cit., p. 13.
^8Ibid., p. 22.
^91974 Statistical Abstracts, op. cit., p. 97.
^101974 Statistical Abstracts, op. cit., p. 95.
^11The New York Times Index, Feb. 1-15, 1975, ABORTION.
^12Op. cit., Penthouse Magazine, Vol. 2, No. 5, where Asimov asserts that "at current rates of increase," the total mass of the human population will equal the mass of the earth by 3530 A.D., and the mass of the universe by 6826 A.D.!
^13The Future Population of the United States, op. cit., p. 20.
^14It should be noted that a very recent revision has been made in the curves used by demographers for the U.S.A. population. Specifically, in an Oct. 1975 issue of Projections of the Population of the U.S., p. 25, No. 607, all of the lettered curves A-F & X have been replaced with curves labeled I, II, III & II-X, with changed demographic assumptions (i.e., lowered birth rates, etc.). Needless to say, most of these most recent curves fit the July 1, 1975 data quite well!
Editor's Note: In connection with Dr. Keister's assessment of the U.S. population growth problem, it is interesting to take note of a 1960 prediction for world population set forth by von Foerster, Mora and Amiot:^1

N = 1.79 x 10^11/(2026.87 - t)^0.99

where N is the world population and t is time measured in years A.D. Serrin^2 points out that this expression fits world population figures very well from 1750 to 1960. In 1975 the above equation predicted a world population N = 3.65 billion persons, whereas the best estimate for world population as of that date is 3.97 billion persons. The equation predicts a world population of 5 billion persons in 1990, and of course a rather catastrophic occurrence late in the year 2026!
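The von Foerster formula is simple enough to evaluate directly; a sketch (only the formula itself is coded here, not the note's comparison data):

```python
def von_foerster(t):
    """World population versus year t, as predicted by von Foerster, Mora and Amiot (1960)."""
    return 1.79e11 / (2026.87 - t) ** 0.99

n1975 = von_foerster(1975)   # roughly 3.6 billion
n1990 = von_foerster(1990)   # roughly 5.0 billion
```

The divergence as t approaches 2026.87 is the "catastrophic occurrence" mentioned above.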
1. H. von Foerster, P.M. Mora, L.W. Amiot, Science 132, 1291 (1960)
2. J. Serrin, Science 189, 86 (1975)
Such concerns as war and peace, environmental pollution, discrimination, and so on, are far from unimportant. They are indeed critical. . . . But these matters are nonetheless footnotes on the main text, namely, that God has spoken and that what God says is what bears determinatively on all existence and life. The unmistakable priority of God's people, the church in the world, is to proclaim God's revealed Word. Divorced from this calling, the church and Christians are undurable and unendurable phenomena. By stifling divine revelation, they are, in fact, an affront to God. Devoid of motivation for implementing Christ's cause, they become both delinquents and delinquent in neighbor and world relations.
Carl F. H. Henry
God, Revelation and Authority, Vol. II: God Who Speaks and Shows, Word Books, Waco, Texas (1976), p. 22.
Leading, multimillion dollar corporation in the service industry located in a western suburb of Chicago has excellent growth opportunity for a Technical Manager to direct activities of such corporate
functions as product and process development, applied research, quality control of chemical manufacturing and technical services.
This rapidly growing company is involved in all areas of business including manufacturing, research and development, franchising and international operations. Knowledge of government and industry regulations is desired.
Our Corporation strives to attain four objectives: 1) To honor God in all we do; 2) To help people develop; 3) To pursue excellence; 4) To grow profitably.
If you have a solid technical background (preferably with an advanced degree in a chemical field), a proven record as manager of technical personnel and resources, and a desire for a challenging, demanding and rewarding career opportunity, send a detailed resume with salary history in confidence to:
ServiceMaster Industries Inc.
Will Southcombe
Coordinator of Employee Relations
2300 Warrenville Road
Downers Grove, Illinois 60515 | {"url":"http://www.asa3.org/ASA/PSCF/1977/JASA9-77Keister.html","timestamp":"2014-04-19T19:33:38Z","content_type":null,"content_length":"20157","record_id":"<urn:uuid:c1eef940-5c1a-4b96-bd0d-743a0cb5f683>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
Newton's Method
Graphs a function f (x) and the tangent lines to the graph of f (x) used in Newton's Method to approximate roots of f (x).
How to use || Examples || Other Notes
Try the Derivative Calculator.
How to Use
• Enter the function f (x) in the text input field marked "f (x)=" (Example: f (x)=x^2-2)
• Click the "Graph" button (this button also refreshes the graph)
• Enter the derivative f '(x) in the text input field marked "f '(x)=" (Example: f '(x)=2x)
• Select an x[0] value (Example: x=2)
This can be done in either of two ways:
□ Click and drag the mouse on the graph -- value of x[0] corresponds to horizontal mouse position on graph
□ Enter the value in the text input field marked "x[0]=" and click the "Graph" button to refresh
• The maximum number of iterations n can be changed using the "+" and "-" buttons under the "n=" field
The text input fields marked "f (x)=" and "f '(x)=" can accept a wide variety of expressions to represent functions, and the buttons under the graph allow various manipulations of the graph
coordinates. The text input field marked "x[0]=" can accept any decimal number.
For assistance computing the derivative f '(x), try the Derivative Calculator.
Examples
• Square roots of 2: f (x)=x^2-2, f '(x)=2x
• Approximating Pi: f (x)=tan(x/4)-1, f '(x)=sec^2(x)/4
• Which root?: f (x)=sin(x), f '(x)=cos(x), with x[0]=5, n=5
• Horizontal tangent: f (x)=x^2-2, f '(x)=2x, with x[0]=0
• Diverging: f (x)=4arctan(x), f '(x)=4/(1+x^2), with x[0]=1.5, n=4
• Multiple root: f (x)=x^3-3x+2, f '(x)=3x^2-3 (double root at x=1)
Other Notes
Newton's Method approximates roots of a function using the iteration formula x[n+1]=x[n]-f (x[n])/f '(x[n]). So x[n+1] is the x-intercept of the tangent line to the graph of f (x) at x[n].
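The iteration can be sketched in a few lines of Python (a hypothetical helper, not part of the applet), shown here on the "square roots of 2" example:

```python
def newton(f, fprime, x0, n=20, tol=1e-12):
    """Iterate x[n+1] = x[n] - f(x[n]) / f'(x[n]) until it converges or n runs out."""
    x = x0
    for _ in range(n):
        d = fprime(x)
        if d == 0:
            # horizontal tangent: the tangent line has no x-intercept
            raise ZeroDivisionError("f'(x) = 0; no next iterate")
        x_next = x - f(x) / d
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# f(x) = x^2 - 2, f'(x) = 2x: from x[0] = 2 this converges quickly to sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=2.0)
```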
Under certain rather technical conditions, Newton's Method can be guaranteed to converge quickly to a root x[r] of f (x), as long as x[0] is sufficiently close to x[r]. However, Newton's Method
encounters problems for x values near where f '(x)=0 or f ''(x)=0.
One example of a problem happens when f '(x[0])=0 (see the horizontal tangent example above) -- in this case, the tangent line is horizontal, so it has no x-intercept, and there is no x[1].
If x[0] is not close enough to the root x[r], then Newton's Method may not converge at all (see the diverging example above). Or Newton's Method may converge, but not to the expected root (see the
"which root?" example above).
If the root x[r] satisfies f '(x[r])=0, then x[r] is a multiple root, and even if Newton's Method converges, it will converge more slowly (as in the multiple root example above). | {"url":"http://cs.jsu.edu/mcis/faculty/leathrum/Mathlets/newton.html","timestamp":"2014-04-16T19:27:57Z","content_type":null,"content_length":"7489","record_id":"<urn:uuid:077c9af7-9397-4e60-82c4-7e3cd84083ce>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
Precision and recall
In the literature, two other measures are often used, namely the precision (the fraction of the objects labeled target that are genuine target objects) and the recall (the fraction of the genuine target objects that are labeled target).
These errors are returned in the second output variable of dd_error:
>> [e,f] = dd_error(z,w)
Here f(1) contains the precision, and f(2) the recall.
Finally, a derived performance criterion using the precision and recall is the F1 score, the harmonic mean of the two: F1 = 2 · precision · recall / (precision + recall).
This can be computed using dd_f1:
>> x = target_class(gendatb([50 0]),'1');
>> w = svdd(x,0.1);
>> z = oc_set(gendatb(200),'1');
>> dd_f1(x,w)
>> dd_f1(x*w)
>> x*w*dd_f1
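Outside of dd_tools the same quantities take only a few lines; a sketch in Python with made-up confusion counts (tp, fp, fn are illustrative, not taken from the example above):

```python
tp, fp, fn = 8, 2, 4          # true positives, false positives, false negatives (made up)

precision = tp / (tp + fp)    # fraction of predicted targets that are real targets
recall = tp / (tp + fn)       # fraction of real targets that are found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of the two
```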
David M.J. Tax 2006-07-26 | {"url":"http://homepage.tudelft.nl/n9d04/dd_tools/node20.html","timestamp":"2014-04-16T15:58:56Z","content_type":null,"content_length":"5246","record_id":"<urn:uuid:7200099e-cf5b-42a2-9863-2ff0c3132a79>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wiener-Khinchin relation
So far, we have only asserted that the sum of waves with random phases generates a time-stationary gaussian signal. We now have to check this. It is convenient to start with a signal defined on a finite interval of length T, written as a sum of cosines with random phases,

E(t) = Σ_n a_n cos(ω_n t + φ_n).

Notice that the frequencies come in multiples of the ``fundamental'' ω_1 = 2π/T. To compute the autocorrelation ⟨E(t) E(t + τ)⟩, the averaging on the right hand side has to be carried out by letting each of the phases φ_n range uniformly, and independently of the others, over 0 to 2π. The cross terms then average to zero, and we note that the autocorrelation is independent of t:

ξ(τ) = ⟨E(t) E(t + τ)⟩ = Σ_n (a_n^2/2) cos(ω_n τ).

Clearly, the quantities a_n^2/2 play the role of a power spectrum. If we define S(ω) to be the power per unit interval of angular frequency, the sum goes over, for a long interval, to

ξ(τ) = ∫_0^∞ S(ω) cos(ωτ) dω.

This is the ``Wiener-Khinchin theorem'' stating that the autocorrelation function is the Fourier transform of the power spectrum. It can also be written with the frequency measured in cycles (rather than radians) per second and denoted by ν,

ξ(τ) = ∫_0^∞ S(ν) cos(2πντ) dν,

and as before, setting τ = 0 recovers the total power: ξ(0) = ∫_0^∞ S(ν) dν.

In this particular case of the autocorrelation, we did not use the full independence of the phases; pairwise independence is enough to make the cross terms vanish. The autocorrelation, and hence the power spectrum, is therefore insensitive to the finer statistics of the signal.
A simple but instructive application of the Wiener-Khinchin theorem is to a power spectrum which is constant (``flat band'') between two frequencies and zero outside. For a band of width B centred on ν_0, the autocorrelation is the product of two factors: a sinc function which first vanishes when τ = 1/B, multiplied by an oscillation at the centre frequency ν_0. Values of the signal separated in time by much more than 1/B are therefore essentially uncorrelated; the ``coherence time'' is the reciprocal of the bandwidth.

Another important case, in some ways opposite to the preceding one, occurs when the band extends from zero up to a cutoff frequency B. A stretch of signal of duration T is then determined by about 2BT sample values taken at intervals of 1/2B. Clearly, this is the minimum number of measurements which would have to be made to reproduce the signal, since if we missed one of them the others would give us no clue about it. As we will now see, it is also the maximum number for this bandwidth!
The Wiener–Khinchin theorem (also known as the Wiener–Khintchine theorem and sometimes as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem) states that the power spectral
density of a wide-sense-stationary random process is the Fourier transform of the corresponding autocorrelation function.^[1]^[2]^[3]
Continuous case:
$S_{xx}(f)=\int_{-\infty}^\infty r_{xx}(\tau)\,e^{-j2\pi f\tau}\,d\tau$

where

$r_{xx}(\tau) = \operatorname{E}\big[\, x(t)\,x^*(t-\tau) \,\big]$

is the autocorrelation function defined in terms of statistical expectation, and where

$S_{xx}(f)$

is the power spectral density of the function $x(t)$. Note that the autocorrelation function is defined in terms of the expected value of a product, and that the Fourier transform of $x(t)$ does not exist, in general, because stationary random functions are not square integrable.
The asterisk denotes complex conjugate, and can be omitted if the random process is real-valued.
Discrete case:
$S_{xx}(f)=\sum_{k=-\infty}^\infty r_{xx}[k]\,e^{-j2\pi k f}$

where

$r_{xx}[k] = \operatorname{E}\big[\, x[n]\,x^*[n-k] \,\big]$

and where

$S_{xx}(f)$

is the power spectral density of the function with discrete values $x[n]$. Being a sampled and discrete-time sequence, the spectral density is periodic in the frequency domain.
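In the discrete case the theorem is easy to check numerically; a sketch using NumPy, with the expectation replaced by an unnormalized circular sum (for which the transform identity is exact):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)

# circular, unnormalized analogue of r_xx[k] = E[x[n] x[n-k]]
r = np.array([np.sum(x * np.roll(x, k)) for k in range(N)])

S_from_r = np.fft.fft(r)                # DFT of the autocorrelation
S_direct = np.abs(np.fft.fft(x)) ** 2   # periodogram |X[f]|^2
# the two agree, up to rounding error
```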
The theorem is useful for analyzing linear time-invariant systems, LTI systems, when the inputs and outputs are not square integrable, so their Fourier transforms do not exist. A corollary is that
the Fourier transform of the autocorrelation function of the output of an LTI system is equal to the product of the Fourier transform of the autocorrelation function of the input of the system times
the squared magnitude of the Fourier transform of the system impulse response. This works even when the Fourier transforms of the input and output signals do not exist because these signals are not
square integrable, so the system inputs and outputs cannot be directly related by the Fourier transform of the impulse response.
Since the Fourier transform of the autocorrelation function of a signal is the power spectrum of the signal, this corollary is equivalent to saying that the power spectrum of the output is equal to
the power spectrum of the input times the power transfer function.
This corollary is used in the parametric method for power spectrum estimation.
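The input-output relation for power spectra can be checked the same way; a sketch in which an arbitrary 3-tap moving-average filter is applied as a circular convolution (so that the frequency-domain product is exact at finite length):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096
x = rng.standard_normal(N)

h = np.array([1.0, 1.0, 1.0]) / 3.0        # impulse response of the LTI system
H = np.fft.fft(h, N)                       # its transfer function
y = np.fft.ifft(np.fft.fft(x) * H).real    # circular convolution of x with h

Sxx = np.abs(np.fft.fft(x)) ** 2           # input power spectrum
Syy = np.abs(np.fft.fft(y)) ** 2           # output power spectrum
# Syy equals |H|^2 * Sxx: the power transfer relation
```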
Discrepancy of definition
By the definitions involving infinite integrals in the articles on spectral density and autocorrelation, the Wiener–Khintchine theorem is a simple Fourier transform pair, trivially provable for any
square integrable function, i.e. for functions whose Fourier transforms exist. More usefully, and historically, the theorem applies to wide-sense-stationary random processes, signals whose Fourier
transforms do not exist, using the definition of autocorrelation function in terms of expected value rather than an infinite integral. This trivialization of the Wiener–Khintchine theorem is
commonplace in modern technical literature.

Courtesy: en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin_theorem
Dear Guest,
Spend a minute to
in a few simple steps, for complete access to the Social Learning Platform with Community Learning Features and Learning Resources.
If you are part of the Learning Community already, | {"url":"https://www.classle.net/book/weiner-khinchin-relation","timestamp":"2014-04-17T18:24:07Z","content_type":null,"content_length":"69041","record_id":"<urn:uuid:79c17443-b341-4b63-a214-25e0b3eea32d>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Totally Lost - Functions
Let X be a set (possibly infinite) and let f: X--->X. Define a function g: X--->X by requiring that g(x) = f(f(x)) for all x ϵ X.
a.) Prove that Range g(x) ⊆ Range f(x). Give an example where Range g(x) != Range f(x), using a set X with just three elements.
So I'm not really sure what this is asking... am I supposed to come up with my own function, like f(x) = g(x) and f(g(x)) = g(x),
or am I supposed to make my own functions, like f(x) = x and g(x) = 3x, etc.?
Any tips please?
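Not part of the thread, but a concrete instance of the kind of example part (a) asks for, on X = {1, 2, 3} (this particular choice of f is mine):

```python
# f: X -> X with X = {1, 2, 3}; define g(x) = f(f(x))
f = {1: 2, 2: 3, 3: 3}
g = {x: f[f[x]] for x in f}

range_f = set(f.values())   # {2, 3}
range_g = set(g.values())   # {3}: contained in, but not equal to, range_f
```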
Mambo Poker
Brian Alspach
11 January 2000
We determine the probabilities of the hands for mambo poker.
In mambo poker one uses the best 3-card hand which can be formed from 4 cards. We are going to count the numbers of ways of achieving the possible hands. Since the hands are based on 3 cards, when we
use the word straight or flush throughout this file, we automatically mean 3-card straights and 3-card flushes, respectively. Any deviations from this will be named explicitly.
Since the game is played high-low, we examine both high and low hands. We consider high hands first. The total number of possible 4-card hands is given by C(52,4) = 270,725.
In order to avoid double counting certain hands, we shall mention a variety of 4-card possibilities and decide later how the hands should be valued. For example, there are 4-card hands containing
both a straight and a flush. We shall distinguish them initially in order to make certain, for example, that straights really beat flushes.
The initial step is to determine all the types of hands to be counted. They are 4-of-a-kind, 4-card straight flush, 4-card flush containing a straight flush, 4-card flush not containing a straight
flush, 4-card straight containing a straight flush, 4-card straight not containing a straight flush, straight flush with a pair, straight flush without a pair, 3-of-a-kind, straight & flush, straight
without a pair, straight with a pair, flush without a pair, flush with a pair, 2 pairs, 1 pair and high card.
4-of-a-kind.
There are 13 possible ranks for the quartet and precisely 1 4-of-a-kind of each rank. Thus, there are 13 4-of-a-kind hands.
4-card straight flush.
Any card of rank ace through jack may begin a 4-card straight flush in each of the 4 suits. Thus, there are 11 · 4 = 44 4-card straight flushes.
4-card flush containing a straight flush.
There are 8 straight flushes beginning with an A or a Q. Any of 9 cards can be added to each to produce a flush which is not a 4-card straight flush. There are 40 straight flushes remaining and
any of 8 cards can be added to each. This gives us 8 · 9 + 40 · 8 = 392 hands.
4-card flush not containing a straight flush.
We employ a technique in this case which will be used in several other cases, leading us to explain it now in detail. There are C(13,4) = 715 sets of 4 distinct ranks. Some of them contain 3 consecutive ranks: either they have the form x,x+1,x+2,y, where y is neither x-1 nor x+3, for if x = A or x = Q, there are 9 choices for y and if x lies between 2 and J, inclusive, there are 8 choices for y, or else they consist of 4 consecutive ranks, which happens in 11 ways. So removing these 2 · 9 + 10 · 8 + 11 = 109 sets of ranks leaves 606 sets producing no straight. All 4 cards must lie in the same suit, and there are 4 choices of suit, giving 606 · 4 = 2,424 hands.
4-card straight containing a straight flush.
There are 8 straight flushes beginning with A or Q. We can add any of 3 cards to each of them to obtain a 4-card straight containing a straight flush. To the remaining 40 straight flushes we may
add any of 6 cards. This produces 8 · 3 + 40 · 6 = 264 hands.
4-card straight not containing a straight flush.
There are 11 sets {x, x+1, x+2, x+3} of 4 consecutive ranks, and for each set there are 4^4 = 256 choices of the 4 cards, but some choices correspond to hands already counted: All in the same suit is a 4-card straight flush, and either the first 3 or the last 3 in the same suit gives a straight flush. There are 4 choices for the former and 24 choices for the latter. Removing these 28 hands already counted gives 11 · (256 - 28) = 2,508 hands.
Straight flush with a pair.
There are 48 straight flushes, and to each we may add any of 9 cards producing a pair. This gives 48 · 9 = 432 hands.
Straight flush without a pair.
If a straight flush begins with an ace or queen, any of 27 cards may be added without forming any hand which already has been counted. If a straight flush begins with any other card, one may add
any of 24 cards without creating a hand already counted. This produces 8 · 27 + 40 · 24 = 1,176 hands.
3-of-a-kind.
There are 13 choices for the rank of the 3-of-a-kind, 4 choices for the 3 cards of the chosen rank, and the remaining card may be any of 48 cards. This yields 13 · 4 · 48 = 2,496 hands.
Straight & flush.
This is a hand of the form x,x+1,x+2,y, where precisely 2 of the cards from ranks x,x+1,x+2 are in the same suit as y. There are 98 sets of ranks of this form not allowing a 4-card straight, 4 choices for the suit of y, 3 choices for which 2 of the straight cards share the suit of y, and 3 choices for the suit of the other card. This gives us 98 · 4 · 3 · 3 = 3,528 hands.
Straight without a pair.
A straight without a pair has the form x,x+1,x+2,y, and we saw earlier that there are 98 such rank sets not allowing a 4-card straight. For each set of ranks, there are 4^4 = 256 choices for the
cards, but we must exclude choices with 3 or 4 cards in the same suit in order to eliminate flushes and straight flushes. This eliminates 4 + 48 = 52 choices, leaving 204 per set of ranks and giving 98 · 204 = 19,992 hands.
Straight with a pair.
The set of ranks for this type of hand is x,x+1,x+2 with one of the 3 ranks repeated. There are 12 choices for the straight, 3 choices for the rank which is paired, 6 choices for the pair, and 4 choices for each of the other 2 cards, giving 12 · 3 · 6 · 16 = 3,456 hands. The 432 straight flushes with a pair have already been counted, leaving 3,024 hands.
Flush without a pair.
As we saw in an earlier case, there are 606 sets of 4 distinct ranks containing no 3 consecutive ranks. There are 4 choices for which 3 of the 4 ranks form the flush, 4 choices for the flush suit, and 3 choices for the suit of the remaining card. This gives 606 · 4 · 4 · 3 = 29,088 hands.
Flush with a pair.
There are C(13,3) - 12 = 274 sets of 3 distinct ranks containing no straight, 4 choices for the flush suit, 3 choices for which of the 3 ranks is paired, and 3 choices for the card completing the pair. This gives 274 · 4 · 3 · 3 = 9,864 hands.
2 pairs.
There are C(13,2) = 78 choices for the 2 ranks and 6 choices for each pair. This gives 78 · 6 · 6 = 2,808 hands.
1 pair.
There are, as seen above, 274 sets of 3 distinct ranks containing no straight, and 3 choices for which rank is paired. For each such choice there are 6 · 4 · 4 = 96 ways to assign the suits, of which 12 produce a flush (both unpaired cards falling in one of the pair's 2 suits). This leaves 84 and gives 274 · 3 · 84 = 69,048 hands.
High card.
There are 606 sets of 4 distinct ranks containing no straight. For each there are 256 choices of suits, of which the 52 having 3 or 4 cards in the same suit must be removed. This gives 606 · 204 = 123,624 hands.
If we add all the types of hands together we obtain 270,725 as we should. This tells us the above breakdown of all the 4-card hands is a partition of the set of all 4-card hands, so there is no double counting.
There are 5 types of hands containing straight flushes. In spite of that, the sum of these types is smaller than any other kind of hand. We obtain 44 + 392 + 264 + 432 + 1,176 = 2,308 hands with a
straight flush. There are 2 types of hands which contain 3-of-a-kind. Thus, there are 2,509 3-of-a-kind hands. We see that there is not a big distinction between these two premium hands.
The most interesting comparison is between straights and flushes. Let us see what happens if we rank a straight higher. This means every possible hand containing a straight will be counted as a
straight. Doing so gives us 2,508 + 3,528 + 3,024 + 19,992 = 29,052 straights. The number of flushes becomes 2,424 + 9,864 + 29,088 = 41,376 and we see it is correct to rank straights as the better
hand. In fact, a straight is considerably stronger as can be seen from the numbers.
The number of pairs is 71,856 since 2 types of hands will count as pairs. The number of high card hands is 123,624. Below is a table encapsulating the information.
│ Type of Hand │ Number of Hands │
│ Straight flush │ 2,308 │
│ 3-of-a-kind │ 2,509 │
│ Straight │ 29,052 │
│ Flush │ 41,376 │
│ One pair │ 71,856 │
│ High card │ 123,624 │
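The summary table can be verified by brute force: enumerate all C(52,4) = 270,725 four-card hands and classify each by its best 3-card subset. A sketch in Python (the rank/suit encoding is mine, not Alspach's):

```python
from itertools import combinations

RANKS = range(1, 14)   # 1 = ace (plays low or high), ..., 13 = king
DECK = [(r, s) for r in RANKS for s in range(4)]

# the 12 three-card straights: A-2-3 up through J-Q-K, plus Q-K-A
STRAIGHTS = {frozenset((x, x + 1, x + 2)) for x in range(1, 12)} | {frozenset((12, 13, 1))}

def best3(hand):
    """Best 3-card category in a 4-card hand: 5 = straight flush, 4 = trips,
    3 = straight, 2 = flush, 1 = pair, 0 = high card."""
    best = 0
    for (r1, s1), (r2, s2), (r3, s3) in combinations(hand, 3):
        flush = s1 == s2 == s3
        straight = frozenset((r1, r2, r3)) in STRAIGHTS
        if straight and flush:
            cat = 5
        elif r1 == r2 == r3:
            cat = 4
        elif straight:
            cat = 3
        elif flush:
            cat = 2
        elif r1 == r2 or r2 == r3 or r1 == r3:
            cat = 1
        else:
            cat = 0
        best = max(best, cat)
    return best

counts = [0] * 6
for hand in combinations(DECK, 4):
    counts[best3(hand)] += 1
# counts reproduces the table (4-of-a-kind folds into 3-of-a-kind: 13 + 2,496 = 2,509)
```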
We now examine the possible number of low hands given the rule that the player must make a 6 or better to qualify for low. There are only two possibilities for a low hand. Either a player has a hand with 4 distinct ranks, at least 3 of which are 6 or below, or the player has 3 distinct ranks, all of which are 6 or below.
4 distinct low cards.
There are C(6,4) = 15 choices for the 4 low ranks and 4 choices of suit for each card. This gives 15 · 256 = 3,840 hands.
3 distinct low cards and 1 big card.
There are C(6,3) = 20 choices for the 3 low ranks, 7 choices for the rank of the big card, and 4 choices of suit for each of the 4 cards. This gives 20 · 7 · 256 = 35,840 hands.
3 distinct low cards and a pair.
There are 20 choices for the 3 low ranks as in the previous case, there are 3 choices for the rank of the pair, there are 6 choices for the pair, and there are 4 choices for each of the remaining
cards. This yields 20 · 3 · 6 · 4 · 4 = 5,760 hands.
We see there are only 45,440 low hands, which means the probability of achieving a low is only 45,440/270,725 ≈ 0.168.
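The low-hand count can be checked the same way: a hand qualifies for a 6-or-better low exactly when it contains at least 3 distinct ranks among ace through 6. A sketch:

```python
from itertools import combinations

DECK = [(r, s) for r in range(1, 14) for s in range(4)]   # 1 = ace, ..., 13 = king

lows = 0
for hand in combinations(DECK, 4):
    low_ranks = {r for r, s in hand if r <= 6}   # ace through 6 play low
    if len(low_ranks) >= 3:
        lows += 1
# lows comes out to 3,840 + 35,840 + 5,760 = 45,440
```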
Proof of 1 + 1 = 2.
Starting from what basis? In Peano's axioms for the natural number system, in which "successor" is taken as a primitive notion, "2" is defined as the "successor" of 1 and "a + 1" is defined to be the successor of a. From that, immediately, 1 + 1 is the successor of 1, which is 2.

Peano's axioms: the natural numbers are a set of objects, N (called "numbers"), together with a "successor function" s(n), such that:
1) There is a unique member of the set (called "1") such that the successor function maps N one to one and onto N - {1}.
2) If A is a subset of N such that 1 is in A and, whenever x is in A, the successor of x, s(x), is in A, then A = N.

Once we have that, we define "2" to be the successor of "1", "3" to be the successor of "2", "4" to be the successor of "3", etc. We define "a + b" by:
1) a + 1 is s(a).
2) If b is not 1, then there exists c such that b = s(c). In that case, a + b = s(a + c).

From (1), 1 + 1 = s(1) = 2. A little more interesting is that 2 + 2 = 4: since 2 is NOT 1, 2 = s(1), so 2 + 2 = s(2 + 1). Now 2 + 1 = s(2) = 3, so 2 + 2 = s(3) = 4.
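For what it's worth, the recursive definition of addition above transcribes directly into code; a toy model in Python with numbers represented as nested successor tuples:

```python
ONE = ()                 # the distinguished element "1"
def s(n):                # the successor function s(n)
    return (n,)

TWO, THREE, FOUR = s(ONE), s(s(ONE)), s(s(s(ONE)))

def add(a, b):
    # a + 1 = s(a); if b = s(c), then a + b = s(a + c)
    if b == ONE:
        return s(a)
    return s(add(a, b[0]))
```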
Without knowing your mathematical background, and hence the type of proof expected, asking for a proof of 1 + 1 = 2 is a waste of time. E.g., an axiomatic derivation belongs in the Logic subforum (see Russell and Whitehead), and you would be advised to go to a library, find their book and read it. Thread closed.
Label The Voltages Across And The Direction Of ... | Chegg.com
I need help with voltage, current and Kirchhoff's laws.
1. Label the voltages across and the direction of current flowing through each resistor in the circuit below.
2. Determine the voltage across each resistor and current flowing through each resistor in the circuit shown below using KVL, KCL, and Ohm’s Law.
Electrical Engineering
Answers (1)
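The circuit figures are not reproduced here, so as an illustration only, here is the KVL/KCL/Ohm's-law procedure on a hypothetical series circuit: a 10 V source driving R1 = 2 Ω and R2 = 3 Ω in series (these values are made up, not the assigned circuit's):

```python
# KCL: one loop, so the same current I flows through both resistors.
# KVL: V - I*R1 - I*R2 = 0, so I = V / (R1 + R2).
V, R1, R2 = 10.0, 2.0, 3.0

I = V / (R1 + R2)          # 2 A through both resistors
V1, V2 = I * R1, I * R2    # 4 V across R1, 6 V across R2 (Ohm's law)

assert abs(V - V1 - V2) < 1e-12   # KVL check: drops sum to the source voltage
```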
Microwave Calculators - Microwaves101.com
Congratulations, you have found one of the most useful areas of the Microwaves101 site! Here are some calculators that will help you in your microwave work. Check back often, we plan to continuously
add more! If you notice any errors or have problems with a Microwaves101 calculator, let us know and we will do our best to fix it. And if there is something in the microwave universe that you need
to calculate often, drop us a note and we'll see what we can do to include it here. You may win a fine Microwaves101 pocket knife for your suggestions!
Delay/length/phase converter (new for May 2013!)
Parallel plate capacitance calculator
Synthesizing lumped-element Chebyshev filters (N=3, N=4 and N=5),
Calculating RF sheet resistance for up to three metal layers,
Calculating N-section impedance transformers,
Calculating and plotting K-factor, maximum available gain, group delay and much more from S-parameters (our famous S-Parameter Utilities spreadsheet!)
An amazing coax spreadsheet,
Cascade analysis of gain, P1dB and noise figure,
and more! | {"url":"http://microwaves101.com/content/calculators.cfm","timestamp":"2014-04-20T21:12:48Z","content_type":null,"content_length":"15277","record_id":"<urn:uuid:730f3f6e-9872-4693-a691-791257a647b0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question on having the root of a variable in denominator
January 29th 2013, 01:51 AM #1
Jan 2013
Hi, I've just been learning how to do some derivatives and have become stuck on a question because I think I don't understand some algebra.
OK, the question is finding the derivative
d/du 1/SQRT(u)?
The answers say -1/((2u)(SQRT u)).
I know that 1/SQRT(u) = SQRT(u)^-1, but to get u on its own would it then become u^-1.5? Or u^-0.5? And I'm wondering, is the answer a typo? I thought I knew the power rule and can't see how that answer can be right. I know this is calc now, but I'm posting here as I'm missing some simple algebra/arithmetic.
Re: Question on having the root of a variable in denominator
All your considerations and calculations are correct.
You differentiated the function correctly:
$v'(u)=-\frac12 \cdot u^{-\frac32} = -\frac1{2 u^{\frac32}}= -\frac1{2 u^{1+\frac12}}=-\frac1{2 u \cdot \sqrt{u}}$
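As a quick numeric sanity check (not part of the original thread), the closed form -1/(2u·sqrt(u)) can be compared against a central-difference approximation of the derivative of u^(-1/2) at a sample point:

```python
# Compare the claimed closed form against a numerical derivative at u = 4.
import math

def f(u):
    return u ** -0.5

u, h = 4.0, 1e-6
numeric = (f(u + h) - f(u - h)) / (2 * h)   # central difference
closed = -1.0 / (2 * u * math.sqrt(u))      # -1/(2u*sqrt(u)) = -0.0625 at u = 4
print(numeric, closed)
```

The two values agree to many digits, confirming the book's answer is not a typo.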
Re: Question on having the root of a variable in denominator
Ah, I see. Great, thank you.
Jan 2013 | {"url":"http://mathhelpforum.com/algebra/212207-question-having-root-variable-denominator.html","timestamp":"2014-04-19T01:57:09Z","content_type":null,"content_length":"38734","record_id":"<urn:uuid:8871edb3-8ba7-48d5-a822-556ae7049c5c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00317-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cubic Vertex-Transitive Non-Cayley Graphs of Order $8p$
A graph is vertex-transitive if its automorphism group acts transitively on its vertices. A vertex-transitive graph is a Cayley graph if its automorphism group contains a subgroup acting regularly on
its vertices. In this paper, the cubic vertex-transitive non-Cayley graphs of order $8p$ are classified for each prime $p$. It follows from this classification that there are two sporadic and two
infinite families of such graphs, of which the sporadic ones have order $56$, one infinite family exists for every prime $p>3$ and the other family exists if and only if $p\equiv 1\mod 4$. For each
family there is a unique graph for a given order.
Cayley graphs; Vertex-transitive graphs; Automorphism groups
Full Text: | {"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v19i1p53/0","timestamp":"2014-04-21T04:38:34Z","content_type":null,"content_length":"15649","record_id":"<urn:uuid:38795162-4c97-48ec-8aae-ed2d62f69cad>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00641-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rotate Me
Pseudosphere is a surface of revolution of the curve tractrix. Pseudosphere is a surface of constant curvature, having Gaussian curvature of -1 everywhere (except at the cusp).
Hermann Karcher
The Pseudosphere is a surface of revolution (of the
and has Gaussian curvature minus one, or in other words, the
product of its principal curvatures is -1. On a surface of
revolution, this translates into a simple analytic property:
Parametrize the meridian curve by arc length s ---> (r(s) , h(s)),
r'^2 + h'^2 = 1. Then r is a solution of the differential equation
r'' = r, and consequently h is also known---it is the anti-derivative
of sqrt(1 - (r')^2).
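As a sketch of how the meridian follows from these relations (the choice of the decaying solution is an assumption for illustration): taking r(s) = exp(-s), which satisfies r'' = r, and integrating h'(s) = sqrt(1 - exp(-2s)) with a crude Euler step traces out the tractrix.

```python
# Trace the pseudosphere meridian (tractrix) from r'' = r with r(s) = e^{-s}.
import math

def meridian(s_max=5.0, n=1000):
    ds = s_max / n
    pts, h = [], 0.0
    for i in range(n + 1):
        s = i * ds
        pts.append((math.exp(-s), h))      # (r, h) along the meridian
        # h' = sqrt(1 - r'^2) = sqrt(1 - e^{-2s}); simple Euler integration
        h += math.sqrt(max(0.0, 1.0 - math.exp(-2 * s))) * ds
    return pts

pts = meridian()
# r shrinks from 1 toward 0 while h keeps growing; the cusp sits at s = 0.
print(pts[0], pts[-1])
```

Revolving this (r, h) curve about the h-axis gives the pseudosphere described above.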
The Pseudosphere is best known because its intrinsic geometry is
hyperbolic, the meridians are a family of asymptotic geodesics
and the orthogonal latitudes are therefore a geodesically parallel
family of “horocycles”, i.e. limits of circles as their midpoints
converge to the limit point of the asymptotic geodesics.
This Pseudosphere is obtained by the construction which relates
solutions of the Sine-Gordon equation to surfaces of Gaussian
curvature -1, here the solution is a one-soliton solution:
q(u,v) := 4 arctan(exp(u)).
The parametrization obtained has another remarkable property: The
diagonal curves in ALL the parameter quadrilaterals have the
same length! Nets used for fishing also have such equiquadrilaterals
as meshes; the mathematical term is “Tchebycheff net”. Such
Tchebycheff nets exist on all surfaces which are isometric immersions
of (portions of) the hyperbolic plane. This fact plays a key role in the
proof of Hilbert's theorem which says: There is no smooth isometric
immersion of the whole hyperbolic plane into Euclidean three-space.
Wikipedia pseudosphere
〈Pseudosphere〉 (1990s), by Hermann Karcher. pseudosphere.pdf
〈About Pseudospherical Surfaces〉 (1990s), by Chuu-Lian Terng. Pseudospherical_Surfaces.pdf
blog comments powered by | {"url":"http://xahlee.info/surface/pseudosphere/pseudosphere.html","timestamp":"2014-04-18T03:36:15Z","content_type":null,"content_length":"9083","record_id":"<urn:uuid:b384240d-a95d-4cb3-b54d-1da4277f7a72>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00101-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bifurcation from interval and positive solutions for a class of fourth-order two-point boundary value problem
We consider the fourth-order two-point boundary value problem , , , which is not necessarily linearizable. We give conditions on the parameters k, l and that guarantee the existence of positive
solutions. The proof of our main result is based upon topological degree theory and global bifurcation techniques.
MSC: 34B15.
topological degree; fourth-order ordinary differential equation; bifurcation; positive solution; eigenvalue
1 Introduction
The deformations of an elastic beam in an equilibrium state with both endpoints fixed can be described by the fourth-order boundary value problem
where is continuous, is a parameter and l is a given constant. Since problem (1.1) cannot be transformed into a system of second-order equations, the treatment methods for second-order systems do not apply to it. Thus, the existing literature on problem (1.1) is limited. When , the existence of positive solutions of problem (1.1) has been studied by several authors; see [1-5]. In particular, when , Xu and Han [6] studied the existence of nodal solutions of problem (1.1) by applying disconjugate operator theory and bifurcation techniques.
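The paper's actual problem involves parameters k, l and a nonlinearity f whose formulas are not reproduced in this excerpt, so the following is a hedged illustration only: a finite-difference solve of the simplest clamped-beam model u'''' = 1 on [0,1] with u(0) = u(1) = u'(0) = u'(1) = 0, whose exact solution is u(t) = t^2(1-t)^2/24.

```python
# Finite differences for the clamped beam u'''' = 1, u(0)=u(1)=u'(0)=u'(1)=0.
def solve_beam(N=50):
    h = 1.0 / N
    m = N - 1                        # unknowns u_1 .. u_{N-1}; u_0 = u_N = 0
    A = [[0.0] * m for _ in range(m)]
    b = [h ** 4] * m                 # right-hand side of u'''' = 1
    for i in range(m):               # 5-point stencil for the 4th derivative
        for k, c in zip(range(i - 2, i + 3), (1.0, -4.0, 6.0, -4.0, 1.0)):
            if 0 <= k < m:
                A[i][k] += c
    A[0][0] += 1.0                   # ghost point u_{-1} = u_1   (u'(0) = 0)
    A[m - 1][m - 1] += 1.0           # ghost point u_{N+1} = u_{N-1} (u'(1) = 0)
    for col in range(m):             # Gaussian elimination, partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    u = [0.0] * m
    for r in range(m - 1, -1, -1):   # back substitution
        u[r] = (b[r] - sum(A[r][c] * u[c] for c in range(r + 1, m))) / A[r][r]
    return u

u = solve_beam()
mid = u[len(u) // 2]                 # u(1/2); exact value is 1/384
print(mid)
```

The computed deflection is positive and symmetric, the qualitative shape a positive solution of the paper's nonlinear problem would also have.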
Recently, motivated by [6], when k, l satisfy (A1), Shen [7] studied the existence of nodal solutions of a general fourth-order boundary value problem by applying disconjugate operator theory [8,9]
and Rabinowitz’s global bifurcation theorem
(A1) one of following conditions holds:
(i) k, l satisfying are given constants with
(ii) k, l satisfying are given constants with
In this paper, we consider bifurcation from interval and positive solutions for problem (1.2). In order to prove our main result, condition (A1) and the following weaker conditions are satisfied
throughout this paper:
(H1) is continuous and there exist functions , , , and such that
for some functions , defined on with
for some functions , defined on with
(H3) There exists a function with in any subinterval of such that
It is the purpose of this paper to study the existence of positive solutions of (1.2) under conditions (A1), (H1), (H2) and (H3). The main tool we use is the following global bifurcation theorem for
the problem which is not necessarily linearizable.
Theorem A (Rabinowitz [10])
LetVbe a real reflexive Banach space. Let be completely continuous such that , . Let ( ) be such that is an isolated solution of the following equation:
for and , where , are not bifurcation points of (1.10). Furthermore, assume that
where is an isolating neighborhood of the trivial solution. Let
Then there exists a continuum (i.e., a closed connected set) ofcontaining , and either
Remark 1.1 For other results on the existence and multiplicity of positive solutions and nodal solutions for boundary value problems of fourth-order ordinary differential equations based on
bifurcation techniques, see [11-20].
2 Hypotheses and lemmas
Theorem 2.1 (see [[7], Theorem 2.4])
Let (A1) hold. Then
(i) is disconjugate on , and has a factorization
Theorem 2.2 (see [[7], Theorem 2.7])
Let (A1) hold and with on any subinterval of . Then
(i) the problem
has an infinite sequence of positive eigenvalues
(iii) to each eigenvalue , there corresponds an essentially unique eigenfunction which has exactly simple zeros in and is positive near 0;
(iv) given an arbitrary subinterval of , an eigenfunction that belongs to a sufficiently large eigenvalue changes its sign in that subinterval;
(v) for each , the algebraic multiplicity of is 1.
Theorem 2.3 (see [[7], Theorem 2.8]) (Maximum principle)
Let (A1) hold. Let with on and in . If satisfies
Let with the norm . Let with its usual norm . By a positive solution of (1.2), we mean x is a solution of (1.2) with (i.e., in and ).
Let with the inner product and the norm . Further, define the linear operator
Then is a closed operator and is completely continuous.
Lemma 2.4Let be the first eigenfunction of (2.5). Then, for all , we get
Integrating by parts, we obtain
Let be the closure of the set of positive solutions of the problem
We extend the function f to a continuous function defined on by
Then for . For , let x be an arbitrary solution of the problem
Since for , we have for . Thus x is a nonnegative solution of (2.11), and the closure of the set of nontrivial solutions of (2.13) in is exactly Σ.
Let be the Nemytskii operator associated with the function
Then (2.13), with , is equivalent to the operator equation
In the following, we shall apply the Leray-Schauder degree theory, mainly to the mapping ,
For , let , and let denote the degree of on with respect to 0.
Lemma 2.5Let be a compact interval with . Then there exists a number with the property
Proof Suppose to the contrary that there exist sequences and in , in E, such that for all , then in .
Set . Then and . Now, from condition (H1), we have the following:
and, accordingly,
Let and denote the nonnegative eigenfunctions corresponding to and , respectively. Then we have, from the first inequality in (2.19),
From Lemma 2.4, we have
Since in E, from (1.6) we have
By the fact that , we conclude that in E. Thus,
Combining this and (2.21) and letting in (2.20), we get
and consequently
Similarly, we deduce from the second inequality in (2.19) that
Proof Lemma 2.5, applied to the interval , guarantees the existence of such that for ,
which ends the proof.□
Lemma 2.7Suppose . Then there exists such that with , ,
where is the nonnegative eigenfunction corresponding to .
Proof We assume to the contrary that there exist and a sequence , with and in E, such that for all . As
Notice that has a unique decomposition
where and . Since on and , we have from (2.32) that .
By (H1), there exists such that
Since , there exists such that
and consequently
Applying Lemma 2.4 and (2.37), it follows that
This contradicts (2.33).□
Proof Let , where is the number asserted in Lemma 2.7. As is bounded in , there exists such that for all . By Lemma 2.7, one has
Now, using Theorem A, we may prove the following.
Proposition 2.9 is a bifurcation interval from the trivial solution for (2.15). There exists an unbounded componentCof a positive solution of (2.15), which meets . Moreover,
Proof For fixed with , let us take that , and . It is easy to check that for , all of the conditions of Theorem A are satisfied. So, there exists a connected component of solutions of (2.15)
containing , and either
By Lemma 2.5, the case (ii) cannot occur. Thus is unbounded bifurcated from in . Furthermore, we have from Lemma 2.5 that for any closed interval , if , then in E is impossible. So, must be
bifurcated from in .□
3 Main results
Theorem 3.1Let (A1), (H1), (H2), (H3) hold. Assume that either
then problem (1.2) has at least one positive solution.
Proof of Theorem 3.1 It is clear that any solution of (2.15) of the form yields a solution x of (1.2). We will show that C crosses the hyperplane in . To do this, it is enough to show that C joins
to . Let satisfy
We note that for all since is the only solution of (2.15) for and .
In this case, we show that
We divide the proof into two steps.
Step 1. We show that is bounded.
Let denote the nonnegative eigenfunction corresponding to .
From (3.4), we have
By Lemma 2.4, we have
Step 2. We show that C joins to .
From (3.3) and (3.7), we have that . Notice that (2.15) is equivalent to the integral equation
which implies that
We divide both of (3.9) by and set . Since is bounded in E, there exists a subsequence of and , with and on , such that
relabeling if necessary. Thus, (3.9) yields that
which implies that
Let and denote the nonnegative eigenfunction corresponding to and , respectively. Then we have, from the first inequality in (3.12),
From Lemma 2.4, integrating by parts, we obtain that
and consequently
Similarly, we deduce from the second inequality in (3.12) that
and, moreover,
Assume that is bounded; applying a similar argument to that used in Step 2 of Case 1, after taking a subsequence and relabeling if necessary, we obtain
Authors’ contributions
WS conceived of the study, and participated in its design and coordination and helped to draft the manuscript. TH drafted the manuscript. All authors read and approved the final manuscript.
Sign up to receive new article alerts from Boundary Value Problems | {"url":"http://www.boundaryvalueproblems.com/content/2013/1/170","timestamp":"2014-04-17T07:34:57Z","content_type":null,"content_length":"283176","record_id":"<urn:uuid:f5cc17d2-a5c2-40c3-b7e6-bb6b5530f1c5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00148-ip-10-147-4-33.ec2.internal.warc.gz"} |
… + 42x + 36) Example 1: Factor the Sum or Difference of Two Cubes. Checkpoint: Factor the polynomial. 1. x^3 + 512 = (x + 8)(x^2 - 8x + 64) 2. x^3 - h^3 = (x - h)(x^2 + hx + h^2)
Variable and Verbal Expressions
1) the difference of 10 and 5 2) the quotient of 14 and 7 3) u decreased by 17 4) half of 14 5) x increased by 6 6) the product of x and 7 7) the sum of q and 8 8) 6 ...
YOSEMITE HIGH SCHOOL 50200 ROAD 427 - OAKHURST, CA 93644
... 5.6 Dividing Polynomials 5.7 Synthetic Division Worksheet-Obj #73, #124 Simplify Expressions With Rational Exponents, Factor Sum or Difference Two Cubes Worksheet- Obj #122 ...
revised 4/10/09, 3 of 3. IV. Expressions with four terms. A. Group the expressions into two groups of two terms each. B. Factor out the GCF of each group.
MATH 098 Basic Algebra II
MATH 098 - Basic Algebra II Na MATH 098 Basic Algebra II
A. Can I take anything out? Always look for the greatest common factor !!
Title: Probability in the 6th Grade
Days 4 and 5 - Using the Bags and Colored Cubes (Worksheet ... they should then create a chart that shows the sum ... not the same, explain why you think there is a difference?
HeyMath! E-Lessons
HeyMath! E-Lessons Programme (HELP) Ma2:Number and algebra Rescue the princess - game Ma3:Shape, space and measures Measures and mensuration Ma4:Handling data Numbers ...
SMART Board Interactive Whiteboard Notes
SMART Board Interactive Whiteboard Notes
Factoring A Sum+Difference of Cubes
Factoring A Sum/Difference of Cubes Factoring A Sum+Difference of Cubes. Kuta Software - Infinite Algebra 2 Name_____ Period____ Date ...
Mannheim District 83 2nd Grade: Math Core Map
Math Map: 2nd Grade page 2 10/2008 Addition Utilize the order (commutative) and zero (identify) properties to find the sum Use a number line to count on ...
Factoring Flow Chart
Sum and Difference of Cubes: The sum and difference of cubes can be factored: a^3 - b^3 = (a - b)(a^2 + ab + b^2) and a^3 + b^3 = (a + b)(a^2 - ab + b^2).
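As a quick brute-force check of the two cube factorizations above (illustration only):

```python
# Verify both cube identities over a grid of small integers.
for a in range(-5, 6):
    for b in range(-5, 6):
        assert a**3 - b**3 == (a - b) * (a**2 + a*b + b**2)
        assert a**3 + b**3 == (a + b) * (a**2 - a*b + b**2)
print("both identities hold")
```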
Analyze rational functions with slant asymptotes
Lesson Plans - Polynomials C lass: Pre-Calculus Grades: 10 -12 MA Standard 2.0: Students are adept at the arithmetic of complex numbers MA Standard 4.0: Students know ...
Sum of Two Cubes Worksheet
Microsoft Word - Sum of Two Cubes Worksheet.doc. Sum of Two Cubes Worksheet www.edonyourown.com Sum of Two Cubes Worksheet By Janine Bouyssounouse The formula for ...
LESSON PLAN maths grade 10 FINAL
... Self, peer, group, educator Expanded Opportunities: Sum and difference of two cubes Sum and difference of two cubes Sum and difference of two cubes Resources Worksheet, Worksheet ...
1 PREREQUISITE SKILLS WORKSHEET FOR CALCULUS School _____ Grade __ Prerequisite Skills Activity Objectives for Grade/Course State Standards Make ...
Lesson Guide 24
PALINDROMES A. Apalindrome is a word, phrase, sentence, or number that is the same backward or forward B. Examples of words that are palindromes: 1.
Topic: Addition and subtraction with sums and differences to 15 ...
(unifix cubes, touch math, using counters, drawing ... use a frog for jumping to find the correct sum or difference. ... Worksheet generator: http://themathworksheetsite.com ...
Factoring A Sum+Difference of Cubes
Factoring A Sum/Difference of Cubes Factoring A Sum+Difference of Cubes
Factoring Review worksheet
... for factoring polynomials: Factoring Review worksheet ... it is a binomial), then see if it is the difference of two squares : () 2 2 b a-. | {"url":"http://www.cawnet.org/docid/sum+and+difference+of+cubes+worksheet/","timestamp":"2014-04-17T15:37:15Z","content_type":null,"content_length":"51000","record_id":"<urn:uuid:873297a8-a0b5-4ffd-a1cf-c5000ab3ba12>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
Medina, WA Prealgebra Tutor
Find a Medina, WA Prealgebra Tutor
...I received exemplary marks in the inorganic chemistry series at University of Washington, complete with accompanying labs. I completed introductory organic chemistry as well, also at
University of Washington. I completed my History degree at the University of Washington, specializing in Ancient and Medieval Europe.
16 Subjects: including prealgebra, chemistry, reading, writing
...Though my degree is in English, and I certainly enjoy crafting essays to perfection, I also enjoy digging into history and science of all types, and other subjects as well. I'm also nearly
fluent in Spanish, and would be happy to converse with students taking Spanish classes. I like to communic...
39 Subjects: including prealgebra, English, Spanish, reading
...I have been a math tutor for the past 5 years on and off and have spent multiple hours volunteering in both middle school and high school math classes. I have passed the WyzAnt Algebra 2 quiz
and am applying to grad schools to be a math teacher full time. In my personal studies I have completed math classes through calculus two.
4 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I have taken statistics classes while working on my math degree at University of Hawaii at Manoa. I had some experience tutoring college level stats while working at Chaminade University as
well. Since I started tutoring with WyzAnt, I have had many students taking stats at various colleges.
20 Subjects: including prealgebra, reading, calculus, geometry
...Some traits are innate, and those that must be learned need to access these. Study skills are the means to an end. They are a process that involves knowing where you want to go and why, and
developing a system to help you get there.
19 Subjects: including prealgebra, reading, writing, geometry
Medina, WA Trigonometry Tutors | {"url":"http://www.purplemath.com/Medina_WA_prealgebra_tutors.php","timestamp":"2014-04-21T05:14:54Z","content_type":null,"content_length":"24074","record_id":"<urn:uuid:4f726abe-2098-48a3-a323-427b5cd1456a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Maxwell's Demon revisited
Ken G
No, that is exactly what cannot be true, in any theory of mechanics exhibited by large systems. That's pretty much the whole point of thermodynamics! Again, "disorder" simply means "more ways of
being", which means "more likely", and that's the second law in a nutshell. The sole assumption is that you can just count the ways of being (the number of configurations)-- this is the crux of
statistical mechanics, that every individual state is equally likely. That's the only assumption behind the second law, and if it weren't true, it would only mean that we would need to have a more
sophisticated concept of what entropy is, beyond just ln(N).
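The counting argument here ("more ways of being" means "more likely") can be illustrated with a toy simulation, not from the original thread: start all N particles in one box and let a uniformly random particle hop each step. The system drifts toward the 50/50 macrostate simply because it has the most microstates, ln C(N, k).

```python
# Ehrenfest-urn style toy: entropy (ln of microstate count) rises on its own.
import math, random

random.seed(0)
N, steps = 100, 2000
left = N                          # fully "ordered" start: all in box A
for _ in range(steps):
    if random.randrange(N) < left:
        left -= 1                 # the chosen particle was in A; hops to B
    else:
        left += 1                 # it was in B; hops to A

def ln_states(k):                 # ln C(N, k), via log-gamma for stability
    return math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)

# ln C(100, 100) = ln 1 = 0 for the start; much larger near the end
print(left, ln_states(N), ln_states(left))
```

No dynamical law beyond "pick a particle at random" is used, which is exactly the point being made about the second law's independence from the details of mechanics.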
Are you saying that it is literally impossible to have laws of physics in which all the particles work together to produce a particular ordered state?
And that is what is not true. Newton's laws are about the details, thermodynamics is what you can do without anything like Newton's laws. That's why the main principles of thermodynamics were
discovered independently of Newton's laws (like the work of Carnot and Clausius), and sometimes even prior to them (like Boyle's law).
Sure, just like Kepler's laws were discovered before Newton's law of gravitation and the Balmer series was discovered before the Schrödinger equation. Phenomena of nature can be discovered
independently even if they derive theoretically from a common source.
Right, with no reference to any mechanism or mechanics of the Demon. This is crucial-- the mechanics only serve as informative examples of the second law, they are not part of the derivation of it.
The derivation proceeds along the lines I gave above, and with no mention of any laws of mechanics.
I was envisioning a different sort of procedure. I'm suggesting doing the statistical mechanics derivation of the second law of thermodynamics from Newton's laws of motion, as outlined in the Feynman
lectures and fleshed out by Boltzmann, but restricting the proof to the case where you have a Maxwell's demon with unspecified mechanism. So the rest of the scenario will be analyzed according to
mechanics, it is only the demon that is a black box.
If you set F=mv instead of ma, as the ancients imagined, you still get the second law of thermodynamics, without any difference. Indeed, this is the second law in highly dissipative situations, and
it's still just thermodynamics.
I don't think this is too surprising; (this part of) Aristotelian physics is just Newtonian physics in the limit of strongly dissipative forces. | {"url":"http://www.physicsforums.com/showpost.php?p=3789698&postcount=53","timestamp":"2014-04-20T23:37:02Z","content_type":null,"content_length":"10642","record_id":"<urn:uuid:1d94c363-4f69-49b3-bf06-c0d7f3b363a0>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Segmentation fault
David Cournapeau david@ar.media.kyoto-u.ac...
Thu Apr 17 04:59:24 CDT 2008
Anne-Sophie Sertier wrote:
> Hello !
> I'm a new user of Numpy. I want to use it to work with matrices (very
> huge matrices), select column, make product, etc ...
> Configuration : Debian, python 2.4.4 and numpy 1.0.1
> So here is my problem:
> I initialize a matrix which will contain only 0 and 1
> >>> from numpy import *
> >>> matCons = zeros((194844,267595), dtype=int8)
Unless you are executing this on a gigantic computer, this won't work
very well: you are asking to create an array which has ~ 2e5^2 elements,
that is around 40 Gb.
There is a bug, but the bug happens at the above line: the zeros call
did not fail whereas it should have. It is likely caused because the
number of elements cannot fit into a 32 bits integers, which means it
import numpy as np
n , m = 1e5,1e5
a = np.zeros((n, m), np.int8)
assert a.size == n * m
Will raise an assertion error (n * m is just big enough to not fit into
a 32 bits integer in this case).
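The exact C-level behaviour is an assumption here, but the suspected failure mode can be sketched in pure Python: the true element count does not fit in a signed 32-bit integer, so a 32-bit size computation wraps around to a much smaller value that looks plausible, letting the allocation "succeed" before the later segfault.

```python
# Sketch of 32-bit wraparound for the element count in the report above.
n, m = 194844, 267595
total = n * m                       # true element count; needs more than 32 bits
wrapped = total & 0xFFFFFFFF        # keep only the low 32 bits
if wrapped >= 2 ** 31:
    wrapped -= 2 ** 32              # reinterpret the bit pattern as signed int32
print(total, wrapped)               # wrapped is wrong but (here) still positive
```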
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-April/032848.html","timestamp":"2014-04-17T01:48:03Z","content_type":null,"content_length":"3813","record_id":"<urn:uuid:8988448f-2b4b-480b-9d7d-e2a099e8c0cc>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00388-ip-10-147-4-33.ec2.internal.warc.gz"} |
Embarrassing, but
June 4th 2012, 11:35 PM #1
Embarrassing, but
Hi everyone. I was tutoring a student today, and we were both stumped on this question (well, I could do it, but I had to use non-right-angle trigonometry, and this was in a right-angle
trigonometry test).
A person walks from school 1.2 km on a bearing of 265 degrees true to a milk bar, then walks the 800 m from the milkbar to home on a bearing of 120 degrees true. How far south of the school is
home? How far west of the school is home? Hence, what is the distance between school and home?
The last part is obvious, you can use Pythagoras once you have the distances south and west. But how can you get those distances using only right angle trig?
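For reference, the components can be computed leg by leg using only right-angle trigonometry: each leg forms a right triangle against the north-south line, so east = d·sin(bearing) and north = d·cos(bearing) are just the ratios for that triangle's acute angle.

```python
# Per-leg east/north components; bearings measured clockwise from north.
import math

def leg(d_km, bearing_deg):
    b = math.radians(bearing_deg)
    return d_km * math.sin(b), d_km * math.cos(b)   # (east, north)

e1, n1 = leg(1.2, 265)   # school -> milk bar
e2, n2 = leg(0.8, 120)   # milk bar -> home
west = -(e1 + e2)        # about 0.503 km west of the school
south = -(n1 + n2)       # about 0.505 km south of the school
dist = math.hypot(west, south)   # about 0.712 km, by Pythagoras
print(round(west, 3), round(south, 3), round(dist, 3))
```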
Re: Embarrassing, but
Hi Prove It. You need to note that the 265 degree bearing in the first part equates to a right-angled triangle with an acute angle of 5 degrees below the horizontal and similarly for the second
part, the 120 degree bearing gives you a right-angled triangle with an acute angle of 30 degrees below the horizontal.
Re: Embarrassing, but
Hi Prove It,
Your embarrassed but I'm confused. What exactly do you mean by right angle trig ?
Are you saying that we are not allowed to say, for example, that the milk bar is $1.2\cos(5^{\circ})km$ south of the school ?
Re: Embarrassing, but
I mean that you can only use the trigonometric ratios for right angle triangles and Pythagoras' Theorem. You can't use things like the sine and cosine rules (for non-right-angle triangles).
Thanks for your response. That's actually what I did, but still couldn't get enough information.
Re: Embarrassing, but
Thanks for your responses, I now understand.
June 5th 2012, 02:32 AM #5 | {"url":"http://mathhelpforum.com/trigonometry/199659-embarrassing-but.html","timestamp":"2014-04-17T15:14:13Z","content_type":null,"content_length":"44801","record_id":"<urn:uuid:3e1d0cdc-4a15-41de-9389-8f2c1ce052ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
- math-history-list
Discussion: math-history-list
Discussion on the history of mathematics, including announcements of meetings, new books and articles; discussion of the teaching of the history of math; and questions that you would like answered.
The Mathematical Association of America operated this moderated list from 1995 until 2009. | {"url":"http://mathforum.org/kb/forum.jspa?forumID=193&start=120","timestamp":"2014-04-20T13:33:56Z","content_type":null,"content_length":"38796","record_id":"<urn:uuid:5fc17c0e-cdd2-4472-b35e-3ad669c43b7f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00303-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Proposed Roadmap Overview
Dag Sverre Seljebotn d.s.seljebotn@astro.uio...
Mon Feb 20 11:18:59 CST 2012
On 02/20/2012 08:55 AM, Sturla Molden wrote:
> On 20.02.2012 17:42, Sturla Molden wrote:
>> There are still other options than C or C++ that are worth considering.
>> One would be to write NumPy in Python. E.g. we could use LLVM as a
>> JIT-compiler and produce the performance critical code we need on the fly.
> LLVM and its C/C++ frontend Clang are BSD licenced. It compiles faster
> than GCC and often produces better machine code. They can therefore be
> used inside an array library. It would give a faster NumPy, and we could
> keep most of it in Python.
I think it is moot to focus on improving NumPy performance as long as in
practice all NumPy operations are memory bound due to the need to take a
trip through system memory for almost any operation. C/C++ is simply
"good enough". JIT is when you're chasing a 2x improvement or so, but
today NumPy can be 10-20x slower than a Cython loop.
You need at least a slightly different Python API to get anywhere, so
numexpr/Theano is the right place to work on an implementation of this
idea. Of course it would be nice if numexpr/Theano offered something as
convenient as
with lazy:
arr = A + B + C # with all of these NumPy arrays
# compute upon exiting...
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-February/060847.html","timestamp":"2014-04-18T07:24:23Z","content_type":null,"content_length":"3889","record_id":"<urn:uuid:34048d5a-2f9a-4a93-b814-28bf35f79baa>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Variance Shadow Mapping
Purely Functional Rendering Engine
Variance Shadow Mapping
Posted by on October 14, 2012
One of the test examples in the repository is an implementation of Variance Shadow Mapping, which lets us demonstrate multi-pass rendering through a relatively simple effect. In this post we’ll first
have a look at the runnable EDSL implementation, discussing some of our syntactic woes on the way.
Basic shadow mapping
The Wikipedia article on shadow mapping gives a good overview of the general idea: first we render the scene from the point of view of the light source (using perspective projection for point or spot
lights, and orthographic projection for directional lights), recording the distances of various surfaces from the light, then use this information as a look-up table in a second pass when we render
the scene in the space of the main camera.
All we need to do to make shadow mapping work is to think carefully about the transformation matrices involved. The transformation from object local coordinates to screen coordinates is normally
decomposed into three phases:
1. model: object-local to world space (global coordinates);
2. view: world to camera space (camera-local coordinates);
3. projection: camera to screen space (coordinates after perspective projection and normalisation).
The model matrix is potentially different for every object, while view and projection are global to the rendering pass, so they can be composed before rendering. Consequently, we are going to use the
following transformation matrices:
• modelMatrix: per-object model matrix;
• lightMatrix: view-projection matrix for the light;
• cameraMatrix: view-projection matrix for the camera.
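For concreteness, here is a tiny plain-Python sketch of how these matrices compose (the 4x4 row-major representation and the translation helper are my own illustration; only the matrix names mirror the post's uniforms):

```python
# A vertex reaches light space as lightMatrix * modelMatrix * position.
def mat_mul(a, b):
    # 4x4 matrix product, row-major nested lists
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    # matrix * column vector
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

model_matrix = translation(2, 0, 0)    # object placed at x = 2 in the world
light_matrix = translation(0, 0, -5)   # stand-in for the light's view-projection

local_pos = [0, 0, 0, 1]               # homogeneous w = 1, like v3v4
light_pos = transform(mat_mul(light_matrix, model_matrix), local_pos)
print(light_pos)                       # [2, 0, -5, 1]
```

Since the view and projection parts are global to a pass, they can be pre-multiplied once on the CPU; only the model matrix varies per object.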
The first step is to record the distances of the closest surfaces from the light source. Whether the light is directional or point light is irrelevant; all we need is the light matrix to be able to
transform the vertices as desired. This simple depth pass is handled by the following definition:
depth :: Exp Obj (FrameBuffer N1 (Float, Float))
depth = Accumulate accCtx PassAll frag (Rasterize triangleCtx prims) clearBuf
  where
    accCtx = AccumulationContext Nothing (DepthOp Less True :. ColorOp NoBlending True :. ZT)
    clearBuf = FrameBuffer (DepthImage n1 1000 :. ColorImage n1 0 :. ZT)
    prims = Transform vert (Fetch "geometrySlot" Triangle (IV3F "position"))

    lightMatrix = Uni (IM44F "lightMatrix")
    modelMatrix = Uni (IM44F "modelMatrix")

    vert :: Exp V V3F -> VertexOut Float
    vert pos = VertexOut lightPos (floatV 1) (Smooth depth :. ZT)
      where
        lightPos = lightMatrix @*. modelMatrix @*. v3v4 pos
        V4 _ _ depth _ = unpack' lightPos

    frag :: Exp F Float -> FragmentOut (Depth Float :+: Color Float :+: ZZ)
    frag depth = FragmentOutRastDepth (depth :. ZT)
Due to the lack of dedicated syntax, the notation is somewhat heavier than the DSL equivalent would be. The Exp type constructor corresponds to the @ operator; its first argument is the frequency, while the second is the payload type. For technical reasons, we also need our own non-flat tuple representation, which is constructed with the :. operator (:+: in the type system). Here, depth is a one-layer framebuffer that stores two floating-point numbers per pixel; in this particular case both happen to be the same: the distance from the light. The first is the depth generated by the rasteriser, and the second is the output of the fragment shader. The latter value will change later when we calculate variance.
The top level of the definition starts with the last stage of the pipeline, the accumulation step. This is where the fragment shader is applied to the output of the rasteriser, i.e. the fragment
stream. The accumulation context describes what happens to each fragment produced by the shader: they are only kept if their depth is less than that in the framebuffer, the new depth is written to
the buffer, and the new colour simply overwrites the old one without blending. There is no additional fragment filter, which is expressed with the PassAll constant (equivalent to providing a
constantly true function). Before accumulation, the framebuffer is cleared by setting the raster depth to 1000 and the ‘colour’ to 0 in each pixel.
As for the fragment stream, it is obtained by rasterising the primitive stream, which consists of triangles. There is only one vertex attribute, the position. Everything else is irrelevant for this
pass, since we don’t need to calculate anything besides the actual shapes. Geometry can be pushed into the pipeline through named slots, and this is the point where we can define the name.
The vertex shader transforms the position into light-space coordinates by going through the world coordinate system first. Our infix operators are all prefixed with @ to avoid name collisions with the operators defined in the standard prelude; we intend to drop this prefix in the DSL. The v3v4 function simply extends a 3D vector with the homogeneous coordinate w=1. For the payload, the shader simply emits the z coordinate of the light-space position. We don't have a convenient interface for swizzling at the moment: vector components can be extracted by pattern matching on the result of the unpack' function (all the primitive functions have an apostrophe in their name to avoid collision with the prelude). Also, the definition of uniforms makes it quite apparent that we are basically building a raw AST.
In order to make use of the depth information, we need to convert it into a sampler, which will be referenced in the second, final pass:
shadowMapSize :: Num a => a
shadowMapSize = 512
sm :: Exp Obj (FrameBuffer N1 (Float, V4F))
sm = Accumulate accCtx PassAll frag (Rasterize triangleCtx prims) clearBuf
  where
    accCtx = AccumulationContext Nothing (DepthOp Less True :. ColorOp NoBlending (one' :: V4B) :. ZT)
    clearBuf = FrameBuffer (DepthImage n1 1000 :. ColorImage n1 (V4 0.1 0.2 0.6 1) :. ZT)
    prims = Transform vert (Fetch "geometrySlot" Triangle (IV3F "position", IV3F "normal"))

    cameraMatrix = Uni (IM44F "cameraMatrix")
    lightMatrix = Uni (IM44F "lightMatrix")
    modelMatrix = Uni (IM44F "modelMatrix")
    lightPosition = Uni (IV3F "lightPosition")

    vert :: Exp V (V3F, V3F) -> VertexOut (V3F, V4F, V3F)
    vert attr = VertexOut viewPos (floatV 1) (Smooth (v4v3 worldPos) :. Smooth lightPos :. Smooth worldNormal :. ZT)
      where
        worldPos = modelMatrix @*. v3v4 localPos
        viewPos = cameraMatrix @*. worldPos
        lightPos = lightMatrix @*. worldPos
        worldNormal = normalize' (v4v3 (modelMatrix @*. n3v4 localNormal))
        (localPos, localNormal) = untup2 attr

    frag :: Exp F (V3F, V4F, V3F) -> FragmentOut (Depth Float :+: Color V4F :+: ZZ)
    frag attr = FragmentOutRastDepth (luminance :. ZT)
      where
        V4 lightU lightV lightDepth lightW = unpack' lightPos
        uv = clampUV (scaleUV (pack' (V2 lightU lightV) @/ lightW))
        surfaceDistance = texture' sampler uv
        lightPortion = Cond (lightDepth @<= surfaceDistance @+ floatF 0.01) (floatF 1) (floatF 0)
        lambert = max' (floatF 0) (dot' worldNormal (normalize' (lightPosition @- worldPos)))
        intensity = lambert @* lightPortion
        luminance = pack' (V4 intensity intensity intensity (floatF 1))
        clampUV x = clamp' x (floatF 0) (floatF 1)
        scaleUV x = x @* floatF 0.5 @+ floatF 0.5
        (worldPos, lightPos, worldNormal) = untup3 attr

    sampler = Sampler PointFilter Clamp shadowMap

    shadowMap :: Texture (Exp Obj) DIM2 SingleTex (Regular Float) Red
    shadowMap = Texture (Texture2D (Float Red) n1) (V2 shadowMapSize shadowMapSize) NoMip [PrjFrameBuffer "shadowMap" tix0 depth]
The definition of the sampler is right at the bottom. First we convert the framebuffer yielded by depth into an image using PrjFrameBuffer, which projects a given member of a tuple during the
conversion. The predefined value tix0 is the tuple index of the first element. In this case, our ‘tuple’ is degenerate, since it consists of a single element anyway. The resulting image is converted
into a two-dimensional texture (shadowMap) with just a floating-point red channel. Finally, the texture is wrapped in a sampler structure (sampler), which specifies that it’s a non-repeating image
that must not be smoothened during sampling, since that wouldn’t be meaningful for depth values at the edges of objects.
The pipeline setup is very similar to that of the depth pass, and most of the difference is in the shaders. This is a more complex case, where we have tuples both in the vertex and the fragment
stream. Again, for technical reasons we need to unpack these representations with a dedicated function (untup*) and pattern matching, just like vectors. We could also use view patterns to make this
extra step a bit less painful, but in the end all this won’t be necessary in the DSL.
As for shadows, the work is divided up between the two shader phases. The vertex shader calculates the vectors needed: view position (used by the rasteriser), world space position, light space
position and world space surface normal. We cheated a bit with the normal calculation, since we don’t use the inverse transpose of the matrix. This is fine as long as our transformations don’t
involve any non-uniform scaling, or we’re only scaling axis-aligned cuboids. The n3v4 function extends a 3D vector with w=0, so it is treated as a direction.
The fragment shader calculates the final colour of each pixel. The light space position is used to address the shadow map (sampler) as well as to quickly determine the distance to the light source
without calculating an extra square root. The value of lightPortion is 0 if there is an occluder between the current point and the light source, 1 otherwise. To avoid self-shadowing, a little offset
is applied to the stored depth of the closest surface (this could have been done in the first pass as well). Afterwards, we calculate the light contribution using the Lambert model, i.e. taking the
cosine of the angle between the surface normal and the vector that points towards the light. Multiplying this value with the light proportion gives us an image like this:
We can clearly see that the chosen depth offset is not sufficient, since the background plane still suffers from self-shadowing.
In the actual example, we use a spotlight instead of a point light that radiates in every direction. This effect is simply achieved by calculating a colour that depends on the shadow UV coordinates.
We can change the definition of intensity and luminance to get a better idea of the light’s direction:
uv' = uv @- floatF 0.5
spotShape = floatF 1 @- length' uv' @* floatF 4
intensity = max' (floatF 0) (spotShape @* lambert)
V2 spotR spotG = unpack' (scaleUV (round' (uv' @* floatF 10)) @* intensity)
luminance = pack' (V4 spotR spotG intensity (floatF 1)) @* lightPortion
The resulting image is maybe a bit more interesting than the previous one:
Variance shadow mapping
There are several issues with basic shadow mapping. Aliased shadow edges are only the tip of the iceberg; nowadays they can be easily fixed by using shadow samplers, which can provide bilinear
filtering on the light proportion instead of just a binary comparison. However, if we want softer shadows, life gets a lot more complicated. The obvious solution is to take several samples of the
shadow map and average the results, but this entails a potentially severe performance hit. We can also get away with a single sample if we apply a jitter to the UV coordinates, but this results in a
noisy pattern instead of a smooth transition.
Variance shadow mapping makes it possible to get a proper smoothing effect with a single sample from the shadow map. The basic idea is to store a probability distribution instead of an exact
function, and estimate the probability of a pixel being occluded instead of performing an exact test. The VSM algorithm uses Chebyshev’s inequality for the estimation. Since our shadow map is now a
probability distribution, we can directly blur it with a Gaussian filter and get meaningful results. Another nice side effect is that the new formula also addresses the problem of self-shadowing, and
provides a robust solution in place of the rather brittle and scene dependent offset hack.
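Numerically, the estimate works like this (a plain-Python sketch with invented sample values; the function name is mine, but the final formula mirrors the lightProbMax expression used in the shader below):

```python
def light_prob_max(moment1, moment2, light_depth, min_variance=0.002):
    # Chebyshev's one-sided inequality gives an upper bound on the
    # probability that the stored depth is >= light_depth, i.e. that
    # the pixel is lit.
    variance = max(min_variance, moment2 - moment1 * moment1)
    if light_depth <= moment1:
        return 1.0          # receiver is in front of the mean occluder depth
    d = light_depth - moment1
    return variance / (variance + d * d)

# a filtered shadow-map texel averaging an occluder (5.0) and the floor (9.0)
depths = [5.0, 5.0, 9.0, 9.0]
m1 = sum(depths) / len(depths)                 # first moment,  E[d]   = 7.0
m2 = sum(d * d for d in depths) / len(depths)  # second moment, E[d^2] = 53.0

print(light_prob_max(m1, m2, light_depth=9.0))  # 0.5 -> soft penumbra value
print(light_prob_max(m1, m2, light_depth=5.0))  # 1.0 -> fully lit
```

Note how the blurred mixture of two depths yields an intermediate value instead of a hard 0/1 test, which is exactly what produces the soft transition, and why the minimum-variance clamp also absorbs the self-shadowing bias.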
In the first pass, we store the first two moments of the depth distribution:
moments :: Exp Obj (FrameBuffer N1 (Float, V2F))
moments = Accumulate accCtx PassAll frag (Rasterize triangleCtx prims) clearBuf
  where
    accCtx = AccumulationContext Nothing (DepthOp Less True :. ColorOp NoBlending (one' :: V2B) :. ZT)
    clearBuf = FrameBuffer (DepthImage n1 1000 :. ColorImage n1 (V2 0 0) :. ZT)
    prims = Transform vert (Fetch "geometrySlot" Triangle (IV3F "position"))

    lightMatrix = Uni (IM44F "lightMatrix")
    modelMatrix = Uni (IM44F "modelMatrix")

    vert :: Exp V V3F -> VertexOut Float
    vert pos = VertexOut lightPos (floatV 1) (Smooth depth :. ZT)
      where
        lightPos = lightMatrix @*. modelMatrix @*. v3v4 pos
        V4 _ _ depth _ = unpack' lightPos

    frag :: Exp F Float -> FragmentOut (Depth Float :+: Color V2F :+: ZZ)
    frag depth = FragmentOutRastDepth (pack' (V2 moment1 moment2) :. ZT)
      where
        dx = dFdx' depth
        dy = dFdy' depth
        moment1 = depth
        moment2 = depth @* depth @+ floatF 0.25 @* (dx @* dx @+ dy @* dy)
The difference between moments and depth is the type (a two-dimensional float vector instead of a single float per pixel) and the fragment shader, which calculates two moments of the distribution. The second pass (vsm) is also similar to the basic shadow mapping case; the only thing that changes is the formula for the light portion, which now becomes the maximum probability that the surface is lit:
V2 moment1 moment2 = unpack' (texture' sampler uv)
variance = max' (floatF 0.002) (moment2 @- moment1 @* moment1)
distance = max' (floatF 0) (lightDepth @- moment1)
lightProbMax = variance @/ (variance @+ distance @* distance)
The other thing that changes slightly is the definition of the sampler:
sampler = Sampler LinearFilter Clamp shadowMap
shadowMap :: Texture (Exp Obj) DIM2 SingleTex (Regular Float) RG
shadowMap = Texture (Texture2D (Float RG) n1) (V2 shadowMapSize shadowMapSize) NoMip [PrjFrameBuffer "shadowMap" tix0 moments]
Unlike the previous one, this texture has a green component as well to store the second moment, and the sampler is set up to perform linear filtering. Using the value of lightProbMax directly as the
light portion is a good first approximation, but it leads to light bleeding, a well-known problem with VSM:
Before addressing the bleeding issue, we should first take advantage of the fact that the shadow map can be filtered. We are going to insert an extra pair of passes between moments and vsm that blurs
the shadow map. It is a pair of passes because we exploit the separability of the Gaussian filter, so first we blur the image vertically, then horizontally, thereby doing O(n) work per pixel instead
of O(n^2), where n is the width of the filter. The blur is described by the following function:
blur :: [(Float, Float)] -> Exp Obj (Image N1 V2F) -> Exp Obj (FrameBuffer N1 V2F)
blur coefficients img = filter1D dirH (PrjFrameBuffer "" tix0 (filter1D dirV img))
  where
    dirH v = Const (V2 (v / shadowMapSize) 0) :: Exp F V2F
    dirV v = Const (V2 0 (v / shadowMapSize)) :: Exp F V2F

    filter1D :: (Float -> Exp F V2F) -> Exp Obj (Image N1 V2F) -> Exp Obj (FrameBuffer N1 V2F)
    filter1D dir img = Accumulate accCtx PassAll frag (Rasterize triangleCtx prims) clearBuf
      where
        accCtx = AccumulationContext Nothing (ColorOp NoBlending (one' :: V2B) :. ZT)
        clearBuf = FrameBuffer (ColorImage n1 (V2 0 0) :. ZT)
        prims = Transform vert (Fetch "postSlot" Triangle (IV2F "position"))

        vert :: Exp V V2F -> VertexOut V2F
        vert uv = VertexOut pos (Const 1) (NoPerspective uv' :. ZT)
          where
            uv' = uv @* floatV 0.5 @+ floatV 0.5
            pos = pack' (V4 u v (floatV 1) (floatV 1))
            V2 u v = unpack' uv

        frag :: Exp F V2F -> FragmentOut (Color V2F :+: ZZ)
        frag uv = FragmentOut (sample :. ZT)
          where
            sample = foldr1 (@+) [texture' smp (uv @+ dir ofs) @* floatF coeff | (ofs, coeff) <- coefficients]
            smp = Sampler LinearFilter Clamp tex
            tex = Texture (Texture2D (Float RG) n1) (V2 shadowMapSize shadowMapSize) NoMip [img]
The blur function takes a set of coefficient-offset pairs and an input image, and yields a framebuffer containing the filtered version of the image. This shows the power of our approach: since this
is an ordinary pure function, it can be applied to any image in any pipeline. In fact, it could be part of a standard library of utility functions after factoring out the resolution as a separate
argument instead of using our global constant shadowMapSize. The only notable novelty here is the use of Haskell for meta-programming: the amount of shader code generated is proportional to the
length of the coefficient list, since the summation is expanded statically. We need to find a convenient substitute for this facility when migrating to the DSL.
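The separability claim is easy to verify on a toy image (a plain-Python sketch of my own, independent of the EDSL): two 1D passes with clamped borders reproduce exactly one pass with the full outer-product 2D kernel, while doing O(n) instead of O(n^2) kernel taps per pixel.

```python
def blur_1d(row, kernel):
    # 1D convolution with clamped (edge-repeating) borders
    r, n = len(kernel) // 2, len(row)
    return [sum(kernel[k] * row[min(max(i + k - r, 0), n - 1)]
                for k in range(len(kernel)))
            for i in range(n)]

def blur_rows(img, kernel):
    return [blur_1d(row, kernel) for row in img]

def transpose(img):
    return [list(col) for col in zip(*img)]

def separable_blur(img, kernel):
    # horizontal pass, then vertical pass: O(n) kernel taps per pixel
    return transpose(blur_rows(transpose(blur_rows(img, kernel)), kernel))

def full_blur(img, kernel):
    # one pass with the outer-product 2D kernel: O(n^2) taps per pixel
    h, w, r = len(img), len(img[0]), len(kernel) // 2
    return [[sum(kernel[ky] * kernel[kx]
                 * img[min(max(y + ky - r, 0), h - 1)][min(max(x + kx - r, 0), w - 1)]
                 for ky in range(len(kernel)) for kx in range(len(kernel)))
             for x in range(w)]
            for y in range(h)]

kernel = [0.25, 0.5, 0.25]
img = [[0, 0, 0], [0, 16, 0], [0, 0, 0]]
print(separable_blur(img, kernel))  # [[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 1.0]]
print(separable_blur(img, kernel) == full_blur(img, kernel))  # True
```

The equality holds because the Gaussian (and the binomial approximation above) factors into an outer product of two 1D kernels; an arbitrary 2D kernel would not separate this way.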
To insert the blur, we need to change the definition of the sampler in vsm:
sampler = Sampler LinearFilter Clamp shadowMapBlur
shadowMapBlur :: Texture (Exp Obj) DIM2 SingleTex (Regular Float) RG
shadowMapBlur = Texture (Texture2D (Float RG) n1) (V2 shadowMapSize shadowMapSize) NoMip [PrjFrameBuffer "shadowMap" tix0 blurredMoments]
blurredMoments = blur blurCoefficients (PrjFrameBuffer "blur" tix0 moments)
blurCoefficients = [(-4.0, 0.05), (-3.0, 0.09), (-2.0, 0.12), (-1.0, 0.15), (0.0, 0.16), (1.0, 0.15), (2.0, 0.12), (3.0, 0.09), (4.0, 0.05)]
After this change, our shadows change drastically:
As it turns out, blurring also helps against light bleeding to a certain extent, since light bleeding appears in areas where the variance is big. However, it is still quite obvious. There are several
ways to address the problem, and we chose the simplest for the sake of the example: raising the light portion to a power. The higher the exponent, the less the light bleeding, but unfortunately
increasing the exponent also causes overdarkening. In a way, this is a hack in a similar vein as the depth offset is for simple shadow mapping. In the example, we use the square of lightProbMax as
the light portion, which gives us nice soft shadows:
We recommend using this example as a starting point for your own experiments, as it has no external dependencies.
{"url":"http://lambdacube3d.wordpress.com/2012/10/14/variance-shadow-mapping/","timestamp":"2014-04-16T04:11:39Z","content_type":null,"content_length":"90441","record_id":"<urn:uuid:ff1c60c8-6fd9-46bc-a0a8-e3704fbc615f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Generic stencil based convolutions.
If your stencil fits within a 7x7 tile and is known at compile-time, then using the built-in stencil support provided by the main Repa package will be 5-10x faster.
If you have a larger stencil, the coefficients are not statically known, or need more complex boundary handling than provided by the built-in functions, then use this version instead.
Arbitrary boundary handling
:: (Num a, Unbox a, Monad m)
=> (DIM2 -> a)        -- ^ Function to get border elements when the stencil does not apply.
-> Array U DIM2 a     -- ^ Stencil to use in the convolution.
-> Array U DIM2 a     -- ^ Input image.
-> m (Array U DIM2 a)
Image-kernel convolution, which takes a function specifying what value to return when the kernel doesn't apply.
Specialised boundary handling
type GetOut a
    =  (DIM2 -> a)    -- ^ The original get function.
    -> DIM2           -- ^ The shape of the image.
    -> DIM2           -- ^ Index of the element we were trying to get.
    -> a

A function that gets out-of-range elements from an image.
outClamp :: GetOut a

If the requested element is out of range, use the closest one from the real image.
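As a concrete picture of the clamping semantics, here is a small pure-Python sketch of image-kernel convolution with clamp-style boundary handling (my own illustration of the behaviour, not repa's parallel implementation):

```python
def clamp(i, lo, hi):
    return max(lo, min(hi, i))

def convolve_clamped(kernel, image):
    # out-of-range reads are redirected to the nearest in-range pixel
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    oy, ox = kh // 2, kw // 2
    out = [[0] * iw for _ in range(ih)]
    for y in range(ih):
        for x in range(iw):
            acc = 0
            for ky in range(kh):
                for kx in range(kw):
                    sy = clamp(y + ky - oy, 0, ih - 1)
                    sx = clamp(x + kx - ox, 0, iw - 1)
                    acc += kernel[ky][kx] * image[sy][sx]
            out[y][x] = acc
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
box = [[1, 1, 1],
       [1, 1, 1],
       [1, 1, 1]]
print(convolve_clamped(box, image))
# [[21, 27, 33], [39, 45, 51], [57, 63, 69]]
```

At the centre pixel every tap lands in range (9 * 5 = 45 on average, 45 total with the box kernel); at the corners several taps fall back to the nearest edge pixel, which is why the corner sums differ from a zero-padded convolution.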
:: (Num a, Unbox a, Monad m)
=> GetOut a           -- ^ How to handle out-of-range elements.
-> Array U DIM2 a     -- ^ Stencil to use in the convolution.
-> Array U DIM2 a     -- ^ Input image.
-> m (Array U DIM2 a)
Image-kernel convolution, which takes a function specifying what value to use for out-of-range elements. | {"url":"http://hackage.haskell.org/package/repa-algorithms-3.2.2.3/docs/Data-Array-Repa-Algorithms-Convolve.html","timestamp":"2014-04-20T00:47:16Z","content_type":null,"content_length":"11572","record_id":"<urn:uuid:02af08e3-bf5d-4896-aebe-567a08be58e1>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
Addition                          interval sets   interval maps   element sets   element maps
T& T::add(const P&)               e i             b p                            b
T& add(T&, const P&)              e i             b p             e              b
T& T::add(J pos, const P&)        i               p                              b
T& add(T&, J pos, const P&)       i               p               e              b
T& operator +=(T&, const P&)      e i S           b p M           e s            b m
T operator + (T, const P&)        e i S           b p M           e s            b m
T operator + (const P&, T)        e i S           b p M           e s            b m
T& operator |=(T&, const P&)      e i S           b p M           e s            b m
T operator | (T, const P&)        e i S           b p M           e s            b m
T operator | (const P&, T)        e i S           b p M           e s            b m
Functions and operators that implement Addition on icl objects are given in the table above. operator |= and operator | are behaviorally identical to operator += and operator +. This is a redundancy that has been introduced deliberately, because set union semantics is often attached to the operators |= and |.
Description of Addition

Sets  Addition on Sets implements set union.

Maps  Addition on Maps implements a map union function similar to set union. If, on insertion of an element value pair (k,v), its key k is in the map already, the addition function is propagated to the associated value. This functionality has been introduced as aggregate on collision for element maps and aggregate on overlap for interval maps.

Find more on the addability of maps and related semantic issues by following the links.
Examples demonstrating Addition on interval containers are the overlap counter, party, and party's height average.
For Sets, addition and insertion are implemented identically: functions add and insert collapse to the same function. For Maps, addition and insertion work differently: function add performs aggregation on collision or overlap, while function insert only inserts value pairs whose keys are not yet in the map.
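The distinction can be sketched with plain Python dicts standing in for icl::map (the function names are mine, and the aggregation here is just numeric +, whereas the icl propagates whatever combining behaviour the map was instantiated with):

```python
def map_add(m, key, value):
    # add: aggregate on collision -- combine with the existing value
    m[key] = m.get(key, 0) + value
    return m

def map_insert(m, key, value):
    # insert: only takes effect if the key is absent
    m.setdefault(key, value)
    return m

m = {"a": 1}
map_add(m, "a", 2)        # collision: values aggregate -> {"a": 3}
map_insert(m, "a", 99)    # key exists: no effect       -> {"a": 3}
map_insert(m, "b", 7)     # new key: inserted           -> {"a": 3, "b": 7}
print(m)
```

For interval maps the same idea applies segment-wise: an added interval value pair is split against the existing segments, and the value function is applied on every overlapping part.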
The admissible combinations of types for member function T& T::add(const P&) can be summarized in the overload table below:
// overload table for        T\P | e  i  b  p
// T& T::add(const P&)       ----+-----------
// T& add(T&, const P&)       s  | s
//                            m  |       m
//                            S  | S  S
//                            M  |       M  M
The next table contains complexity characteristics for add.
Table 1.21. Time Complexity for member function add on icl containers
Function T& T::add(T::iterator prior, const P& addend) allows for an addition in constant time, if addend can be inserted right after iterator prior without collision. If this is not possible the
complexity characteristics are as stated for the non hinted addition above. Hinted addition is available for these combinations of types:
// overload table for addition with hint     T\P | e  i  b  p
// T& T::add(T::iterator prior, const P&)    ----+-----------
// T& add(T&, T::iterator prior, const P&)    s  | s
//                                            m  |       m
//                                            S  |    S
//                                            M  |          M
The possible overloads of the inplace T& operator += (T&, const P&) are given by two tables that show admissible combinations of types. Row types show instantiations of argument type T; column types show instantiations of argument type P. If a combination of argument types is possible, the related table cell contains the result type of the operation. Placeholders e i b p s S m M will be used to denote elements, intervals, element value pairs, interval value pairs, element sets, interval sets, element maps and interval maps. The first table shows the overloads of += for element containers; the second table refers to interval containers.
// overload tables for             element containers:        interval containers:
// T& operator += (T&, const P&)   += | e  b  s  m            += | e  i  b  p  S  M
                                   ---+------------           ---+------------------
                                    s | s     s                S | S  S        S
                                    m |    m     m             M |       M  M     M
For the definition of admissible overloads we separate element containers from interval containers. Within each group, an operation supports all combinations of types that are in line with the icl's design and with the sets of laws that establish the icl's semantics.
Overloads between element containers and interval containers could also be defined, but this has not been done for pragmatic reasons: each additional combination of types for an operation enlarges the space of possible overloads. This makes overload resolution by compilers more complex and error prone, and slows down compilation. Error messages for unresolvable or ambiguous overloads are difficult to read and understand. Therefore overloading of namespace-global functions in the icl is limited to the reasonable field of combinations described here.
For different combinations of argument types T and P different implementations of the operator += are selected. These implementations show different complexity characteristics. If T is a container
type, the combination of domain elements (e) or element value pairs (b) is faster than a combination of intervals (i) or interval value pairs (p) which in turn is faster than the combination of
element or interval containers. The next table shows time complexities of addition for icl's element containers.
Sizes n and m in the complexity statements are the sizes of objects T y and P x:
n = iterative_size(y);
m = iterative_size(x); //if P is a container type
Note that for an interval container the number of elements T::size is different from the number of intervals that you can iterate over. Therefore a function T::iterative_size() is used that provides the desired kind of size.
Table 1.22. Time Complexity for inplace Addition on element containers

T& operator += (T& y, const P& x)   domain type   domain mapping type   element sets   element maps
std::set                            O(log n)                            O(m)
icl::map                                          O(log n)                             O(m)
Time complexity characteristics of inplace addition for interval containers is given by this table.
Table 1.23. Time Complexity for inplace Addition on interval containers

T& operator += (T& y, const P& x)      domain (mapping) type   interval (mapping) type   interval sets / interval maps
interval_sets   interval_set           O(log n)                amortized O(log n)        O(m log(n+m))
                separate_interval_set  O(log n)                amortized O(log n)        O(m log(n+m))
                split_interval_set     O(log n)                O(n)                      O(m log(n+m))
interval_maps                          O(log n)                O(n)                      O(m log(n+m))
Since the implementation of element and interval containers is based on the red-black tree implementation of std::AssociativeContainers, we have logarithmic complexity for the addition of elements. Addition of intervals or interval value pairs is amortized logarithmic for interval_sets and separate_interval_sets, and linear for split_interval_sets and interval_maps. Addition of whole containers is linear for element containers and loglinear for interval containers.
The admissible type combinations for infix operator + are defined by the overload tables below.
// overload tables for          element containers:       interval containers:
// T operator + (T, const P&)
// T operator + (const P&, T)
                                 + | e  b  s  m            + | e  i  b  p  S1 S2 S3 M1 M3
                                ---+------------          ---+---------------------------
                                 e |       s               e |             S1 S2 S3
                                 b |          m            i |             S1 S2 S3
                                 s | s     s               b |                      M1 M3
                                 m |    m     m            p |                      M1 M3
                                                          S1 | S1 S1       S1 S2 S3
                                                          S2 | S2 S2       S2 S2 S3
                                                          S3 | S3 S3       S3 S3 S3
                                                          M1 |       M1 M1          M1 M3
                                                          M3 |       M3 M3          M3 M3
{"url":"http://www.boost.org/doc/libs/1_55_0/libs/icl/doc/html/boost_icl/function_reference/addition.html","timestamp":"2014-04-17T21:46:17Z","content_type":null,"content_length":"60910","record_id":"<urn:uuid:ad968f1f-c007-4213-a633-505dd7fe240b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Integrating Factor
Assume that the equation

\[ M(x,y)\, dx + N(x,y)\, dy = 0 \]

is not exact, that is,

\[ \frac{\partial M}{\partial y} \neq \frac{\partial N}{\partial x}. \]

In this case we look for a function u(x,y) which makes the new equation

\[ u(x,y) M(x,y)\, dx + u(x,y) N(x,y)\, dy = 0 \]

an exact one. The function u(x,y) (if it exists) is called the integrating factor. Note that u(x,y) satisfies the following equation:

\[ \frac{\partial (u M)}{\partial y} = \frac{\partial (u N)}{\partial x}. \]

This is not an ordinary differential equation, since it involves more than one variable; it is what is called a partial differential equation. These types of equations are very difficult to solve, which explains why the determination of the integrating factor is extremely difficult except for the following two special cases:

Case 1: There exists an integrating factor u(x) that is a function of x only. This happens if the expression

\[ \frac{1}{N} \left( \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \right) \]

is a function of x only, that is, the variable y disappears from the expression. In this case, the function u is given by

\[ u(x) = \exp \left( \int \frac{1}{N} \left( \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \right) dx \right). \]

Case 2: There exists an integrating factor u(y) that is a function of y only. This happens if the expression

\[ \frac{1}{M} \left( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right) \]

is a function of y only, that is, the variable x disappears from the expression. In this case, the function u is given by

\[ u(y) = \exp \left( \int \frac{1}{M} \left( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right) dy \right). \]

Once the integrating factor is found, multiply the old equation by u to get a new one which is exact. Then you are left to use the previous technique to solve the new equation.
Advice: if you are not pressured by time, check that the new equation is in fact exact!
Let us summarize the above technique. Consider the equation

\[ M(x,y)\, dx + N(x,y)\, dy = 0. \]

If your equation is not given in this form you should rewrite it first.
Step 1: Check for exactness, that is, compute

\[ \frac{\partial M}{\partial y} \quad \text{and} \quad \frac{\partial N}{\partial x}, \]

then compare them.
Step 2: Assume that the equation is not exact (if it is exact, go to step 5). Then evaluate

\[ \frac{1}{N} \left( \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \right). \]

If this expression is a function of x only, then go to step 3. Otherwise, evaluate

\[ \frac{1}{M} \left( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right). \]

If this expression is a function of y only, then go to step 3. Otherwise, you cannot solve the equation using the technique developed above!
Step 3: Find the integrating factor. We have two cases:
3.1 If the expression in Step 2 is a function of x only, then an integrating factor is given by

\[ u(x) = \exp \left( \int \frac{1}{N} \left( \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \right) dx \right). \]

3.2 If the expression is a function of y only, then an integrating factor is given by

\[ u(y) = \exp \left( \int \frac{1}{M} \left( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right) dy \right). \]

Step 4: Multiply the old equation by u, and, if you can, check that you have a new equation which is exact.
Step 5: Solve the new equation using the steps described in the previous section.
The following example illustrates the use of the integrating factor technique:
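The example itself did not survive extraction, so here is a standard worked example of the technique (the particular equation is my choice, not necessarily the one from the original page):

```latex
% Consider (3xy + y^2) dx + (x^2 + xy) dy = 0, so M = 3xy + y^2, N = x^2 + xy.
M_y = 3x + 2y, \qquad N_x = 2x + y \qquad \Rightarrow \text{ the equation is not exact.}
% Step 2: the x-only test succeeds:
\frac{M_y - N_x}{N} = \frac{x + y}{x(x + y)} = \frac{1}{x},
\qquad u(x) = \exp\!\left(\int \frac{dx}{x}\right) = x.
% Step 4: multiplying through by u = x gives
(3x^2 y + x y^2)\, dx + (x^3 + x^2 y)\, dy = 0,
\qquad \partial_y(3x^2 y + x y^2) = \partial_x(x^3 + x^2 y) = 3x^2 + 2xy,
% which is exact; integrating yields the general solution
x^3 y + \tfrac{1}{2} x^2 y^2 = C.
```

One can check the answer by differentiating F(x,y) = x^3 y + x^2 y^2 / 2: F_x recovers the dx coefficient and F_y the dy coefficient of the multiplied equation.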
Author: Mohamed Amine Khamsi
Copyright © 1999-2014 MathMedics, LLC. All rights reserved. | {"url":"http://www.sosmath.com/diffeq/first/intfactor/intfactor.html","timestamp":"2014-04-17T18:23:15Z","content_type":null,"content_length":"8407","record_id":"<urn:uuid:af5a72fe-04e9-4097-8aa3-a4febde74606>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry proof involving sum and difference identity
How would I solve the following sin^5x=(1/16)(10sinx-5sin3x+sin5x) Help is appreciated.
First, you probably mean "prove," not "solve." One solves an equation to find its roots, which are usually few. One proves an identity, which is an equality that holds for all values of x. Using the formulas for the sine and cosine of the sum of angles, try to express sin(3x) and sin(5x) through powers of sin(x): you should find sin(3x) = 3 sin(x) - 4 sin^3(x) and sin(5x) = 16 sin^5(x) - 20 sin^3(x) + 5 sin(x), after which the right-hand side collapses to sin^5(x). | {"url":"http://mathhelpforum.com/trigonometry/201813-trigonometry-proof-involving-sum-differnece-indentity-print.html","timestamp":"2014-04-17T06:06:18Z","content_type":null,"content_length":"4022","record_id":"<urn:uuid:a36df1db-1cd6-4cf5-abe3-48069b6c1787>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
The classification of finite simple groups
A group is a collection of elements that obey certain rules. For every group we can construct some subgroups, in particular the normal subgroups. Given a group $G$, a subgroup $K$ is normal if, for every element $g \in G$, \[gK = Kg\] or, in the trivial case, if every element $g \in G$ commutes with every element $k \in K$.
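The definition is easy to check mechanically on a small group. Below is a plain-Python sketch (representation and names are mine): the six permutations of {0,1,2} form the symmetric group S3; the three rotations form a normal subgroup, while the two-element subgroup generated by a single transposition does not satisfy gK = Kg.

```python
from itertools import permutations

def compose(g, h):
    # (g . h)(i) = g(h(i)); a permutation is a tuple with sigma[i] = image of i
    return tuple(g[h[i]] for i in range(len(h)))

S3 = list(permutations(range(3)))

def is_normal(K, G):
    # gK = Kg as sets, for every g in G
    return all({compose(g, k) for k in K} == {compose(k, g) for k in K}
               for g in G)

A3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]   # identity and the two 3-cycles
swap = [(0, 1, 2), (1, 0, 2)]            # identity and one transposition

print(is_normal(A3, S3))    # True: an index-2 subgroup is always normal
print(is_normal(swap, S3))  # False
```

This brute-force test is only feasible for tiny groups, of course, but it makes the coset condition concrete.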
Now, if the set of normal subgroups of a nontrivial group $G$ consists only of the trivial group and the group itself, then $G$ is a simple group. And if the group $G$ is finite (the number of elements of the group is finite), then $G$ is a finite simple group
. In order to classify the finite simple groups, Daniel Gorenstein, Ron Solomon and Richard Lyons started in the 1980s a program to produce a new and complete proof of the Classification theorem^(1):
Every finite simple group is isomorphic to one of the following groups:
□ A cyclic group with prime order;
□ An alternating group of degree at least 5;
□ A simple group of Lie type, including both
☆ the classical Lie groups, namely the groups of projective special linear, unitary, symplectic, or orthogonal transformations over a finite field;
☆ the exceptional and twisted groups of Lie type (including the Tits group which is not strictly a group of Lie type).
□ The 26 sporadic simple groups.
The work was concluded by Michael Aschbacher and Stephen Smith in 2004: the last chapter of the proof was described in a not-so-technical paper by Aschbacher alone, and later in two mathematical monographs. The complete classification was finally published in 2011 in the monograph The Classification of Finite Simple Groups: Groups of Characteristic 2 Type by Aschbacher, Lyons, Smith, and Solomon (Mathematical Surveys and Monographs, vol. 172). This work won the 2012 Leroy P. Steele Prize for Mathematical Exposition:
In this paper, the authors, who have done foundational work in the classification of finite simple groups, offer to the general mathematical public an articulate and readable exposition of the
classification of characteristic 2 type groups.
An interesting, though incomplete, story of the
classification theorem
can be found in Ron Solomon's paper
On Finite Simple Groups and Their Classification^(4)
. Reading the paper we can see that by 1995 the list of all finite simple groups was complete, but at that time there was not yet a truly complete proof that all the groups in the list are finite simple groups. So
Aschbacher and Smith's proof was needed to complete the classification and the
atlas of finite representations
In 2001 Solomon, as the work neared its conclusion, wrote
A brief history of the classification of the finite simple groups^(5)
, and in the section
Applications and other developments
he wrote:
Thus the classification of all finite groups is completely infeasible. Nevertheless experience shows that most of the finite groups which occur "in nature" - in the broad sense not simply of chemistry
and physics, but of number theory, topology, combinatorics, etc. - are "close" either to simple groups^(2) or to groups such as dihedral groups, Heisenberg groups, etc., which arise naturally in
the study of simple groups. And so both the methodologies and the database of information generated in the course of the Classification Project remain of vital importance to the resolution of
questions arising in other disciplines.
The story could indeed influence other disciplines, for example physics. Indeed one of the finite simple groups is E8, and I have just explored a connection with physics in
The universe and the flowers
. Another connection with physics is quoted in the first Solomon paper, about a particular sporadic simple group
: the Monster group. Following Solomon, the Monster group is connected to quantum field theory, so I searched for more on this assertion, finding
Our Mathematical Universe: I. How the Monster Group Dictates All of Physics
by Franklin Potter^(8)
. In the paper Potter tries to connect the Monster group with the Standard Model. The idea, in principle, is not wrong: the Standard Model presents us with a universe composed of some finite families of fermions,
leptons and bosons, a finite number of elements, like finite groups. Furthermore the three families are constituted by elementary particles, or in other words: particles made of themselves, like
simple groups. So it would be possible to connect the Standard Model with some finite simple group, like the Monster group.
I don't know if the way proposed by Potter is correct or not (I read his paper quickly), but he predicts the existence of two new quarks, one at 80 GeV and one at 2600 GeV, and this is a good test of
whether the Monster group and the Standard Model are connected.
Potter himself, however, is certain that his hypothesis is true:
In this brief article I have outlined specific connections between the mathematics of the Monster Group and fundamental physics particles and rules. These connections support the three hypotheses
ERH, MUH, and CUH^(3), so I conclude that the Universe is mathematical and that we live in the only possible one. I await the empirical confirmation by the discovery of the 4th quark family,
particularly the b' quark at about 80 GeV. Hopefully, the wait will not be long.
About finite simple groups:
Philosophy of the Classification of Finite Simple Groups
An enormous theorem: the classification of finite simple groups
by Richard Elwes
The most important person in the
classification program
was Daniel Gorenstein. He was a leader of the group, and you can read more about his work
in the two Solomon papers that I used for this post
^(4, 5)
(1) About them he writes:
Like the elementary particles of physics, sporadic simple groups were often predicted several years before their existence was confirmed. For example, the Monster was predicted in 1973, but not
constructed until 1980.
(2) In this sense we can regard the finite simple groups as the elementary particles
of group theory.
(3) ERH (External Reality Hypothesis): there exists an external physical reality completely independent of us humans; MUH (Mathematical Universe Hypothesis): our external physical reality is a
mathematical structure; CUH (Computable Universe Hypothesis): the mathematical structure that is our external physical reality is defined by computable functions.
Ronald Solomon
On Finite Simple Groups and Their Classification
. Notices of the American Mathematical Society 42 (02) (1995).
Solomon, R. (2001). A brief history of the classification of the finite simple groups Bulletin of the American Mathematical Society, 38 (03), 315-353 DOI: 10.1090/S0273-0979-01-00909-0
Michael Aschbacher
The Status of the Classification of the Finite Simple Groups
. Notices of the American Mathematical Society 51 (07) (2004).
Michael Aschbacher
Stephen Smith
The Classification of Quasithin Groups: I. Structure of Strongly Quasithin $\mathcal{K}$-groups
II. Main Theorems: The Classification of Simple QTKE-groups
Mathematical Surveys and Monographs
, vol. 111-112.
Franklin Potter
(2011) .
Our Mathematical Universe: I. How the Monster Group Dictates All of Physics
Progress in Physics
vol. 4 (2011).
Kehoe, E. (2012). 2012 Steele Prizes Notices of the American Mathematical Society, 59 (04) DOI: 10.1090/noti826
2 comments:
1. I included this post in Carnival of Mathematics 85.
2. Thanks!
I'll tweet it as soon as possible!
Topos Theory Can Make You a Predicativist
Posted by Mike Shulman
Recall that predicative mathematics is mathematics which rejects “impredicative definitions,” and especially power sets. (Power sets are “impredicative” because you can say things like “let $X$ be
the intersection of all blah subsets of $S$” and proceed to prove that $X$ is itself blah; thus $X$ has been defined “impredicatively” in terms of a collection of which it is a member.) So it might
seem odd to claim that topos theory can make you a predicativist, since the basic ingredient in the definition of an elementary topos is a power object.
However, I mean instead to refer to Grothendieck topos theory. This is usually regarded as a sub-field of elementary topos theory, since every Grothendieck topos is an elementary topos. But actually,
that’s really only true if $Set$ is itself an elementary topos! So if we were predicativists, we would instead say that $Set$ is some kind of pretopos, and we would conclude instead that any category
of sheaves is a pretopos of the same kind. Moreover, we could do lots of ordinary Grothendieck-topos-theory with these “Grothendieck pretopoi” (modulo the usual sorts of difficulties that come up in
doing any sort of mathematics predicatively). So “Grothendieck topos theory” is, aside from the confusion of names, fully compatible with predicativism.
But how can Grothendieck topos theory make you a predicativist?
To be precise, it’s not really ordinary Grothendieck 1-topos theory that I’m referring to, but higher topos theory and lower topos theory. Just as a 1-topos (I’m going to drop the “Grothendieck” from
now on), or (1,1)-topos, is a category of sheaves of sets on a site, and sets are 0-groupoids, so an (n,1)-topos is an $(n,1)$-category of sheaves of $(n-1)$-groupoids on an $(n,1)$-site. It’s
established by now, due to the work of many people (but Jacob Lurie wrote a big book about it), that the theory of $(n,1)$-topoi makes good sense for any $0\le n\le \infty$, and mirrors the behavior
of 1-topos theory in many ways (and improves on it in others).
In particular, that is so for $n=0$. A (0,1)-topos is a (0,1)-category of sheaves of (-1)-groupoids on a (0,1)-site. There’s a nice description of this process at (0,1)-site; sheaves in this case are
often called ideals.
However, when one starts to think about $(n,1)$-topoi for arbitrary $n$, one is struck by some curiously distinctive features of the case $n=0$. Most of these arise from the fact that:
• a (0,1)-topos, unlike an $(n,1)$-topos, is an (essentially) small category. As a poset, it can be identified with a frame or a locale, depending on which direction one takes the morphisms as pointing.
And of course, this fact is a consequence of the power set axiom for $Set$. So if you think about these things a lot, you might start to wonder (as I have), why should the case $n=0$ be different
from all other values of $n$? We’ve learned not to expect the $(n+1)$-category of all small $n$-categories to be itself a small $(n+1)$-category for any value of $n\ge 0$; why should we expect it to
be so for $n=-1$?
By the way, the analogue for higher $n$ of the impredicative definition “let $X$ be the intersection of all blah subsets of $S$” would be “let $X$ be the limit of the diagram of all blah presheaves
on $S$.” We know that a presheaf category is complete, but that only means that it has small limits, so if we don’t know that there are only set-many blah presheaves, that latter “definition” of $X$
is illegitimate. For instance, “let $X$ be the product of all (set-valued) presheaves on $S$” is certainly illegitimate! So why should we expect to be able to say something analogous like “let $X$ be
the intersection of all subsets of $S$”? (Since a subset of $S$ is just a (-1)-groupoid-valued presheaf on $S$.)
So that’s half of how topos theory can make you a predicativist: you might start wondering why we believe that the case $n=0$ is special. The other half is that topos theory (and, really, category
theory in general) can show you that often it doesn’t really matter whether or not $n=0$ is special. When you first encounter predicativism, you might be shocked at the idea of not having a power set
and thus not being able to do any of the familiar things with power sets. (I know I was.) But the negative-thinking approach suggests that that should be no more or less shocking than the idea of not
having a category of all sets.
But, of course, we do have a category of all sets and we work with it all the time; it’s just a large category, which is itself a proper class instead of a set. Likewise, predicatively any set has a
power class. In particular, when doing topos theory, we can work with frames and locales in the same way as before, except that they will no longer necessarily be small (0,1)-categories. Rather, they
will just be assumed to have a small set of generators, just like we do for Grothendieck $(n,1)$-topoi for all other values of $n$. Yes, certainly, some things will get a bit more tedious, but they
won’t come completely crashing down around our ears. This is, I believe, essentially what predicative mathematicians do under the name of formal topology.
Posted at January 23, 2011 8:26 PM UTC
Re: Topos Theory Can Make You a Predicativist
Interesting. Simple as it is now that you say it, I had not been aware of this connection.
So what precisely do we do to set us up in the predicative context in which $(0,1)$-sheaf toposes do show size behaviour analogous to $(n,1)$-sheaf toposes for higher $n$?
• discard the power set axiom from the definition of $SET$;
• and from the definition of Grothendieck universes.
Anything else we need/want to fine tune about our foundations?
Posted by: Urs Schreiber on January 23, 2011 11:40 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Well, of course (as you probably know) we have to be a bit careful about what’s left when we “discard power sets.” For instance, if we start with “Set is a well-pointed category with finite limits
and power objects and the axiom of choice” (one version of ETCS), and we discard power objects (and choice), we won’t be left with a very good definition of $Set$. The predicative version of ETCS
says that $Set$ is a well-pointed pretopos, possibly locally cartesian closed (depending on the sort of predicativist you are). There also seems to be some debate about whether to use classical logic,
although if you want function sets then you have to use intuitionistic logic, since a cartesian closed Boolean category is an elementary topos.
But regarding the particular matter at hand, I believe that for the basic theory of Grothendieck (0,1)-toposes, it should be enough to just assume that $Set$ is a well-pointed pretopos. Obviously we
won’t be able to use the parts of topos theory that invoke subobject classifiers, but if we assume function sets in $Set$ then we’d be able to use the parts that invoke cartesian closure.
Hmm, I just thought of another interesting connection between higher topos theory and predicativism. Subobject classifiers in 1-topoi generalize to “(truncated-)object classifiers” in (n,1)-topoi,
except that for $n\gt 1$ you can’t classify all objects, only some cardinality-bounded collection of them. So it might be reasonable to consider the claim that $Set$ (and likewise any 1-topos) has,
not a single subobject classifier, but a family of them which jointly “exhaust all subobjects”. I think people have considered that sort of axiom before also, in the direction of “predicative
elementary topos”. (Similarly, I believe the proof that an (n,1)-topos has object classifiers uses the existence of arbitrarily large regular cardinals, which is not a theorem constructively but has
to be taken as an additional axiom.)
Posted by: Mike Shulman on January 24, 2011 12:14 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
The moment you said this here:
Obviously we won’t be able to use the parts of topos theory that invoke subobject classifiers,
I had this very same thought:
Subobject classifiers in 1-topoi generalize to “(truncated-)object classifiers” in $(n,1)$-topoi, except that for $n \gt 1$ you can’t classify all objects, only some cardinality-bounded
collection of them.
So maybe the axioms of elementary toposes deserve some refinement after all:
So it might be reasonable to consider the claim that Set (and likewise any 1-topos) has, not a single subobject classifier, but a family of them which jointly “exhaust all subobjects”.
That sounds like it would make a lot of things fall nicely into place.
That reminds me, I need to think about and understand explicit descriptions of $k$-truncated object classifiers in more general $(n,1)$-toposes than $(n-1)Grpd$. Have you thought about that?
Posted by: Urs Schreiber on January 24, 2011 1:15 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
I realize that probably we wouldn’t get all that far without function sets, since otherwise $Set$ isn’t locally small or complete, we don’t have a good adjoint functor theorem, and our geometric
morphisms probably don’t have right adjoint parts. I think a $\Pi$-pretopos with “object classifiers” has been studied by someone under a name like “stratified pseudo-topos”, but I can’t remember who
it was at the moment.
I need to think about and understand explicit descriptions of k-truncated object classifiers in more general (n,1)-toposes
I would guess that it’s just something like, for each $U$ in the site $C$, consider the category of k-sheaves on $C/U$ of cardinality below some bound. As $U$ varies that should form a (k+1)-sheaf on
$C$ which is an object of the (n,1)-topos in question when n is sufficiently larger than k.
Posted by: Mike Shulman on January 24, 2011 5:05 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
I think a $\Pi$-pretopos with “object classifiers” has been studied by someone under a name like “stratified pseudo-topos”, but I can’t remember who it was at the moment.
Moerdijk and Palmgren
Posted by: Richard Garner on January 25, 2011 5:57 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Right, thank you.
Posted by: Mike Shulman on January 25, 2011 6:27 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Geometric logic also points in the direction of predicativism: the powerset construction is not geometric. This is a reason why many constructions in formal topology are, in fact, geometric. Steve
Vickers has a program to restrict even to arithmetical universes.
Posted by: Bas Spitters on January 24, 2011 9:08 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
[I’ve picked this up very late, but I hope the comment is still of interest.]
I thank Bas for mentioning my programme. I want to elaborate on that by explaining how topos theory did make me a predicativist.
The realm of geometric logic is essentially that fragment of the internal mathematics of Grothendieck toposes that is preserved by the inverse image functors of geometric morphisms. (In fact that is
pretopos structure + some other stuff, so this provides one answer to Mike’s original thought of Grothendieck toposes as being “some kind of” pretopos.)
The wonderful thing about this fragment is that if you use it to reason about points of a point-free space X, then the reasoning applies not only to global points (maps 1 -> X), of which there may be
insufficient, but also to generalized points (maps W -> X) such as the generic point (Id: X -> X). This overcomes most of the disadvantages that arise from the insufficiency of points in a
non-spatial space.
In short, if your reasoning is geometric you can deal with point-free spaces as though they had enough points.
This is a huge improvement on having to work explicitly with the frames or their presentations, and for a decade or two now I have been doing my best to work geometrically wherever possible.
Since geometric reasoning does not include powersets, I have found that this in effect makes me a predicativist, and I have found it very easy to participate in predicative mathematics such as formal topology.
To repeat, topos theory did make me a predicativist.
Note that geometric reasoning does not even include exponentials (function types). This sounds terribly weak, but in practice there are ways round. One important observation is that the exponential
of sets, although not another set, is a locale. What underlies this is that the exponential of sets has a natural topology that is non-discrete. You can see this very clearly if you think of equality
of functions: it is of a different nature from the equality on the sets, and it is not open in the function space topology.
Bas also mentioned Joyal’s Arithmetic Universes. These are pretoposes equipped with parametrized list objects. The idea is that the infinitary disjunctions of geometric logic can (in practical
examples) be eliminated in favour of finitary disjunctions and existential quantification over types such as N and Q constructible within AUs.
Here the lack of exponentials is impossible to fudge: they are not just absent from the logic, they are completely absent from the categories (AUs instead of Grothendieck toposes). Milly Maietti and
I have started to explore how to work in these, in “An induction principle for consequence in arithmetic universes”.
Posted by: Steve Vickers on April 26, 2012 1:06 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Does the predicative/impredicative distinction correspond to anything that a working mathematician, say, an analyst, might notice in their work, in the way that the constructive/nonconstructive
distinction can make itself felt:
I hope that he [the sort of hard-nosed ‘working mathematician’ who regards logic like a disease] might…notice the fact that a constructively valid proof of a given theorem is generally more
elegant than one which relies heavily on the law of excluded middle; constructivity is almost as much a matter of style as of logic. (Johnstone, Stone Spaces, xi)
Posted by: David Corfield on January 24, 2011 2:06 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Yes. For example: given a set X, and a collection R of subsets of X, how do you construct the smallest topology making every subset in R open?
Predicative answer (“bottom-up”): take R. Throw in all unions of sets in R. Throw in all finite intersections. Throw in all unions. Throw in all finite intersections. Repeat (possibly transfinitely)
until you have a topology.
Impredicative answer (“top-down”): consider the set of all topologies on X making every subset in R open. This set is closed under arbitrary intersection. So take the intersection of everything in it.
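For a finite set $X$ the bottom-up recipe can be run literally; here is a Python sketch (the helper name `generate_topology` is mine, and on a finite set a single round of finite intersections followed by unions already terminates):

```python
from itertools import combinations

def generate_topology(X, R):
    """Bottom-up: close the subbasis R under finite intersections, then unions."""
    subbasis = [frozenset(s) for s in R]
    # All finite intersections of subbasis sets; the empty intersection is X itself.
    basis = {frozenset(X)}
    for k in range(1, len(subbasis) + 1):
        for combo in combinations(subbasis, k):
            basis.add(frozenset.intersection(*combo))
    # All unions of basis sets; the empty union is the empty set.
    topology = {frozenset()}
    for k in range(1, len(basis) + 1):
        for combo in combinations(basis, k):
            topology.add(frozenset().union(*combo))
    return topology

T = generate_topology({1, 2, 3}, [{1}, {2, 3}])
# The smallest topology making {1} and {2,3} open: {∅, {1}, {2,3}, {1,2,3}}
```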
Posted by: Richard Garner on January 25, 2011 6:06 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
take R. Throw in all unions of sets in R. Throw in all finite intersections. Throw in all unions.
You're done now (and you could have skipped the first step about unions).
To generate a $\sigma$-algebra instead of a topology, however, that requires transfinite iteration in general.
Regardless, it's a great example, probably better than mine.
Posted by: Toby Bartels on January 25, 2011 6:37 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
You’re done now (and you could have skipped the first step about unions).
Although predicatively, what you’ve “got” now is a proper class rather than a set.
To generate a $\sigma$-algebra instead of a topology, however, that requires transfinite iteration in general.
In this case, I think each step should be a set if $\mathbb{N}$ is exponentiable (which I suppose a predicativist who believes in classical logic might object to, since in classical logic any
exponentiable set has a power set). But can one prove, predicatively, that the transfinite iteration converges in any sense? The nLab seems skeptical.
Posted by: Mike Shulman on January 25, 2011 6:51 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
But can one prove, predicatively, that the transfinite iteration converges in any sense?
One can prove this in mathematics which is predicative over $\omega_1$. I don't think that this weak level of predicativity is very popular (although what I wrote at the $n$Lab now sounds unduly
pessimistic to me), but as we're just discussing the feel of predicativity here, I don't think that this matters very much.
Posted by: Toby Bartels on January 25, 2011 7:31 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Good question. I think predicativity can also roughly be described under the heading of “constructive”, in the informal sense of “giving a construction”, even if it doesn’t necessarily imply
intuitionistic logic. An impredicative definition like “let $R$ be the intersection of all equivalence relations containing $S$” is certainly not as “constructive” a way to define $R$ as the
predicative version “let $x R y$ mean that there is a finite sequence $(x=x_0,x_1,\dots,x_n=y)$ such that for all $0\le i\lt n$, either $x_i S x_{i+1}$ or $x_{i+1} S x_i$.” In particular, a
constructive predicative proof can often have an algorithm formally extracted from it via the Curry-Howard correspondence for $\lambda$-calculus or Martin-Löf type theory. So perhaps one answer is
that predicativity may be noticed by someone trying to make their proofs effectively computable.
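As an illustration of that computability, here is a Python sketch of the predicative construction of the generated equivalence relation (the names are mine; union-find is just an implementation device for computing the zigzag-reachability classes):

```python
def equivalence_closure(elements, S):
    """x ~ y iff some finite zigzag x = x0, x1, ..., xn = y has each
    consecutive pair related by S in one direction or the other."""
    parent = {x: x for x in elements}

    def find(x):
        # Follow parent pointers to the class representative, halving paths.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in S:
        parent[find(a)] = find(b)  # merge the classes of a and b
    return {(x, y) for x in elements for y in elements if find(x) == find(y)}

R = equivalence_closure({1, 2, 3, 4}, {(1, 2), (2, 3)})
# (1, 3) is related via the zigzag 1 S 2 S 3; 4 is related only to itself
```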
One could of course try to make the same argument for predicativity as Johnstone does for constructivity. I have to say that I find a lot of impredicative proofs and definitions more elegant than
their predicative versions (to the extent that I occasionally waste time wishing that we could make (n,1)-toposes more like (0,1)-toposes, rather than the other way round), but others may disagree.
Posted by: Mike Shulman on January 25, 2011 6:17 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Yes, constructive mathematics is elegant, while predicative mathematics is concrete; these don't always go together.
Posted by: Toby Bartels on January 25, 2011 6:46 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Do note that while Martin-Löf type theory is predicative, there are impredicative type theories. For instance, anything in the line of System F (which includes the calculus of constructions, and
typically extensions thereof) allows types like $\forall (T : Type). T \to T$, which is a type that quantifies over all types.
Coq, for instance, originally had an impredicative universe of types at its lowest level, and still has a switch for enabling it (such a universe is anti-classical, in that it can be used to prove
$\neg (\forall (P : Prop).\, P \vee \neg P)$, though, so it’s off by default). Using such facilities, you can construct a data type that models a constructive set theory with a powerset operation:
$data\, Z : Set = con : (I : Type) \to (I \to Z) \to Z$
Where $Set$ is Coq’s impredicative universe. This allows $I$ to be instantiated to something like $J \to Prop$ for some other index type $J$. So if a set $s : Z$ is indexed by $J$, we can create a
set indexed by its power type $J \to Prop$.
By contrast, something like Agda, based on Martin-Löf type theory, can build a similar type, but it must either be stratified, with the powerset of a $Z_i$ set living in $Z_{i+1}$, or one must just ditch the powerset.
Posted by: Dan Doel on January 25, 2011 10:56 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Coq, for instance, originally had an impredicative universe of types at its lowest level
Yes, I’ve always been somewhat intrigued by this. That sort of universe is actually even more impredicative than just having power sets; it seems to me to be more along the lines of making 1-toposes
act like (0,1)-toposes.
To be more specific: it seems that what’s really impredicative is not power sets, but quantification over sets or subsets. So in particular, the set-theoretic axiom of unbounded separation is also
impredicative. Power sets are only impredicative because power sets + bounded separation allows you to quantify over subsets, and not many people are willing to do away with bounded separation.
As I understand it, the impredicativity of Coq’s “impredicative Set” encompasses not just power sets, but also unbounded separation, and something even stronger. If we interpret the sort Prop as a
subobject classifier, then from exponentiation we get power types, and the “impredicativity of Prop” means that in defining propositions we allow quantified variables ranging over types, which is
like unbounded separation. Those sorts of impredicativity are, I believe, still there in the current version of Coq; what’s been turned off by default is something still stronger, saying something
like “for any set A, there is a set of functions $Set \to A$.” Am I right about that?
I’ve always been curious how one shows that such an “anti-classical impredicative universe” is consistent; it seems quite a precarious assumption.
Posted by: Mike Shulman on January 26, 2011 4:02 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
The discussion is here.
In standard Coq you can write:
Definition id: Type := forall X:Set,X->X.
But you need the option -impredicative-set to write:
Definition id: Set := forall X:Set,X->X.
Please note the type of id. (Type is a universe containing Set).
Posted by: Bas Spitters on January 26, 2011 8:33 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Yes, I’ve always been somewhat intrigued by this. That sort of universe is actually even more impredicative than just having power sets; it seems to me to be more along the lines of making
1-toposes act like (0,1)-toposes.
I’m afraid I don’t know enough to really comment on this. However, you did mention that (0,1)-toposes are in some sense small. I’ve been reading off and on about modelling type theory with fibred
categories, and the book mentions how these sort of ‘products over all types’ cannot be modeled (other than trivially) by fibrations derived from $Fam(C)$ over $Sets$, because products over the set
of all objects in $C$ are ‘too big’ to live in $C$ itself. Perhaps that’s related somehow? Are (0,1)-toposes ‘small’ enough to have products indexed by their own collection of objects?
To be more specific: it seems that what’s really impredicative is not power sets, but quantification over sets or subsets.
This is certainly what gets directly labelled as impredicative in type theories: types that are defined via quantification over all types.
If we interpret the sort Prop as a subobject classifier, then from exponentiation we get power types, and the “impredicativity of Prop” means that in defining propositions we allow quantified
variables ranging over types, which is like unbounded separation. Those sorts of impredicativity are, I believe, still there in the current version of Coq …
Yes, Prop is still impredicative in Coq. And to some degree, that makes sense. Even though Prop lives along side the other universes, and interacts with them in ways that make it distinctly type
theory rather than a first-order logical theory, Prop is supposed to fill that sort of role. Predicativists stratify their sets (or whatever), but they don’t stratify their propositions.
what’s been turned off by default is something still stronger, saying something like “for any set A, there is a set of functions Set→A.” Am I right about that?
Yes, and more generally, if $\Gamma, T : Set \vdash U : Set$ then $\Gamma \vdash (\Pi_{T : Set} U) : Set$. And even if $T$ inhabits an arbitrarily high universe (so long as $U$ still inhabits $Set$).
I’ve always been curious how one shows that such an “anti-classical impredicative universe” is consistent; it seems quite a precarious assumption.
It certainly is precarious. In fact, it’s very easy to introduce inconsistency. For instance, the above is about $\Pi$ types. If we try to introduce $\Sigma$ types:
$\frac{\Gamma, T : Type \vdash U : Set}{\Gamma \vdash (\Sigma_{T:Type} U) : Set}$
Then we can do the following:
$(Set, tt) : \Sigma_{T:Type}\top : Set$
So we can trick $Set$ into (essentially) inhabiting a $Set$. Coq deals with this by disallowing strong and/or large elimination of impredicative inductives (so, you can define $\Sigma^w_{T:Type}\top$
, but not $\pi^w_1 : \Sigma^w_{T:Type}\top \to Type$, which makes the type isomorphic to $\top$).
And also, there’s Girard’s paradox, which shows that if we have a universe hierarchy like $Type_1 : Type_2 : Type_3$, and $Type_2$ allows impredicative $\Pi$ types, then a contradiction is derivable
($Type_1$ may need to be impredicative, too, I forget).
So, allowing one impredicative universe at the bottom is something of a balancing act that no one has yet proved inconsistent (kind of like set theory :)). And I think there are plenty of folk that
prefer more predicative theories (I lean that way myself). Impredicative theories do have certain appealing properties, though. For instance, any inductive type (minus the strong eliminator) can be
encoded using impredicative $\Pi$ types via Church encoding.
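An untyped illustration of the Church-encoding trick Dan mentions (a sketch of mine in Python, which can show the data-as-iteration idea but not the impredicative typing that makes it work in System F):

```python
# Church naturals: the number n is the function that iterates f n times.
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))
# Addition: apply f n times, then m more times.
add = lambda m: lambda n: (lambda f: lambda x: m(f)(n(f)(x)))

def to_int(n):
    # Decode by iterating the successor on 0.
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))              # 3
print(to_int(add(three)(three)))  # 6
```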
I don’t know what really counts as far as demonstrating consistency goes. System F and the calculus of constructions (and many others) have proofs of strong normalization. And although they don’t
have non-trivial set-theoretic models, I think they do have trivial ones. Perhaps that lends confidence? Certainly if you trust set theory, you can probably trust the normalization theorems. Of
course, if you don’t trust impredicativity, maybe you shouldn’t trust set theory. :)
Coq may be another matter, since I think it’s been shown to be as powerful as something like (constructive?) ZF + axioms about inaccessible cardinals.
Posted by: Dan Doel on January 26, 2011 8:50 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Perhaps that’s related somehow?
Yes, that’s exactly it. Classically, only a preorder (= a (0,1)-category) can have products indexed by its own collection of morphisms.
Predicativists stratify their sets (or whatever), but they don’t stratify their propositions.
I’m not sure exactly what you mean, but many predicativists do indeed restrict the ability to define propositions by quantification over all types (or even over all propositions). That’s what I meant
by saying that set-theoretic unbounded separation is impredicative. All the replies to David’s question were of that nature. I could be wrong, but to me it seems that to a mathematician with a
classical set-theoretic intuition, quantifying over all sets in defining a proposition is “impredicative” (if they know what that word means) and defining a set $\{ f: T \to U \}_{T \in Set}$ is
“obviously impossible.”
System F and the calculus of constructions (and many others) have proofs of strong normalization.
Can you explain to a non-type-theorist why that should be regarded as evidence of consistency?
Of course, if you don’t trust impredicativity, maybe you shouldn’t trust set theory. :)
I have no trouble trusting impredicative definitions of propositions, which is all that ordinary set theory allows (i.e. unbounded separation). It’s the stronger form of “impredicativity” that makes
me queasy.
Posted by: Mike Shulman on January 27, 2011 3:54 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Yes, that’s exactly it. Classically, only a preorder (= a (0,1)-category) can have products indexed by its own collection of morphisms.
Ah, okay. In that case, I can probably elaborate a little more, although I’m a little hazy on this myself.
The key, I think, is that when you make set-theoretic models of (say) System F, one sets it up as follows:
1. Let’s consider $\mathbf{Fam(C)}$ fibred over $\mathbf{Set}$. Objects of $\mathbf{Fam(C)}$ are set-indexed families of objects of $\mathbf{C}$, and morphisms are a function between the index sets
together with an arbitrary family of functions that respect the index function.
2. Our model is a pullback of that fibration along the inclusion functor from a subcategory of $\mathbf{Set}$ containing as objects only finite products of the set of elements of $\mathbf{C}$ ($\mathbf{C_0}$ is standard notation for that set, I believe).
The only such models that are closed under $\mathbf{C_0}$-indexed products are posets, or preorders, or what have you. However, I emphasized “arbitrary” a bit above. Arbitrary families of functions are a poor match for the functions we are able to write in System F. If we consider an endomorphism on $\{X\}_{X \in \mathbf{C_0}}$, the $\mathbb{N} \to \mathbb{N}$ component could be the successor function,
while every other component could be the identity. However, it’s impossible to write such a function in System F, because there is no type-case construct. Every morphism denotable in System F is
uniform in some sense.
The book I’ve been reading gives the following example: let $\mathbf{C}$ be the category whose objects are subsets of the natural numbers, and morphisms $X \to Y$ are endofunctions $f$ on the natural
numbers such that $f(X) \subseteq Y$. Then there are two fibrations that can be viewed as set-indexed versions of this category:
1. Objects are set indexed families of sets of natural numbers, and morphisms $\{X_i\}_{i \in I} \to \{Y_j\}_{j \in J}$ are a function $\phi : I \to J$ together with a family $\{f_i : X_i \to Y_{\phi(i)}\}$ where each $f_i$ is an appropriate $\mathbf{C}$-morphism.
2. Objects are the same as above, but morphisms are a function $\phi : I \to J$ together with a single function $f$ that is a valid $\mathbf{C}$-morphism $X_i \to Y_{\phi(i)}$ for all $i$.
Now, I’m afraid that this is where my knowledge runs out. However, the idea is that System F is appropriately modeled by this sort of fibration with uniform morphisms, and that when this fibration is
used, the category admits products indexed by ‘all types’, and you get impredicative quantification in this manner. The uniformity is key to getting non-trivial models, though, and even my hazy
understanding can see where that’d be a problem for a set-based model (because, what the hell is a uniform family of set theoretic functions?).
This uniformity is often referred to as “parametricity,” which you may have heard of before. As a punchline, consider the type $\Pi_{A:Set}. A \to A$. Looks pretty big, product over all types/sets.
And in set theory, it certainly would be big. However, a type theorist will tell you that, by consequence of parametricity, this set has exactly one inhabitant.
I’m not sure exactly what you mean, but many predicativists do indeed restrict the ability to define propositions by quantification over all types (or even over all propositions). That’s what I meant by saying that set-theoretic unbounded separation is impredicative. All the replies to David’s question were of that nature. I could be wrong, but to me it seems that to a mathematician with a classical set-theoretic intuition, quantifying over all sets in defining a proposition is “impredicative” (if they know what that word means) and defining a set $\{ f : T \to U \}_{T \in Set}$ is “obviously impossible.”
I mean, for instance, we could have 0-sets, and then 1-sets, which are allowed to be defined by quantifying over 0-sets, and 2-sets that are allowed to be defined by quantifying over 1-sets, and so
on. But I would expect our logic would not also be divided into 0-propositions and 1-propositions and whatnot. Perhaps it is necessary to classify them as such in order to make the stratification of
sets happen, though. I’m not sure.
I’d kind of not expect propositions to be able to quantify over propositions at all in a “logic.” As opposed to a type theory where we’re interpreting propositions as types. But the lines can be
blurry, I guess.
Can you explain to a non-type-theorist why that should be regarded as evidence of consistency?
It is like proving cut elimination for a logic. If a type theory is strongly normalizing, then every reduction strategy is terminating and results in a canonical form for every term. The only
remaining piece would be a proof that there is no canonical inhabitant of $\Pi_{A:Set}. A$, or, perhaps we have a primitive false type, for which there is by definition no canonical inhabitant.
Hopefully this formats correctly. The math prettifying seems to be stopping half-way through the preview on my obscure browser.
Posted by: Dan Doel on January 27, 2011 6:38 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
the idea is that System F is appropriately modeled by this sort of fibration with uniform morphisms, and that when this fibration is used, the category admits products indexed by ‘all types’, and
you get impredicative quantification in this manner.
Ah, that sounds a bit familiar. What you describe sounds very much like a preliminary to the construction of the effective topos, which does contain a complete small category, and now I seem to
recall hearing that that category can be used in modeling this sort of type theory.
I’d kind of not expect propositions to be able to quantify over propositions at all in a “logic.”
Well, I think most logicians would disagree with you, as would most non-type-theorists. In classical logic, up to equivalence there are only two propositions (“true” and “false”) so we can certainly
quantify over them. In impredicative constructive logic, we can’t assert that there are only two propositions, but we still have a set of all of them, so we can quantify over them; one way to define
it is as the power set of the one-element set.
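To make that last definition concrete, here is a toy (and deliberately classical) rendition in Python, taking the set of propositions to be the power set of a one-element set; all the names here are illustrative:

```python
from itertools import combinations

# The one-element set and its power set: classically, the subsets of a
# singleton are exactly the two truth values "false" (empty) and "true" (full).
point = ("*",)
omega = [frozenset(c) for r in range(len(point) + 1)
         for c in combinations(point, r)]

print(len(omega))  # prints 2

# Quantifying over all propositions becomes a finite conjunction:
def forall_prop(pred):
    return all(pred(p) for p in omega)

# Example: every truth value together with its complement is "true"
# (excluded middle, which holds in this classical toy model).
full = frozenset(point)
print(forall_prop(lambda p: p | (full - p) == full))  # prints True
```

Constructively, of course, one cannot enumerate $\Omega$ like this; the point is only that the quantification itself is well-defined once the power set is accepted as a set.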
The only remaining piece would be a proof that there is no canonical inhabitant of $\Pi _{A:Set}.A$
Ah, that’s what I would call a “consistency proof.” (Except that I would probably want there to be no inhabitant at all, not just no canonical one, but maybe in the presence of “parametricity” the
two are equivalent?) So in language that’s familiar to me, maybe you’re saying that strong normalization is a necessary prerequisite to a consistency proof.
Posted by: Mike Shulman on January 27, 2011 4:18 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Ah, that sounds a bit familiar. What you describe sounds very much like a preliminary to the construction of the effective topos, which does contain a complete small category, and now I seem to
recall hearing that that category can be used in modeling this sort of type theory.
Yes, that sounds right. In fact, the book I’m reading is called An Introduction to Fibrations, Topos Theory, the Effective Topos and Modest Sets.
Well, I think most logicians would disagree with you, as would most non-type-theorists. In classical logic, up to equivalence there are only two propositions (“true” and “false”) so we can
certainly quantify over them. In impredicative constructive logic, we can’t assert that there are only two propositions, but we still have a set of all of them, so we can quantify over them; one
way to define it is as the power set of the one-element set.
I don’t think we’re speaking the same language. When I think of propositions, I think of the things that can be substituted for the metavariable $P$ in $\Gamma \vdash P$. And that are eligible for
use with the logical connectives. For instance, I wouldn’t expect to see something like:
$\Gamma \vdash \forall_{Prop} \, p.\; p \vee \neg p$
Not in a logic. I think I know what you mean, though.
Ah, that’s what I would call a “consistency proof.” (Except that I would probably want there to be no inhabitant at all, not just no canonical one, but maybe in the presence of “parametricity”
the two are equivalent?) So in language that’s familiar to me, maybe you’re saying that strong normalization is a necessary prerequisite to a consistency proof.
I don’t know about necessary, but it makes a consistency proof easier.
So, the goal of a consistency proof is indeed to show that there is no proof of $\bot$, whatever that is. What strong normalization shows is that if there is any proof of a proposition $P$, then
there is a canonical such proof, which can be derived via a terminating procedure. Cut elimination for a sequent calculus does the same thing. So, once you’ve proved strong normalization/cut
elimination, all you have to do is show that there are no canonical proofs. In a sequent calculus, there is by definition no rule:
$\frac{}{\vdash \bot}$
So we’re done. For something like $\Pi_{A:Set}A$, I expect it’s more like:
$\frac{A : Set \vdash e : A}{\vdash (\Lambda A. e) : \Pi_{A:Set}A}$
And no rules match the top part, so we’re again done. Is that satisfying enough? I realize my description isn’t particularly rigorous.
Posted by: Dan Doel on January 28, 2011 12:48 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
I wouldn’t expect to see something like: $\Gamma \vdash \forall_{Prop} \, p.\; p \vee \neg p$. Not in a logic.
I think your notion of “logic” is more narrow than mine, and more narrow than many that I’ve seen. For instance, in most presentations of the “higher-order logic” of an elementary topos, this is a
perfectly meaningful statement (which is, of course, only true if the topos is Boolean).
Unless the point is to distinguish between propositions as syntactic constructions, on the one hand, and truth values in the logic, on the other? Type theory seems to blur the distinction, to my
mind, so I assumed you were also.
Is that satisfying enough?
Yes, that makes sense. I wonder how strong of a metatheory is used in the proof of strong normalization?
Posted by: Mike Shulman on January 28, 2011 3:08 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Unless the point is to distinguish between propositions as syntactic constructions, on the one hand, and truth values in the logic, on the other? Type theory seems to blur the distinction, to my
mind, so I assumed you were also.
Yes, that is roughly the point. It is possible even to cast this distinction in type theory, now that I think about it, though.
For instance, we can imagine a Coq in which $Prop$ is classical. However, there is still a distinction between inhabitants of $Prop$ and inhabitants of $Bool : Set$. Coq is not really a good example,
though, because it actually allows quantification over $Prop$, and for elements of $Set$s to contain $Prop$s.
However, some folks have studied what are called logic-enriched type theories. These have a separate $Prop$ universe, like Coq, but it is unrelated to whatever hierarchy of sets is also in the
theory. And in particular, there is no quantification over $Prop$, no $\Pi_{P : Prop} \ldots$. And so something like $\vdash p : \forall P:Prop.\; P \vee \neg P$ could only be an abuse of notation for
something more like $\vdash p : \forall b:Bool.\; True(b) \vee \neg True(b)$, where $Bool$ is a “set of truth values,” and $True : Bool \to Prop$ is a predicate.
And as the name “logic-enriched type theory” suggests, this separation seems particularly logic-like, as opposed to the wacky type theorists who really don’t make any distinction between types and
propositions. Perhaps I’ve just missed logics that aren’t this way, though.
Yes, that makes sense. I wonder how strong of a metatheory is used in the proof of strong normalization?
Depends on the type theory, of course. Girard’s Proofs and Types proves normalization for (I think) three different theories. The first is the simply typed lambda calculus. He notes that you can
prove normalization for it in a relatively weak metatheory (certainly in first-order arithmetic). The next two are System T and System F. T is related to Peano arithmetic via Curry-Howard, and so
strong normalization of T implies the consistency of Peano arithmetic. So that might give you some idea of how strong a theory you’ll need. System F is in a similar situation, except with
second-order arithmetic.
Posted by: Dan Doel on January 28, 2011 5:59 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Okay. But I thought we were comparing “quantifying over propositions” to quantifying over sets, not quantifying over syntactic presentations of sets. It seems to me that quantifying over truth values
is the natural thing to compare to quantifying over sets. And if you don’t want to quantify over syntactic presentations of propositions, then you shouldn’t be able to quantify over syntactic
presentations of sets either, should you?
Posted by: Mike Shulman on January 28, 2011 8:15 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
I may not be expressing myself well, but I don’t think the distinction has to do with quantifying over syntax versus actual sets and whatnot. Syntax is just how logic enforces the distinction (while
type theory can use types, like the stuff with logic-enriched theories).
I’m not really sure how to explain myself better. But, maybe I can make another example. Pretty much every functional programming language has higher-order functions. So, when you write a definition
f x = ..., that f : S -> T is useful both as a mapping from type S to type T, and as a value that can be passed to some higher-order function g : (S -> T) -> U.
Now, when we go to model this in category theory (with, say, Cartesian closed categories), these notions of value and function are kept separate. (Global) values of a type T are given by morphisms $1
\to T$. And so, we have a conceptual distinction between functions-as-morphisms $f : S \to T$ and functions-as-values $f : 1 \to T^S$. They are in correspondence, but you cannot just use one where
the other is expected.
And similarly, in ZF, $\forall b \in \wp\wp\varnothing.\; b \vee \neg b$ is not a well-formed formula, because even though $b$ ranges over the set of truth values, $b$ is still a set variable, not a
proposition variable. And I (and others, I think) expect logics (even higher-order logics) to be this way. Quantification builds propositions, but the variables range over particulars, or sets of
particulars, not, strictly speaking, propositions. There may be a subset classifier, and functions into that classifier may correspond to subsets, which in turn correspond to predicates, but purely
from a formal rules perspective, the elements of that set aren’t propositions.
At this point, though, I’ve forgotten why we’re even talking about this. :) I don’t think it’s a fundamentally important distinction. Just something that sounded odd to me.
As an amusing side note, the paper where I heard about logic-enriched type theories is called: Weyl’s Predicative Classical Mathematics as a Logic-Enriched Type Theory. So it’s very on-topic since
it’s about predicativism.
Posted by: Dan Doel on January 29, 2011 2:00 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Yes, perhaps this discussion is going nowhere interesting. But it does seem to me that what you describe is a distinction without a difference for purposes of quantification. You can’t literally use
a value $1\to T^S$ where a function $S\to T$ is expected, but if you can quantify over one, then for all practical purposes you can quantify over the other as well. Similarly maybe you can’t
literally write $\forall b\in \wp\wp\emptyset,\; b\vee \neg b$, but you can write $\forall b\in \wp\wp\emptyset,\; b\cup b^c = \wp\emptyset$, which is functionally the same thing.
Posted by: Mike Shulman on January 30, 2011 11:41 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Yes, you’re right about how it works. The basic idea is that you get to index over all types[1] when constructing types – that is, you can have types like $\forall \alpha:\mathrm{Type}.\; \alpha \to \alpha$.
I think you would like the way models of System F and its higher-kinded cousins are constructed. The basic idea is to give a relational realizability model (I understand you might have heard the idea
described as “fibrations of modest sets”).
First, you start with a universe of computational realizers. Traditionally, computer scientists start with a domain theoretic model of the untyped lambda calculus ($V \simeq V \to V$), and logicians
start with $V \triangleq \mathbb{N}$, the set of Gödel codes. Then, each type is interpreted as a partial equivalence relation (i.e., a symmetric and transitive, but not necessarily reflexive, relation)
on the universe $V$.
Then, the interpretation of the function space $A \to B$ in a type environment $\eta$ is the set:
$[\!\![A \to B]\!\!]\;\eta = \{(f,g) \in V \times V \;|\; \forall (u,v) \in [\!\![A]\!\!]\;\eta.\; (f\;u, g\;v) \in [\!\![B]\!\!]\;\eta\}$
That is, the interpretation of the function space is the set of functions that take related arguments to related results. The interpretation of the universal quantifier, which is what you’re probably
most interested in, goes like this:
$[\!\![\forall \alpha:\mathrm{Type}.\;A]\!\!]\;\eta = \bigcap_{R \subseteq V \times V} [\!\![A]\!\!]\;(\eta,R)$
The interpretation of the universal quantifier is just the set of pairs of values which are in every type interpretation, no matter what relation is chosen for $\alpha$.
By fixing a universe $V$ at the outset, we can take subrelations over it impredicatively when we need to interpret $\forall$.
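On a small finite carrier the clauses above can be computed outright, which also makes Dan’s earlier punchline about $\Pi_{A}. A \to A$ visible: intersecting over all relations leaves exactly the identity. This is only an illustration — real models use PERs over a partial combinatory algebra, not lookup tables:

```python
from itertools import combinations, product

carrier = [0, 1, 2]

# Endofunctions on the carrier, as tuples f with f[x] = "f applied to x".
endofunctions = list(product(carrier, repeat=len(carrier)))

# Every binary relation on the carrier (2^9 of them).
pairs = list(product(carrier, carrier))
relations = [frozenset(c) for r in range(len(pairs) + 1)
             for c in combinations(pairs, r)]

def preserves(f, R):
    # (f, f) lands in [[alpha -> alpha]] when alpha is interpreted as R:
    # related arguments must be sent to related results.
    return all((f[u], f[v]) in R for (u, v) in R)

# [[forall alpha. alpha -> alpha]]: intersect over *all* interpretations.
inhabitants = [f for f in endofunctions
               if all(preserves(f, R) for R in relations)]
print(inhabitants)  # [(0, 1, 2)] — only the identity survives
```

The singleton relations $\{(a,a)\}$ already force $f(a) = a$ for every $a$, which is the finite shadow of the parametricity argument.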
[1] As an aside, I am deliberately not saying “set”, since Andy Pitts has shown that the powerset axiom is inconsistent with F-style indexing. I don’t think he needed full separation, since IIRC he
used a topos model. I’m not sure about this, though, since I don’t know much set theory.
Posted by: Neel Krishnaswami on February 2, 2011 9:24 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Thanks Neel! I wonder if the following description is accurate. Impredicativism seems to be justified by a belief (or an assumption) that there is a prior existing “true universe of things” about
which we prove things, as opposed to the predicative conception whereby we merely construct things one after another, so that we can’t make such a construction with reference to objects that we
haven’t constructed yet. The difference between set-theoretic impredicativism (power sets, unbounded separation) and type-theoretic impredicativism (system F) is that in the former, the “true
universe of things” consists of sets, whereas in the latter it consists of… well, I was going to say “functions,” but that could be misinterpreted (e.g. categorial set theory can be defined with just
“functions” as basic objects, but those are set-theoretic functions, not type-theoretic ones); maybe “instructions” or “algorithms” would be more appropriate.
Posted by: Mike Shulman on February 13, 2011 10:48 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Impredicativism seems to be justified by a belief (or an assumption) that there is a prior existing “true universe of things” about which we prove things, as opposed to the predicative conception
whereby we merely construct things one after another, so that we can’t make such a construction with reference to objects that we haven’t constructed yet.
This sounds remarkably similar to the debate in the late 1920s between Finsler and Baer over set theory. In my post on coalgebra I discuss Herbert Breger’s paper on the debate. He writes in that paper:
To the revolutionary, the most striking difference between Zermelo’s and Finsler’s axioms is the certain ontological flavour of Finsler’s axioms. To the conservative, the philosophical background
of Zermelo’s axioms is the implicit assumption that sets do not exist unless they can be derived from given sets by axiomatically fixed rules. Axiom 3 [the axiom of completeness] is of particular
interest. It is the analogue of Hilbert’s somewhat problematic axiom of completeness for geometry. Weyl and Fraenkel purposefully took the contrary into consideration, namely an axiom of
restriction postulating the minimal system which fulfils the other axioms. Weyl’s and Fraenkel’s axiom is obviously motivated by the revolutionary idea that axioms and definitions create objects,
and that sets which are too big should not be brought into existence, whereas Finsler’s axiom of completeness is motivated by the conservative idea that big sets exist anyway, so set theory
should investigate them. (Breger, 1992, pp. 258-259)
But maybe the impredicative/predicative distinction concerns a rather specific sense of construction, so can be found already operating on the ‘algebraic’ side in algebraic set theory.
Posted by: David Corfield on February 14, 2011 8:54 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Yes, phrased that way it does sound similar! It’s ironic, then, that Zermelo’s axioms are so impredicative. I guess there are, as you say, different conceptions of “construction” or “derivation”
involved. Although I suppose a predicativist might argue that Zermelo and his school did not honestly follow the precept that “sets do not exist unless they can be derived from given sets by
axiomatically fixed rules” to its logical conclusion (namely predicativism).
On the other hand, if the main difference between Zermelo and Finsler set theory was well-foundedness, as Aczel seems to think, then one might argue that that has nothing to do with predicativism:
all four combinations of predicative/impredicative and well-founded/non-well-founded are perfectly consistent.
Posted by: Mike Shulman on February 14, 2011 7:24 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
I think of the underlying type-theoretic intuition as actually being a bit different from the set theoretic view. Informally, I would gloss it as “impredicativity is a safe method of definition for
sufficiently uniform definitions”.
The idea is that since the rules of type theory let you do nothing exciting to a term of variable type $\alpha$ (not even testing membership or equality), then a definition which is parametric in $\alpha$ will work for any possible choice of $\alpha$. (Hence the name “parametric polymorphism.”) This then justifies impredicative indexing over any possible type, including other polymorphic types
— the term simply doesn’t depend on the properties of the specific choice of type. This principle of uniform definition seems to me to differ from the size restrictions of set theory.
If I had infinite time, I would like to try developing category theory within an impredicative type theory. I have a vague hunch that many of the “big” constructions of category theory (e.g.,
involving functor categories) are so well-behaved that they are expressible using polymorphic types. For example, the Yoneda embedding is expressible as the isomorphism $\forall \alpha, \beta.\; (\alpha \to \beta) \Leftrightarrow (\forall \gamma.\; (\gamma \to \alpha) \to (\gamma \to \beta))$.
It would be fun to see what did and didn’t work.
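With the types erased, the two directions of that isomorphism are short enough to sketch in Python (illustrative only — Python neither checks the types nor enforces the parametricity that makes the two maps mutually inverse):

```python
# (a -> b)  ≅  forall c. (c -> a) -> (c -> b), with all types erased.
def to_yoneda(f):
    # Forward direction: postcomposition with f.
    return lambda g: lambda c: f(g(c))

def from_yoneda(h):
    # Backward direction: instantiate c := a and feed in the identity.
    return h(lambda x: x)

double = lambda x: 2 * x
round_trip = from_yoneda(to_yoneda(double))
print(round_trip(21))  # prints 42
```

That `from_yoneda` only needs the identity is exactly the uniformity point: a parametric $h$ cannot do anything with its argument except compose with it.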
Posted by: Neel Krishnaswami on February 15, 2011 4:38 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Interesting; I see what you mean. Is there any way that set-theoretic impredicativism (or some weaker form of it) could be interpreted in a similar way, do you think?
Posted by: Mike Shulman on February 16, 2011 3:35 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
For set theories of the strength of bounded ZF/topos logic, I’m confident it’s doable, since the calculus of constructions has the same consistency strength as higher-order arithmetic. Offhand, I
don’t know an elegant way to do it, since for obvious reasons the majority interest is in giving models of polymorphism in set theory, rather than the other way around.
For set theories with unbounded separation, I don’t know how. Logics such as Coq are equiconsistent with various extensions of ZFC, but these type theories are basically stratified systems with a
universe hierarchy.
Posted by: Neel Krishnaswami on February 17, 2011 5:31 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
I would be interested even in seeing an “inelegant” way to do it. Do you just mean “build the model of set theory inside type theory that you use to show it has the same consistency strength?”
Posted by: Mike Shulman on February 17, 2011 6:32 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Yes, that’s what I meant. IMO it’s not a construction that offers much enlightenment on this question.
I think it might be tricky to do well, too. I mentioned Andy Pitts’ paper, but I should give the reference explicitly, since it bears directly on this question:
“Nontrivial Power Types Cannot be Subtypes of Polymorphic Types”
This paper establishes a new, limitative relation between the polymorphic lambda calculus and the kind of higher-order type theory which is embodied in the logic of toposes. It is shown that any
embedding in a topos of the cartesian closed category of (closed) types of a model of the polymorphic lambda calculus must place the polymorphic types well away from the powertypes, P(X), of the
topos, in the sense that P(X) is a subtype of a polymorphic type only in the case that X is empty (and hence P(X) is terminal). As corollaries, we obtain strengthenings of Reynolds’ result on the
non-existence of set-theoretic models of polymorphism.
Post-hoc, it shouldn’t be surprising that combining two different techniques to tame unbounded impredicativity can subvert the invariants each relies on.
Posted by: Neel Krishnaswami on February 18, 2011 1:58 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Hi Neel, did you mean something analogous to this post? There it seems one can work with a universe, but since the universe you started with was arbitrary, then the results hold always. Mike explains
it much better after the link.
Posted by: David Roberts on February 16, 2011 3:48 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Yes, certainly! A predicative definition (predicativism is more about definitions, while constructivism is more about proofs, although of course these can't be completely disentangled) has a much
more concrete feel than an impredicative definition.
Compare these two (nonconstructive) proofs of the (classical) Intermediate Value Theorem (and so much for my claim that predicativism is more about definitions, since this example is a proof).
Theorem: If $f\colon [0,1] \to \mathbb{R}$ is continuous, $f(0) \lt 0$, and $f(1) \gt 0$, then $f(c) = 0$ for some $c$.
Proof 1: Let $c$ be $\sup \{ x \;|\; f(x) \leq 0 \}$, which exists (in $[0,1]$) since $[0,1]$ is compact. If $f(c) \lt 0$ or $f(c) \gt 0$, we use continuity to derive a contradiction (skipped here).
Therefore, $f(c) = 0$.
Proof 2: Let $a_0$ be $0$, let $b_0$ be $1$, and inductively define:
• $c_n = (a_n + b_n)/2$,
• $a_{n+1} = a_n$ if $f(c_n) \gt 0$ but $a_{n+1} = c_n$ if $f(c_n) \leq 0$,
• $b_{n+1} = b_n$ if $f(c_n) \lt 0$ but $b_{n+1} = c_n$ if $f(c_n) \geq 0$.
Then $(c_n)_n$ is a Cauchy sequence (by elementary algebra, skipped here), so has a limit $c$ (in $[0,1]$, since that is Cauchy complete), and $f(c) = 0$ (again by contradiction and continuity, skipped here).
Proof 1 is impredicative and abstract, while Proof 2 is predicative and concrete. In Proof 1, $c$ is basically defined as the largest number that will do; in Proof 2, $c$ is explicitly constructed.
You could try to program a computer to calculate $c$ (to any desired degree of accuracy) using Proof 2 as a guide; good luck trying that with Proof 1! (Because Proof 2 is not constructive, your
program may eventually have to give up and round off when trying to decide the sign of $f(c_n)$, but it'll be good enough for many purposes. Trying to overcome this limitation leads to various
constructive versions of the IVT.)
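Proof 2 translates almost line for line into a program; here is a sketch in Python (using floating point, so the sign test at the midpoint just rounds — exactly the caveat above):

```python
def ivt_root(f, a=0.0, b=1.0, tol=1e-9):
    """Bisection from Proof 2: assumes f(a) < 0 and f(b) > 0, and halves
    [a_n, b_n] by testing the (rounded) sign of f at the midpoint."""
    while b - a > tol:
        c = (a + b) / 2.0
        if f(c) > 0:
            b = c  # b_{n+1} = c_n
        else:
            a = c  # a_{n+1} = c_n
    return (a + b) / 2.0

# f(x) = x^3 + x - 1 has f(0) = -1 < 0 < 1 = f(1).
root = ivt_root(lambda x: x**3 + x - 1)
print(abs(root - 0.6823278038) < 1e-6)  # True: the real root of x^3 + x = 1
```

Good luck extracting an analogous program from Proof 1.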
Here's another example (a definition finally, and incidentally constructively valid either way).
We may define a smooth (or whatever) manifold as a set equipped with a covering collection of mutually compatible charts (called an atlas) and we wish to define the maximal atlas $M(A)$ associated to
the original atlas $A$. (This is allegedly useful since two atlases with the same maximal atlas are to be regarded as defining the same manifold structure, so that we may formally define a smooth
manifold as a set equipped with a maximal smooth atlas. Whether this is actually useful is another matter.)
Definition 1: $M(A)$ is the union of all those atlases $M$ with the property that every chart in $M$ is compatible with every chart in $A$.
Definition 2: $M(A)$ is the collection of all charts $C$ such that $C$ is compatible with every chart in $A$.
(In both cases, we should go on to prove the desired claims about $M(A)$, beginning with the fact that it is an atlas at all.)
I have seen Definition 1 in textbooks. Definition 2 strikes me as obviously superior: both simpler and more concrete. While Definition 1 is impredicative, Definition 2 is predicative. (Predicatively,
$M(A)$ may be a proper class even if $A$ is not, which kind of obviates the motivation for defining $M(A)$, but that was a silly motivation anyway. And even if you impredicatively accept that $M(A)$
is a small set, Definition 2 is still more concrete.)
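In a finite toy model (the charts and the compatibility relation below are entirely made up, and the requirement that an atlas cover anything is ignored) the two definitions can be compared directly, and Definition 2 is the one-line comprehension:

```python
from itertools import combinations

charts = ["u", "v", "w", "x"]
# A symmetric, made-up compatibility relation.
compatible = {("u", "v"), ("v", "u"), ("u", "w"), ("w", "u"),
              ("u", "u"), ("v", "v"), ("w", "w"), ("x", "x")}

A = {"u"}  # the given atlas

# Definition 2: all charts compatible with every chart in A.
def2 = {c for c in charts if all((c, a) in compatible for a in A)}

# Definition 1: the union of all collections M each of whose charts is
# compatible with every chart in A (quantifying over all subcollections).
subsets = [set(s) for r in range(len(charts) + 1)
           for s in combinations(charts, r)]
def1 = set().union(*(M for M in subsets
                     if all((c, a) in compatible for c in M for a in A)))

print(def1 == def2)  # True
```

The two agree, but Definition 1 had to quantify over all $2^4$ collections of charts to say the same thing Definition 2 says directly, which is the impredicativity in miniature.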
Posted by: Toby Bartels on January 25, 2011 6:29 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
As I said, there is a close connection between predicativity and geometricity, preservation under geometric morphisms. By working directly with a theory, rather than the set, or topological space, of
its models, we deal with a more robust notion. This saves a lot of bookkeeping.
In another direction, there have been heated discussions on the FOM-list on predicativity (in this context combined with classical logic). Nik Weaver claims that the spaces in functional analysis
that have a name are all predicatively definable, illustrating his point that real mathematicians (functional analysts) do not use impredicativity. As a warning I should add that there is a distinction between predicative (roughly, up to $\Gamma_0$) and generalized predicative (roughly, no power set).
Posted by: Bas Spitters on January 25, 2011 8:14 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
[The links above should be fixed; the desired URI is in the title instead of the href.]
I've heard ‘generalized predicative’ to mean no power sets but with function sets (in a constructive framework where this is even possible).
I've never understood where people get these ordinals.
Posted by: Toby Bartels on January 25, 2011 3:20 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
I don’t know how people decide what ordinals are the cap for predicative theories. However, Martin-Löf type theory has proof theoretic ordinal $\Gamma_0$ (apparently), yet has function types, which I
assume is similar in power to having function sets. And it’s usually considered predicative.
Beyond that, you can add arbitrary inductive-recursive definitions to such a theory and get something much stronger (I don’t think anyone knows the proof-theoretic ordinal), but I think the theory is
still arguably predicative. You cannot define a model of (constructive) set theory with a true power set like you can in Coq (I don’t have a proof, but my attempts have failed, and I’ve spent a fair
amount of time on them :)).
But then, there are folks who take predicative to be even more restrictive (like, ‘Heyting Arithmetic is impredicative due to the induction schema’).
Posted by: Dan Doel on January 25, 2011 11:22 PM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
The links above should be fixed; the desired URI is in the title instead of the href.
Apparently that’s what Markdown does if you put quotation marks around the URI, inside the parentheses. I.e. [link]("http://place") produces <a href="" title="http://place">link</a>.
I’ve fixed the links in the comment above.
Posted by: Mike Shulman on January 26, 2011 4:26 AM | Permalink | Reply to this
Re: Topos Theory Can Make You a Predicativist
Wonderful post! I was already suspicious of the subobject classifier in toposes; type theory is much more elegant without an equivalent of this (Prop, the type of propositions). While I had heard of
predicativism, I didn't know my suspicions were exactly in favour of that (I thought the very existence of the category of sets, even if it was a large category, was a nonpredicativistic perspective).
Additionally, I wondered why toposes have exactly the axioms they have – no more, no less. It turns out toposes aren't canonical objects at all, since weaker properties are enough to do anything
topos theorists want to do (Grothendieck toposes on the other hand…). Now that these inelegances are out of the way, I am much more interested in topos theory.
Posted by: Itai Bar-Natan on January 25, 2011 10:45 PM | Permalink | Reply to this
Re: st: Fisher's exact test in r x c contingency tables
From: "Neil Shephard" <mdeasnds@fs1.ser.man.ac.uk>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Fisher's exact test in r x c contingency tables
Date: Thu, 02 Jun 2005 09:49:37 +0100
In reply to: Marcus Keupp <marcus.keupp@unisg.ch>, "st: Fisher's exact test in r x c contingency tables", Thu, 2 Jun 2005 07:25:20 +0200
Send reply to: statalist@hsphsun2.harvard.edu
> Dear Listers,
> in my understanding Fisher's exact test (FET) is applicable in
> 2x2 contingency tables only. However, Stata help contends
> that "... Fisher's exact test ... may be applied to r x c tables
> as well as to 2x2 tables".
> I have heard of some algorithm in SAS that seems to apply FET
> to r x c tables, but I still wonder whether this is appropriate. Is
> Stata maybe computing something similar? The only difference in output
> is that in 2x2 tables, 1- and 2-sided p values are computed, whereas
> in r x c tables one (apparently one-sided) p value only is shown.
> What is your opinion? At the moment I rather think about applying
> tests that were designed for ordinal / ordinal relationships in r x c
> contingency tables (say, Goodman and Kruskal's gamma) than embarking
> on a potentially hazardous procedure.
The original reference for the exact test is, I believe, Example 1 in Fisher, R. A. (1935). The
logic of inductive inference. Journal of the Royal Statistical Society Series A 98, 39–54
(obtainable from the Statistics section of
http://www.library.adelaide.edu.au/digitised/fisher/papers.html which btw has virtually all
of Fisher's papers which make for some interesting reading).
Neither this paper nor Fisher's "Statistical Methods for Research Workers" (to the best
of my memory, I don't have a copy to hand at the moment) stipulates that the exact test
is only applicable to 2 x 2 tables, but back then computers weren't so readily available.
I believe the exact tests implemented in Stata are not exactly as described in the
above references, as the main problem with applying Fisher's exact test to r x c tables
is the exponential increase in computational time required for calculation (for further
details of exactly how the exact p-values are calculated in r x c tables see the
references at the end of the help for -tabulate-).
Personally I don't have a problem with applying the exact test to r x c tables, but I
wouldn't bother trying to apply it to anything with too many dimensions or too large a
sample size, as the computational time is too long (but see comments under exact
option in -tabulate-'s help).
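As a concrete illustration of what "exact" means here — not Stata's implementation, just a hedged sketch — the following conditions on the margins of Fisher's 1935 tea-tasting table, enumerates every 2 x 2 table with those margins, and sums the hypergeometric probabilities of the tables at least as extreme as the observed one:

```python
from math import comb

# Fisher's 1935 tea-tasting data; the 2 x 2 cells are a, b / c, d.
a, b, c, d = 3, 1, 1, 3
r1, r2 = a + b, c + d          # row totals
c1, n = a + c, a + b + c + d   # first column total, grand total

def prob(x):
    # hypergeometric probability of upper-left cell = x, margins fixed
    return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

p_obs = prob(a)
# two-sided p-value: sum over all tables no more probable than the observed
two_sided = sum(prob(x) for x in range(max(0, c1 - r2), min(r1, c1) + 1)
                if prob(x) <= p_obs + 1e-12)
print(two_sided)  # 34/70 ≈ 0.4857

# For r x c tables the same idea applies, but the number of tables with
# fixed margins explodes -- the computational problem mentioned above.
```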
Neil Shephard
Genetics Statistician
ARC Epidemiology Unit, University of Manchester
"If your result needs a statistician then you should design a better experiment" -
Ernest Rutherford
predicate forms implications
September 20th 2011, 04:01 PM #1
(a) Let $I$ be an interpretation, and let $v$ be an I-assignment.
Suppose that $I \models_v A$. We want $I \models_v \exists x A$. And we know that $I \models_v \exists x A$ iff there is some $d \in D$ with $I \models_{v \frac{d}{x}}A$. But how do we show that
we have such a d?
(b) Suppose $I \models_v \exists x Ax$; I have to show that $I \not\models_v Ax$. But I'm confused, because assuming $I \models_v \exists x Ax$ means there is some d in the domain with $I \models_{v \frac{d}{x}} Ax$. What could I do? Are there better ways to show that the implication doesn't hold?
Re: predicate forms implications
Again, one should use the property that if x is not free in B, then $I\models_{v\frac{d}{x}}B$ iff $I\models_v B$.
For (a), suppose that v maps x to d, i.e., $v=v'\frac{d}{x}$ for some smaller v'. Then $I\models_v A$ means $I\models_{v'\frac{d}{x}}A$, so $I\models_{v'}\exists x\,A$ and by the property above,
$I\models_v\exists x\,A$.
For (b), suppose that $I\models_v\exists x\,Ax$, which means that $I\models_{v\frac{d}{x}}Ax$ for some d. However, v does not have to map x to this particular d, so there is no reason for $I\models_v Ax$. You should construct a counterexample.
Re: predicate forms implications
To give a counter example, can $I$ be an interpretation holding iff the variable $x$ is even? Then $I \models_v \exists x A$ means that $I \models_{v \frac{d}{x}}Ax$ for some $d$ which is even. On
the right hand side of the implication we have $Ax$ without any quantifiers before it. So this $x$ doesn't have to be $d$, it could be an odd number $e$. Is this correct?
P.S. Normally, when we give counter-examples for implications involving 2 variables we use an interpretation such that $A^I(m,n)$ holds iff $m<n$; then it's easy to use natural numbers as the
domain to get a counter example. But since here we have a single variable, I'm not sure what kind of counter-example to use...
Re: predicate forms implications
Let the carrier of I be natural numbers and $A^I(m)$ hold iff m is even. Let also v map a single variable x to 3. Then $I\models_v\exists x\,Ax$, but $I\not\models_v Ax$.
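The counterexample above can be checked mechanically over a finite initial segment of the naturals — an illustrative sketch, not part of the thread:

```python
# Domain: a finite initial segment of the naturals; A(m) holds iff m is
# even; the assignment v sends x to 3.
domain = range(10)
A = lambda m: m % 2 == 0
v = {"x": 3}

exists_x_A = any(A(d) for d in domain)   # I |=_v  exists x. Ax
A_at_v = A(v["x"])                       # I |=_v  Ax

print(exists_x_A, A_at_v)  # True False: the implication fails
```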
MATLAB Second order nonlinear differential equation
December 15th 2008, 05:04 PM
d²θ/dt² + (c/(mL)) dθ/dt + (g/L) sin(θ) = 0
Initial Conditions:
How can this be solved in MATLAB? I am new to it and still not very good. Would ode solvers work? What other possible functions can I use? What does the code look like?
December 15th 2008, 05:17 PM
Matlab's ode solvers can only be used if you want to evaluate the solution at a specific point or a number of specific points. If you want an equation for the solution, you should take a look at
the dsolve function.
December 15th 2008, 07:00 PM
The ode solvers will give a numerical solution once you convert this into a first order system. See this thread.
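The reduction the reply refers to can be made concrete. The sketch below uses Python rather than MATLAB, with made-up constants and initial conditions (the original post gives neither); the point is only the rewrite y1 = θ, y2 = dθ/dt, after which any standard solver applies.

```python
from math import sin

# With y1 = theta and y2 = dtheta/dt, the equation
#   theta'' + (c/(m*L)) theta' + (g/L) sin(theta) = 0
# becomes the first-order system
#   y1' = y2,   y2' = -(c/(m*L)) y2 - (g/L) sin(y1).
m, L, c, g = 1.0, 1.0, 0.5, 9.81   # illustrative values only

def f(y):
    y1, y2 = y
    return (y2, -(c / (m * L)) * y2 - (g / L) * sin(y1))

# one classical RK4 step; in MATLAB you would hand f to ode45 instead
def rk4_step(y, h):
    k1 = f(y)
    k2 = f((y[0] + h/2 * k1[0], y[1] + h/2 * k1[1]))
    k3 = f((y[0] + h/2 * k2[0], y[1] + h/2 * k2[1]))
    k4 = f((y[0] + h * k3[0], y[1] + h * k3[1]))
    return (y[0] + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y[1] + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

y, h = (1.0, 0.0), 0.01        # theta(0) = 1 rad, released from rest
for _ in range(2000):          # integrate to t = 20
    y = rk4_step(y, h)
# with damping (c > 0) the swing amplitude should have decayed
print(abs(y[0]) < 1.0)
```

In MATLAB the equivalent would be `f = @(t,y) [y(2); -(c/(m*L))*y(2) - (g/L)*sin(y(1))];` handed to `ode45(f, tspan, [theta0; 0])`.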
[GAME] Digit Time – released
Viewing 6 posts - 1 through 6 (of 6 total)
Author Posts
May 15, 2012 at 12:20 pm #241525
Digit Time is a simple match three game with a mathematical twist. Easy to play, the only skills you need are addition, subtraction and an eye for finding matches. Play by finding three
gryphon digit tiles in a row. Two of the tiles need to be identical and a third tile should combine with a neighbor with addition or subtraction to equal the other two tiles. Race to find as many
matches as possible before time runs out.
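The matching rule described above can be sketched as a small predicate; the function name and tile representation below are hypothetical, not taken from the game's actual code.

```python
def is_match(a, b, c, neighbor):
    """Two identical tiles (a, b) plus a third tile c that combines with
    a neighbor by addition or subtraction to equal the other two."""
    if a != b:
        return False
    return c + neighbor == a or abs(c - neighbor) == a

print(is_match(5, 5, 2, 3))  # 2 + 3 == 5 -> True
print(is_match(4, 4, 7, 3))  # 7 - 3 == 4 -> True
print(is_match(6, 6, 2, 3))  # neither 5 nor 1 equals 6 -> False
```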
May 15, 2012 at 1:34 pm #377160
My computer says your link is broken.
May 15, 2012 at 1:36 pm #377161
Thanks. That’s what I get for not proof reading.
May 15, 2012 at 1:41 pm #377162
I looked at it and thought it was a clone of another game I've seen, but then I saw the addition and subtraction. That is a cool idea.
May 15, 2012 at 2:08 pm #377163
Thanks. I hoped it was fairly original.
May 15, 2012 at 3:17 pm #377164
Hire a designer
What is the cross-product of Z scores? - Docsity Answers: questions and answers (q&a) from college students
"The cross product a × b is defined as a vector c that is perpendicular to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the
vectors span. "
"In mathematics, the cross product, vector product, or Gibbs' vector product is a binary operation on two vectors in three-dimensional space. It results in a vector which is perpendicular to both of
the vectors being multiplied and therefore normal to the plane containing them. It has many applications in mathematics, physics, and engineering. "
"The cross-product of Z scores is the result of multiplying a person's Z score on one variable by the person's Z score on another variable."
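The reason this quantity matters: averaging the cross-products of z-scores over a sample reproduces the Pearson correlation coefficient. A minimal sketch with illustrative data (using the population standard deviation):

```python
from statistics import mean, pstdev

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 1.0, 4.0, 3.0, 5.0]

def z(values):
    # standardize: subtract the mean, divide by the population SD
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

cross_products = [zx * zy for zx, zy in zip(z(x), z(y))]
r = mean(cross_products)   # Pearson correlation
print(round(r, 4))  # 0.8
```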
What is the cross-product of Z scores?
"The cross product, also called the vector product, is an operation on two vectors. The cross product of two vectors produces a third vector which is perpendicular to the plane in which the first two
lie. That is, for the cross of two vectors, A and B, we place A and B so that their tails are at a common point. Then, their cross product, A x B, gives a third vector, say C, whose tail is also at
the same point as those of A and B. The vector C points in a direction perpendicular (or normal) to both A and B. The direction of C depends on the Right Hand Rule. "
"To emphasize the fact that the result of a dot product is a scalar, while the result of a cross product is a vector, Gibbs also introduced the alternative names scalar product and vector product for
the two operations. These alternative names are still widely used in the literature. "