PowerPoint Presentations
The t-test - University of South Florida PPT
Presentation Summary : The t-test Inferences about Population Means Questions How are the distributions of z and t related? Given that construct a rejection region.
Source : http://luna.cas.usf.edu/%7Embrannic/files/regression/7%20The%20t-test.ppt
Presentation Summary : Title: PowerPoint Presentation Author: Mischel Lab Last modified by: Lara Kammrath Created Date: 7/21/2003 4:10:22 PM Document presentation format
Source : http://www.columbia.edu/cu/psychology/courses/S1610q/lectures/08TwoSampleTests.ppt
Presentation Summary : Title: PowerPoint Presentation Author: Del Siegle Last modified by: Del Siegle Created Date: 11/30/2000 4:15:10 AM Document presentation format: On-screen Show
Source : http://www.gifted.uconn.edu/siegle/research/t-test/ttest.pps
Presentation Summary : P-value=.12 Are the observations independent or correlated? Paired ttest: compares means between two related groups (e.g., the same subjects before and after) ...
Source : http://www.stanford.edu/%7Ekcobb/hrp259/lecture10.ppt
Independent Samples t-Test (or 2-Sample t-Test) PPT
Presentation Summary : Independent Samples t-Test (or 2-Sample t-Test) Advanced Research Methods in Psychology - lecture - Matthew Rockloff When to use the independent samples t-test The ...
Source : http://psychologyaustralia.homestead.com/index_files/04a_Independent_Sample_t-Test.ppt
T-Tests in SAS - School of Public Health PPT
Presentation Summary : PROC TTEST DATA = response; TITLE 'Two-sample T-test example'; class group; var time; RUN; Title: T-Tests in SAS Author: Preferred Customer Last modified by: Stelke
Source : http://www.biostat.umn.edu/~susant/Lab6415/Lab3.ppt
Presentation Summary : ... a ttest is inappropriate it takes longer for the CLT to kick in and the sample means do not immediately follow a t-distribution… This is the source of the ...
Source : http://www.stanford.edu/%7Ekcobb/hrp259/lecture8.ppt
Presentation Summary : The t-test Introduction to Using Statistics for Hypothesis Testing Overview of the t-test The t-test is used to help make decisions about population values.
Source : http://luna.cas.usf.edu/%7Embrannic/files/resmeth/lecture/7_ttest.ppt
Two Sample t-Test Practice – 1 - University of Texas at Austin PPT
Presentation Summary : Title: Two Sample t-Test Practice – 1 Author: Kim, Jinseok Last modified by: js Created Date: 4/8/2004 3:45:46 PM Document presentation format: On-screen Show
Source : http://www.utexas.edu/courses/schwab/sw318_spring_2004/SolvingProblems/Class22_TwoSamplesTTest.ppt
Presentation Summary : ... Useful in real-life research ttest varname= µ STATA: obtaining the critical value Example: Concentration of benzene in cigars Hypothesis Test: ...
Source : http://isites.harvard.edu/fs/docs/icb.topic148570.files/Presentation_3-_t_test.ppt
SAS_Enterprise_Guide_Two_sample_t-test.ppt PPT
Presentation Summary : One sample t-test Tests the mean of a single sample against an hypothesized value From the Task List select t-test Select as t Test type the One Sample t-test Select ...
Source : http://168.105.175.200/davis/02_16_06/SAS_Enterprise_Guide_Two_sample_t-test.ppt
Presentation Summary : The TTEST command can also be used for comparing dependent samples when the samples are proportions. Assume you have downloaded a data set with case-by-case data for ...
Source : http://www.bsos.umd.edu/socy/vanneman/socy601/601class10.ppt
Presentation Summary : Dependent t-Test CJ 526 Statistical Analysis in Criminal Justice Overview Dependent Samples Repeated-Measures When to Use a Dependent t-Test Two Dependent Samples ...
Source : http://cstl-hhs.semo.edu/cveneziano/Dependent-t-Test.ppt
Presentation Summary : T-TEST Research Methods University of Massachusetts at Boston ©2006 William Holmes T-TEST: COMPARING TWO GROUP MEANS Set criterion (alpha level).
Source : http://www.faculty.umb.edu/william.holmes/ttest.ppt
Presentation Summary : Idea: we look at the difference between the response from trts A and B: Di=YiA-YiB SAS paired test proc ttest data=newhorses; paired MobilityA*MobilityB; run ...
Source : http://www.lisa.stat.vt.edu/sites/default/files/anova_t-tests_0.ppt
T-Test (difference of means test) - CSU Bakersfield PPT
Presentation Summary : T-Test (difference of means test) T-Test = used to compare means between two groups. Level of measurement: DV (Interval/Ratio) IV (Nominal—groups)
Source : http://www.csub.edu/~pjennings/stat.400/ttest.ppt
Presentation Summary : Title it T-test Click on the next open cell next to the temperature readings at 18 degrees Go up to fx and choose TTEST, hit ok Array 1 is the readings (of 100%) ...
Source : http://users.bergen.org/donleo/EXPTECH/MARCH4th/Tests%20for%20Significance.ppt
Presentation Summary : Key assumptions of linear models Assumptions for linear models (ttest, ANOVA, linear correlation, linear regression, paired ttest, repeated-measures ANOVA, ...
Source : http://zimmer.csufresno.edu/%7Elburger/Math%20137+type1_2%20error.ppt
Statistics with SAS - University of Rhode Island PPT
Presentation Summary : Proc ttest This procedure is used to test the hypothesis of equality of means for two normal populations from which independent samples have been obtained.
Source : http://web.uri.edu/its/files/ppt/statistics.sas.ppt
SESSION # 8 - Web Posting Information PPT
Presentation Summary : Testing Statistical Hypothesis The One Sample t-Test Heibatollah Baghi, and Mastee Badii Parametric and Nonparametric Tests Parametric tests estimate at least one ...
Source : http://gunston.gmu.edu/healthscience/597/One%20Sample%20t-test.PPT
Presentation Summary : ... set start; proc ttest; paired rockNPP*sandNPP; run; -Example code for paired test -make sure they line up by appropriate pairing unit significance level ...
Source : http://www.eeescience.utoledo.edu/Faculty/Mayer/Biostats%20ppt%2006/2006/paired%20t-test%20and%20power.ppt
Presentation Summary : ... Ttest ANOVA Linear correlation Linear regression Paired ttest Repeated-measures ANOVA Mixed models/GEE modeling Outcome is normally distributed ...
Source : http://community.mis.temple.edu/mis2502sec003s11/files/2011/04/regression-instruction.ppt
Introduction to SAS - Madan Gopal Kundu PPT
Presentation Summary : module 6: sas/stat 16july2009 topics sas/stat proc means proc univariate proc freq proc corr proc ttest proc reg proc anova proc glm proc mixed proc univariate ...
Source : http://mgkundu.webs.com/Day%206.ppt
Presentation Summary : ... command for ttest, using a data set . * next, perform the t-test . * ttest for two means using a stored data set . ttest impscore, by(treat) ...
Source : http://www.bsos.umd.edu/socy/vanneman/socy601/601class09.ppt
Frequency Distributions - University of Texas at Austin PPT
Presentation Summary : Paired-Samples T-Test of Population Mean Differences Key Points about Statistical Test Sample Homework Problem Solving the Problem with SPSS Logic for Paired-Samples ...
Source : http://www.utexas.edu/courses/schwab/sw388r6_fall_2006/SolvingProblems/Homework%20Problems%20-%20Paired%20Samples%20T-Test.ppt
How much power can you get from a windmill? Windmills have long been used to pump water from wells, and are now being used to generate electricity on a somewhat wide scale. The most efficient
windmills are a little under 50% efficient, and it is physically impossible to exceed 59%. For an excellent windmill (50% efficiency) the power available is given approximately by P ≈ R²v³ watts,
where R is the radius of the propeller (half the diameter) in meters, and v is the wind speed in m/s. The symbol ≈ means that the power is approximately equal to the value calculated on the right; however,
the formula is correct only for the very best wind turbines, and only for moderate wind speeds.
Most wind systems are designed to produce constant power above a certain wind speed. For example, a 200-kW system will produce power according to that formula until the wind speed reaches, say, 14 m/
s, after which the wind turbine produces 200 kilowatts for all wind speeds until, say, 25 m/s, after which the machine must be shut down lest it blow apart.
The formula is an overestimate for all but the very best wind turbines. For example, one popular 1-kW unit produces only half as much power as the formula predicts.
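The behavior described above can be sketched in Python. The full expression behind the approximation is the standard one, P = η · ½ρπR²v³ (with sea-level air density ρ ≈ 1.225 kg/m³); the function names, the default rated/cut-out speeds, and the 8.7 m example radius are my own illustrative assumptions, not figures from the article.

```python
import math

AIR_DENSITY = 1.225  # kg/m^3, sea-level air

def wind_power_watts(radius_m, wind_speed_ms, efficiency=0.5):
    """Power captured by a turbine: P = efficiency * 1/2 * rho * pi * R^2 * v^3.

    efficiency=0.5 models the "excellent windmill" of the article;
    the Betz limit makes anything above roughly 0.59 physically impossible.
    """
    swept_area = math.pi * radius_m ** 2
    return efficiency * 0.5 * AIR_DENSITY * swept_area * wind_speed_ms ** 3

def system_output_watts(radius_m, v, rated_watts,
                        rated_speed=14.0, cutout_speed=25.0):
    """Constant rated power above rated_speed; shut down above cutout_speed."""
    if v >= cutout_speed:
        return 0.0  # machine shut down lest it blow apart
    return min(wind_power_watts(radius_m, v), rated_watts)
```

Under this formula a rotor of roughly 8.7 m radius reaches about 200 kW at 14 m/s, consistent with the 200-kW example above.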
California ... California has some huge windmills ---some 3200 of them --- covering mountain sides in their windy areas. (Tehachapi, Altamont Pass, San Gorgonio) All together, they produce --- at a
rare full wind --- about 300 MW, which is about 1/4 as much power as a moderately large nuclear power plant produces, and is less than 10% of the electricity the small state of Connecticut consumes.
[Haskell-cafe] Re: Numeric type classes
Lennart Augustsson lennart at augustsson.net
Wed Sep 13 08:36:15 EDT 2006
The sum function really only needs the argument list to be a monoid.
And the same is true for the product function, but with 1 and * as
the monoid operators. Sum and product are really the same function. :)
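In code the point reads like this (a minimal sketch; the Sum and Product newtype wrappers come from Data.Monoid in later versions of base, so treat this as an illustration rather than something from the 2006 standard library):

```haskell
import Data.Monoid (Sum (..), Product (..), mconcat)

-- "Sum and product are really the same function": one monoidal fold,
-- instantiated at the (0, +) monoid and at the (1, *) monoid.
sumM :: Num a => [a] -> a
sumM = getSum . mconcat . map Sum

productM :: Num a => [a] -> a
productM = getProduct . mconcat . map Product
```

For example, sumM [1,2,3] and productM [1,2,3] both evaluate to 6: the same fold, different monoids.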
I don't think Haskell really has the mechanisms for setting up an
algebraic class hierarchy the right way. Consider some classes we
might want to build:
The problem is that going from, say, AbelianMonoid to SemiRing you
want to add a new Monoid (the multiplicative) to the class. So
SemiRing is a subclass of Monoid in two different way, both for + and
for *.
I don't know of any nice way to express this in Haskell.
-- Lennart
On Sep 13, 2006, at 03:26 , ajb at spamcop.net wrote:
> G'day all.
> Quoting Jason Dagit <dagit at eecs.oregonstate.edu>:
>> I was making an embedded domain specific language for excel
>> spreadsheet formulas recently and found that making my formula
>> datatype an instance of Num had huge pay offs.
> Just so you know, what we're talking about here is a way to make that
> even _more_ useful by dicing up Num.
>> I can even use things like Prelude.sum to
>> add up cells.
> Ah, but the sum function only needs 0 and (+), so it doesn't need
> the full power of Num. It'd be even _more_ useful if it worked on
> all data types which supported 0 and (+), but not necessarily (*):
> sum :: (AdditiveAbelianMonoid a) => [a] -> a
> product :: (MultiplicativeAbelianMonoid a) => [a] -> a
> Those are bad typeclass names, but you get the idea.
> Right now, to reuse sum, people have to come up with fake
> implementations for Num operations that simply don't make sense on
> their data type, like signum on Complex numbers.
>> All I really needed was to define Show and Num
>> correctly, neither of which took much mental effort or coding
>> tricks.
> You also needed to derive Eq, which gives you, in your case,
> structural
> equality rather than semantic equality (which is probably
> undecidable for
> your DSL).
> Cheers,
> Andrew Bromage
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
More information about the Haskell-Cafe mailing list
Free Sudoku Solver
Four Techniques to Solve Sudoku
Sudoku is a puzzle of pure logic: no arithmetic or guessing is required. The basic idea is to find cells (the small squares) where only one value is a valid placement. The rules are to fill a number into every cell of the grid, using the numbers 1 to 9, with the constraint that each number appears only once in each row, each column, and each of the 3×3 boxes. The four techniques below can help you solve Sudoku puzzles more easily and quickly.
Single Position
This is the simplest technique to apply by eye. Select a row, column or box, and go through each of the numbers that has not already been placed. Because of other placements, the positions where the number could go will be limited: often there will be two or three valid places, but sometimes only one. If you have narrowed it down to a single valid place, you can fill the number straight in, since it cannot go anywhere else. This technique is sometimes called a "Hidden Single" when the value is hidden alongside other possible candidates, but you should still be able to spot it.
Single Candidate
This technique assumes you are using pencil marks to record which candidates are still possible in each cell. If examining a cell's row, column and box rules out every possibility but one, fill in that remaining number. Once you have filled these in, you will soon see many more single candidates, and with this technique alone you may be able to complete many of the simpler Sudoku puzzles.
If you are using a computer program to assist you, you will probably make most of your placements with this method. If you are keeping pencil marks by hand, double-check that you have filled them all in; otherwise you might make a placement that is not valid. Going a little further, there are extra techniques that help you either find valid placements or remove pencil marks. These are certainly tricky to manage without pencil marks.
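The single-candidate rule is straightforward to mechanize. A minimal Python sketch (my own illustration, not code from any particular solver; cells hold 1–9, with 0 meaning empty):

```python
def candidates(grid, r, c):
    """Values still legal for empty cell (r, c) of a 9x9 grid (0 = empty)."""
    if grid[r][c]:
        return set()  # already filled
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)  # top-left corner of the 3x3 box
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set(range(1, 10)) - used

def fill_single_candidates(grid):
    """Repeatedly fill every cell whose row, column and box leave one option."""
    progress = True
    while progress:
        progress = False
        for r in range(9):
            for c in range(9):
                cand = candidates(grid, r, c)
                if len(cand) == 1:
                    grid[r][c] = cand.pop()
                    progress = True
    return grid
```

On simpler puzzles this loop alone can finish the grid; harder ones need the other techniques as well.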
Candidate Line
This technique does not tell you where to place a number; instead it identifies places where you cannot place one. If you are using pencil marks, it will help you remove candidates, and from there you should be able to make placements. If, within a box, all of the places where a particular number can go lie along a single line, then wherever the number ends up in that box, it must be on that line, so it can be eliminated as a candidate from the rest of that line in the neighbouring boxes.
Double Pair
This technique relies on spotting two pairs of candidates for a value, and using these to rule out candidates from other boxes. This technique is logically easy to spot because you only need to see
candidate pairs in two blocks.
Comparison and distinction of a caricature of a KCM with a LGM. In the KCM, any configuration is allowed, but move may only be made if a particle has at least one missing neighbor before and after
the move. In the LGM, the global configuration is defined such that all particles must have at least one missing neighbor, and all dynamical moves must respect this rule. Note that the local
environment around the moving particle is identical in this example, while the global configurations are distinct. Periodic boundary conditions are assumed for both panels.
Crystallization thermodynamics in LGM. Top: The t154 model. refers to the chemical potential of the type 1 particles. The maximum density observed for the lattice is 0.5479 (exactly 1849 out of 3375
lattice sites occupied). The three plotted quenching rates vary between a 0.01 and 0.05 increase of per 10000 cycles. Bottom: A close up of the equivalent plot for the BM model. Note the clear
discontinuity upon crystallization. Slower -increase rates produce a sharper discontinuity.
Decay of the self-intermediate scattering function for . Densities are 0.3, 0.4, 0.45, 0.48, 0.50, 0.51, 0.52, 0.53, 0.535, 0.5375, 0.5400, 0.5425 from fastest relaxation to slowest relaxation. These
densities are used in all plots in this paper unless otherwise indicated. Top: Plotted on a linear-log scale. Bottom: Same data as upper panel plotted on a vs scale. Lowest density curves are at the
top left.
Top: (time at which ) as a function of density, . Plotted for , with lowest at the top. Center: Beta stretching exponent of [from terminal fits ]. Lowest curve is at the top of the plot. Bottom: Plot
of log scale against chemical potential of type 2 particles. The behavior is consistent with .
Examples of stringlike motion apparent in the t154 model. (a) An example of a string with all neighboring particles removed. (b) A similar string in the context of other particles. Note that the
string here is truly isolated in space, away from other mobile particles. In these figures, type 1 particles are white, type 2 particles are blue, and type 3 are green. Sites occupied at the initial
time but vacated at the final time are shown in red. These pictures show only the differences in position of particles between the origin of time and the final time, not the path the particles took
to achieve that displacement. All figures are at a density of 0.5400, with times in (a) 251, (b) 199 526. The -relaxation time for at this density is about .
Examples cluster shapes in the (a) the t154, model, density and (b) the KA model, density . Arrows indicate motion between initial and final times. Time separation is 1/10th of the -relaxation time.
In the t154 model, we see more fractal and disconnected clusters, while in the KA model, mobile domains tend to be smoother clusters.
Violation of the Stokes–Einstein relation, , using at . Data has been normalized to at the lowest density.
van Hove function for and various times. Distances are measured independently along each coordinate axis. The times plotted, from left to right, are , 316227 (approximately the -relaxation time), and
. An exponential fit to the tail of the case is shown by a dotted line.
-dependent diffusion . Densities of 0.3000 (upper) and 0.5425 (lower). The higher density curve is multiplied by a scale factor of for ease of comparison. A dotted flat line is included for reference
of behavior expected in the purely Fickian case.
Top: Plot of at for densities 0.51, 0.52, 0.53, and 0.54. Bottom: Plot of for the same densities. Peak values correspond to lower bounds for of the value of in the upper panel at .
Sloane, Neil J. A. - AT&T Labs
• The Cell Structures of Certain Lattices* J. H. Conway
• A Highly Symmetric Four-Dimensional Quasicrystal* Veit Elser and N. J. A. Sloane
• Multiple Description Vector Quantization with Lattice Codebooks: Design and Analysis
• On Asymmetric Coverings and Covering Numbers David Applegate, E. M. Rains 1 and N. J. A. Sloane
• A New Approach to the Construction of Optimal Designs* R. H. Hardin
• A New Upper Bound for the Minimum of an Integral Lattice of Determinant One*
• Soft Decoding Techniques for Codes and Lattices, Including the Golay Code and the Leech Lattice *
• Some Canonical Sequences of Integers \Lambda M. Bernstein and N. J. A. Sloane
• Quaternary Constructions for the Binary Single-Error-Correcting Codes of Julin, Best and Others
• Quantizing Using Lattice Intersections N. J. A. Sloane
• [1] A. Bonnecaze, A. R. Calderbank and P. Sol'e, Quaternary quadratic residue codes and unimodular lattices, IEEE Trans. Inform. Theory, 41 (1995), 366--377 and 1536.
• The EKG Sequence J. C. Lagarias, E. M. Rains and N. J. A. Sloane
• Codes from Symmetry Groups, and a [32, 17, 8] Code* Ying Cheng**
• A New Upper Bound for the Minimum of an Integral Lattice of Determinant One*
• McLaren's Improved Snub Cube and Other New Spherical Designs in Three Dimensions
• Inorg. Chem. 1985, 24, 4545-4558 4545 Contribution from AT&T Bell Laboratories,
• Asymmetric Multiple Description Lattice Vector Suhas N. Diggavi \Lambda , N. J. A. Sloane, and Vinay A. Vaishampayan
• A New Operation on Sequences: The Boustrophedon J. Millar, N. J. A. Sloane and N. E. Young
• Low-Dimensional Lattices IV: The Mass Formula
• The Antipode Construction for Sphere Packings J. H. Conway
• The Solution to Berlekamp's Switching Game* P. C. Fishburn and N. J. A. Sloane
• A STRENGTHENING OF THE ASSMUS-MATTSON THEOREM* A. R. Calderbank
• Interleaver design for Turbo codes H. R. Sadjadpour, N. J. A. Sloane, and G. Nebe, M. Salehi
• A Linear Construction for Certain Kerdock and Preparata Codes*
• A GroupTheoretic Framework for the Construction of Packings in Grassmannian Spaces
• McLaren's Improved Snub Cube and Other New Spherical Designs in Three Dimensions
• On the Apparent Duality of the Kerdock and Preparata Codes ?
• Doc. Math. J. DMV 1 The Sphere Packing Problem
• The Binary Self-Dual Codes of Length Up To 32: A Revised Enumeration* J. H. Conway
• A Family of Optimal Packings in Grassmannian Manifolds P. W. Shor and N. J. A. Sloane
• The Antipode Construction for Sphere Packings J. H. Conway
• SelfDual Codes over the Integers Modulo 4* J. H. Conway
• The Automorphism Group of an [18, 9, 8] Quaternary Code * Ying Cheng**
• The Nordstrom-Robinson Code is the Binary Image of the Octacode
• Lattices with Few Distances* J. H. Conway
• The Cell Structures of Certain Lattices * J. H. Conway
• LowDimensional Lattices V: Integral Coordinates for Integral Lattices*
• The NordstromRobinson Code is the Binary Image of the Octacode #
• Unsolved Problems in Graph Theory Arising from the Study of Codes* N. J. A. Sloane
• Interleaver Design for Turbo Codes H. R. Sadjadpour, N. J. A. Sloane, M. Salehi, and G. Nebe
• The Shadow Theory of Modular and Unimodular Lattices E. M. Rains and N. J. A. Sloane
• [14] B. W. Jones, The Arithmetic Theory of Quadratic Forms, Math. Assoc. America, 1950. [15] R. A. Rankin, A minimum problem for the Epstein zetafunction, Proc. Glasgow Math.
• Four Icosahedra Can Meet at a Point* Henri Rossat
• SIAM J. ALG. DISC. METH. Vol. 1, No. 4, December 1980
• The Optimal Lattice Quantizer in Three Dimensions1 E. S. Barnes
• Cyclic Self-Dual Codes1 N. J. A. Sloane
• A Monster Lie Algebra? R. E. Borcherds, J. H. Conway, L. Queen and N. J. A. Sloane
• R. L. Graham and N. J. A. Sloane Mathemutics and Statistics Research Center
• Soft Decoding Techniques for Codes and Lattices, Including the Golay Code and the Leech Lattice*
• Lexicographic Codes: Error-Correcting Codes from Game Theory* J. H. Conway
• An improvement to the Minkowski-Hlawka bound for packing superballs*
• The Automorphism Group of an [18, 9, 8] Quaternary Code* Ying Cheng**
• A [45, 13] Code with Minimal Distance 16* J. H. Conway
• Low-Dimensional Lattices V: Integral Coordinates for Integral Lattices*
• A New Upper Bound on the Minimal Distance of Self-Dual Codes* J. H. Conway
• Unsolved Problems in Graph Theory Arising from the Study of Codes* N. J. A. Sloane
• Low-Dimensional Lattices VI: Voronoi Reduction of Three-Dimensional J. H. Conway
• Computer-Generated Minimal (and Larger) Response-Surface Designs: (I) The Sphere
• Computer-Generated Minimal (and Larger) Response-Surface Designs: (II) The Cube
• Codes (Spherical) and Designs (Experimental) R. H. Hardin and N. J. A. Sloane
• Algebraic Description of Coordination Sequences and Exact Topological Densities for Zeolites
• Multiple Description Vector Quantization with Lattice Codebooks: Design and Analysis
• A Zador-Like Formula for Quantizers Based on Periodic Tilings N. J. A. Sloane and Vinay A. Vaishampayan
• Clifford-Weil groups of quotient representations. Annika Gunther and Gabriele Nebe
• Unsolved Problems Related to the Covering Radius of Codes* N. J. A. Sloane
• LowDimensional Lattices I: Quadratic Forms of Small Determinant
• The Optimal Lattice Quantizer in Three Dimensions 1 E. S. Barnes
• Algebraic Description of Coordination Sequences and Exact Topological Densities for Zeolites
• The Number of Hierarchical Orderings N. J. A. Sloane
• [1] J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups, Second edition, SpringerVerlag, New York, 1993.
• ComputerGenerated Minimal (and Larger) ResponseSurface Designs: (I) The Sphere
• Unsolved Problems Related to the Covering Radius of Codes * N. J. A. Sloane
• New Trellis Codes Based on Lattices and Cosets * A. R. Calderbank and N. J. A. Sloane
• The Solution to Berlekamp's Switching Game * P. C. Fishburn and N. J. A. Sloane
• Low-Dimensional Lattices I: Quadratic Forms of Small Determinant
• [13] M. Klemm, Selbstduale Codes uber dem Ring der ganzen Zahlen modulo 4, Archiv Math.,
• ComputerGenerated Minimal (and Larger) ResponseSurface Designs: (II) The Cube
• Cyclic SelfDual Codes 1 N. J. A. Sloane
• [Map] B. W. Char et al., Maple V language reference manual, SpringerVerlag, NY, 1991. [CoM] H. S. M. Coxeter and W. O. J. Moser, Generators and relations for discrete groups,
• The Number of Hierarchical Orderings N. J. A. Sloane
• The Primary Pretenders John H. Conway, Richard K. Guy, W. A. Schneeberger & N. J. A. Sloane
• Lattices with Few Distances* J. H. Conway
• A ZadorLike Formula for Quantizers Based on Periodic Tilings N. J. A. Sloane and Vinay A. Vaishampayan
• ISIT2000, Sorrento, Italy, June 25--30, 2000 Asymptotic Performance of Multiple Description
• Multiple Description Lattice Vector Quantization \Lambda Sergio D. Servetto y Vinay A. Vaishampayan z N. J. A. Sloane z
• On the Covering Radius of Codes * R. L. Graham
• Interleaver design for short block length Turbo codes H. R. Sadjadpour, M. Salehi, N. J. A. Sloane, and G. Nebe
• Kepler Confirmed N. J. A. Sloane
• Lexicographic Codes: ErrorCorrecting Codes from Game Theory * J. H. Conway
• A Lower Bound on the Average Error of Vector Quantizers
• On the Covering Radius of Codes* R. L. Graham
• Quantum Error Correction and Orthogonal Geometry A. R. Calderbank, 1 E. M. Rains, 2 P. W. Shor, 1 and N. J. A. Sloane 1
• Four Icosahedra Can Meet at a Point * Henri Rossat
• On Kissing Numbers in Dimensions 32 to 128 Mathematisches Institut der Universitat
• Packing Lines, Planes, etc.: Packings in Grassmannian Spaces J. H. Conway
• On the Covering Multiplicity of Lattices* J. H. Conway
• LowDimensional Lattices IV: The Mass Formula
• Anti-Hadamard Matrices* R. L. Graham and N. J. A. Sloane
• LowDimensional Lattices VI: Voronoi Reduction of ThreeDimensional J. H. Conway
• A Nonadditive Quantum Code Eric M. Rains, R. H. Hardin, Peter W. Shor, and N. J. A. Sloane
• [38] M. Yamada, Distanceregular digraphs of girth 4 over an extension ring of Z=4Z, Graphs and Combinatorics, 6 (1990), 381--394.
• A [45, 13] Code with Minimal Distance 16* J. H. Conway
• A Nonadditive Quantum Code Eric M. Rains, R. H. Hardin, Peter W. Shor, and N. J. A. Sloane
• Self-Dual Codes over the Integers Modulo 4* J. H. Conway
• Z4-Linearity of Kerdock, Preparata, Goethals and Related Codes A. Roger Hammons, Jr.**
• [1] E. F. Assmus, Jr., and H. F. Mattson, Jr., personal communication. [2] J. H. Conway and N. J. A. Sloane, SpherePackings, Lattices and Groups, SpringerVerlag,
• Mukerjee, R., & Wu, C. F. J. (1995). On the existence of saturated and nearly saturated asymmetrical orthogonal arrays. Ann. Statist. To appear.
• [Ha92] T. C. Hales, The sphere packing problem, J. Computational and Applied Math., 44 (1992), 41--76.
• AntiHadamard Matrices * R. L. Graham and N. J. A. Sloane
• A Lower Bound on the Average Error of Vector Quantizers
• Codes from Symmetry Groups, and a [32, 17, 8] Code* Ying Cheng**
• A Monster Lie Algebra? R. E. Borcherds, J. H. Conway, L. Queen and N. J. A. Sloane
• Design of Asymmetric Multiple Description Lattice Vector Quantizers
• UPDATED TABLES OF PARAMETERS OF (T; M;S)NETS Andrew T. Clayman, K. Mark Lawrence, Gary L. Mullen,
• PennyPacking and TwoDimensional Codes * R. L. Graham and N. J. A. Sloane
• A New Upper Bound on the Minimal Distance of SelfDual Codes* J. H. Conway
• A STRENGTHENING OF THE ASSMUSMATTSON THEOREM* A. R. Calderbank
• Efficient Regression Verification R. H. Hardin*
• A New Operation on Sequences: The Boustrophedon Transform #
• Quaternary Constructions for the Binary SingleErrorCorrecting Codes of Julin, Best and Others #
• On the Existence of Similar Sublattices J. H. Conway
• 4 M. BERNSTEIN AND N. J. A. SLOANE [C] J.H. Conway and N.J.A. Sloane, On Lattices Equivalent to Their Duals, Journal of Number
• Interleaver design for short block length Turbo codes H. R. Sadjadpour, M. Salehi, N. J. A. Sloane, and G. Nebe
• On the Existence of Similar Sublattices J. H. Conway
• An improvement to the MinkowskiHlawka bound for packing superballs *
• The Z4-Linearity of Kerdock, Preparata, Goethals and Related Codes* A. Roger Hammons, Jr.**
• A Highly Symmetric FourDimensional Quasicrystal * Veit Elser and N. J. A. Sloane
• The EKG Sequence J. C. Lagarias, E. M. Rains and N. J. A. Sloane
• On the Covering Multiplicity of Lattices* J. H. Conway
• On Asymmetric Coverings and Covering Numbers David Applegate, E. M. Rains1 and N. J. A. Sloane
• Penny-Packing and Two-Dimensional Codes* R. L. Graham and N. J. A. Sloane
• Interleaver Design for Turbo Codes H. R. Sadjadpour, N. J. A. Sloane, M. Salehi, and G. Nebe
• A New Approach to the Construction of Optimal Designs* R. H. Hardin
• The Binary SelfDual Codes of Length Up To 32: A Revised Enumeration* J. H. Conway
• Figure 1: Putatively optimal clusters of N spheres, for N = 4 \Gamma 10 (figures (a)--(g)) and 13--20 (figures (h)--(o)). For greater clarity the spheres have been reduced in size, contacts
• [15] R. H. Hardin and N. J. A. Sloane, ``A new approach to the construction of optimal designs,'' preprint, 1992.
• The Primary Pretenders John H. Conway, Richard K. Guy, W. A. Schneeberger & N. J. A. Sloane
• Correction to: "The Ternary Golay Code, the Integers Mod 9 and the Coxeter-Todd Lattice," [IEEE Trans. Inform. Theory,
A New Kind of Science: The NKS Forum - Space roar and four cosmological principles
David Brown
Registered: May 2009
Posts: 173
Space roar and four cosmological principles
*** UPDATE ADDED 31 Dec. 2013 ***
"Everything happens as if MOND were the effective force law." — Stacy McGaugh
http://www.astro.umd.edu/~ssm/mond/burn1.html "The MOND pages, Why Consider Mond?"
http://www.astro.umd.edu/~ssm/mond/moti_bullet.html Milgrom's perspective on the Bullet Cluster
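McGaugh's slogan refers to Milgrom's relation mu(g/a0) * g = g_N between the actual and Newtonian accelerations. As a hedged numerical illustration (the "standard" interpolating function mu(x) = x/sqrt(1+x^2) and the value a0 = 1.2e-10 m/s^2 are conventional choices of mine, not taken from the linked pages), the relation can be solved in closed form:

```python
import math

def mond_accel(g_newton, a0=1.2e-10):
    """Solve mu(g/a0) * g = g_N for g, with Milgrom's 'standard'
    interpolating function mu(x) = x / sqrt(1 + x**2). Squaring the
    relation gives g**4 = g_N**2 * (a0**2 + g**2), a quadratic in g**2."""
    gn = g_newton
    g2 = 0.5 * gn * gn + 0.5 * gn * math.sqrt(gn * gn + 4.0 * a0 * a0)
    return math.sqrt(g2)

# Newtonian regime (g_N >> a0): MOND reduces to Newton.
g_strong = mond_accel(1.0)
# Deep-MOND regime (g_N << a0): g -> sqrt(g_N * a0), the flat-rotation-curve limit.
g_weak = mond_accel(1e-13)
```

For strong fields the formula returns ordinary Newtonian gravity; for weak fields it approaches sqrt(g_N * a0), which is the behavior that produces flat galactic rotation curves.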
"I was quite happy with the CCM, as everyone else ..." — Pavel Kroupa http://www.astro.uni-bonn.de/~pavel..._cosmology.html Pavel Kroupa: Dark Matter, Cosmology and Progress
Does M-theory provide a way of explaining the flyby anomaly?
http://vixra.org/abs/1203.0036 "Does the Rañada-Milgrom Effect Explain the Flyby Anomaly?"
If X is to string theory as Kepler’s laws are to Newtonian mechanics, then what is X?
http://vixra.org/abs/1312.0193 “Is the space roar an empirical proof that the inflaton field exists?”
http://quantumfrontiers.com/2013/11...hwarz/#comments (refs. 5, 6, 7)
On 12/20/13 5:17 AM, David Brown wrote:
Prof. Witten: Do you have an opinion concerning the comments posted for?
— D. Brown
On Fri, Dec 20, 2013 at 3:54 AM, Edward Witten wrote:
I am generally sympathetic with these observations
Edward Witten
*** END OF UPDATE ***
If we are looking for a fundamental theory without general covariance, it is likely that this theory should not have an underlying spacetime. This point is further motivated by the fact that General
Relativity has no local observables and perhaps no local gauge invariant degrees of freedom. Therefore, there is really no need for an underlying spacetime. Spacetime and general covariance should
appear as approximate concepts which are only valid macroscopically. — Nathan Seiberg, “Emergent Spacetime”, p. 10
Contemporary developments in theoretical physics suggest that another revolution may be in progress, through which a new source of “fuzziness” may enter physics, and spacetime itself may be
reinterpreted as an approximate, derived concept. — Edward Witten, “Reflections on the Fate of Spacetime”, p. 24
Apart from the general predictions that I have stressed, string theory also leads in a simple way to elegant and qualitatively correct models that combine quantum gravity and the other known forces
in nature, recovering the main features of the standard model. To improve these constructions further, the most vital need is probably to understand the vanishing (or extreme smallness) of the
cosmological constant (the energy density of the vacuum) after supersymmetry breaking. That remains out of reach. — Edward Witten, “Magic, Mystery, and Matrix”, p. 1128
What is the problem with explaining the nonzero cosmological constant (also known as “dark energy”)? Dark energy has 3 possible methods of explanation:
(1) generalized Chapline-Laughlin approach using dark energy scalar fields associated with modified black holes;
(2) generalized Fredkin-Wolfram approach using weird forces from alternate universes;
(3) generalized Witten-Seiberg approach using dark energy scalar fields associated with particles.
The empirical fact of the space roar shoots down (1) and (3) and endorses (2).
Why is NKS Chapter 9 correct? Why is Einstein’s dream of a deterministic trans-quantum theory correct? If there is a Wolframian updating parameter that runs the multiverse, then where is the
physical evidence?
Claim 1: Space roar proves, given plausible physical hypotheses, that Wolfram’s automaton underlies quantum field theory and also that modified M-theory with the Nambu transfer machine is
empirically valid.
Claim 2: Recall 't Hooft’s conjecture: Four dimensional SU(N) quantum gauge theory, i.e. QCD, is equivalent to a string theory. The only valid way to prove 't Hooft’s conjecture is to use Nambu
digital data and N as the number of alternate universes within Wolfram’s automaton for the multiverse.
What is the evidence for Claims 1 and 2? Consider 4 cosmological principles that might, or might not, be true:
Witten’s cosmological principle: The existence of gravitons and plausible physical hypotheses imply that M-theory in some form is the only valid way to unify gravitation and quantum field theory by
means of a mathematical theory that predicts gravitons.
Wolfram’s cosmological principle: The maximum physical wavelength is the Planck length times the Fredkin-Wolfram constant.
Fredkin’s cosmological principle of alternate universes: Einstein’s equivalence principle is valid for virtual mass-energy if and only if there do not exist weird forces from alternate universes.
Einstein’s cosmological principle of determinism: There exists some deterministic trans-quantum theory underlying quantum theory.
Physicists who doubt the value of M-theory need read no further. If Witten’s cosmological principle is true, then there are two basic ways to explain dark energy: either a new, esoteric, physical
principle that only M-theorists would be able to understand or a new physical principle that all physicists would immediately understand. If the second alternative holds true, then Wolfram’s
cosmological principle is an obvious candidate and both Fredkin’s cosmological principle and Einstein’s cosmological principle are likely to be true. However, the point is moot, because the
existence of space roar makes Wolfram’s automaton the winner in all cosmological arguments.
If you believe the preceding analysis, then you should believe that space roar proves the empirical validity of M-theory. Without M-theory, there would be no possibility of defining the Nambu
transfer machine.
MAIN CONJECTURE: M-theory as originally formulated is the limit, as the Fredkin-Wolfram constant approaches infinity, of modified M-theory with the Nambu transfer machine. Here, the Nambu transfer
machine requires for its definition detailed empirical data on paradigm-breaking photons that explain the GZK paradox.
Why should paradigm-breaking photons exist? If the black hole model as found in the original formulation of M-theory is replaced by a finite, digitized black hole model, then there should be dramatic
physical evidence for such ultra-weirdness. A study of unexplained physical phenomena points to the GZK paradox as the only likely candidate.
Why should the -1/2 in Einstein’s field equations need to be replaced by -1/2 plus an extremely small positive constant between 1 part in 100 million and 1 part in 10 thousand? Einstein’s field
equations are based upon the equivalence principle, and dark energy suggests that Fredkin’s cosmological principle is the only remotely plausible way that the equivalence principle might fail.
Therefore, dark energy explains the nonzero cosmological constant according to Wolfram’s automaton, and dark matter demands that -1/2 in the field equations be replaced by -1/2 plus a very small
positive constant.
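Spelled out, the proposed replacement reads as follows (a notational sketch only: epsilon and its claimed range come from this post, not from standard general relativity):

```latex
R_{\mu\nu} - \left(\tfrac{1}{2} - \epsilon\right) g_{\mu\nu} R
  = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\qquad 10^{-8} \le \epsilon \le 10^{-4},
```

so that epsilon -> 0 recovers the ordinary Einstein field equations.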
Is Einstein’s cosmological principle of determinism the way of simplicity? Is the negation of Einstein’s cosmological principle of determinism the way of an absurd explosion of mathematical complexity?
Consider three hypotheses:
First Cosmological Foundational Hypothesis: Assuming the cosmological principles of Witten and Einstein, there are plausible physical hypotheses that imply the cosmological principles of Wolfram and Fredkin.
Second Cosmological Foundational Hypothesis: Assuming the cosmological principles of Witten, Wolfram, and Fredkin, there are plausible physical hypotheses that imply the cosmological principle of Einstein.
Third Cosmological Foundational Hypothesis: The experimental details of the explanation of the GZK paradox lead to a unique model that justifies Einstein’s cosmological principle of determinism.
Why should physicists believe Einstein’s cosmological principle of determinism? In quantum theory, if there are Kolmogorov’s axioms for probability distributions, then where do the axioms come
from? The answer would seem to be mysticism or hidden determinism. Bell’s theorem says that local hidden variables are out, so if mysticism fails, then non-local hidden variables should be the
answer. Wolfram’s updating parameter is the champion of the non-local hidden variables approach. Furthermore, space roar confirms that Wolfram’s updating parameter actually has a physical
manifestation. Therefore, if the -1/2 in Einstein’s field equations is replaced by -1/2 plus an extremely small positive constant, then the empirical evidence will vote for or against the
particular Fredkin-Wolfram theory that I advocate. If my theory is confirmed, then Einstein’s cosmological principle of determinism is confirmed. If my theory is disconfirmed, determinism still has
philosophy on its side. Hidden variables are better than mysticism.
Last edited by David Brown on 12-31-2013 at 11:22 AM
Articles
• Vulnerable Derivatives and Good Deal Bounds
• Constant Proportion Debt Obligations (CPDOs)
• The Aggregate Demand for Treasury Debt
• Corporate bond liquidity before and after the onset of the subprime crisis
• Vulnerable Derivatives and Good Deal Bounds. Agatha Murgoci. http://hdl.handle.net/10398/8899
We price vulnerable derivatives, i.e. derivatives where the counterparty may default. These are basically the derivatives traded on the OTC markets. Default is modeled in a structural framework. The technique employed for pricing is Good Deal Bounds. The method imposes a new restriction in the arbitrage-free model by setting upper bounds on the Sharpe ratios of the assets. The potential prices which are eliminated represent unreasonably good deals. The constraint on the Sharpe ratio translates into a constraint on the stochastic discount factor. Thus, tight pricing bounds can be obtained. We provide a link between the objective probability measure and the range of potential risk-neutral measures which has an intuitive economic meaning. We also provide tight pricing bounds for European calls and show how to extend the call formula to pricing other financial products in a consistent way. Finally, we numerically analyze the behavior of the good deal pricing bounds.

• Constant Proportion Debt Obligations (CPDOs). Rama Cont and Cathrine Jessen. http://hdl.handle.net/10398/8890
Constant Proportion Debt Obligations (CPDOs) are structured credit derivatives which generate high coupon payments by dynamically leveraging a position in an underlying portfolio of investment grade index default swaps. CPDO coupons and principal notes received high initial credit ratings from the major rating agencies, based on complex models for the joint transition of ratings and spreads for all names in the underlying portfolio. We propose a parsimonious model for analyzing the performance of CPDO strategies using a top-down approach which captures the essential risk factors of the CPDO. Our approach allows to compute default probabilities, loss distributions and other tail risk measures for the CPDO strategy and analyze the dependence of these risk measures on various parameters describing the risk factors. We find that the probability of the CPDO defaulting on its coupon payments is found to be small (and thus the credit rating arbitrarily high) by increasing leverage, but the ratings obtained strongly depend on assumptions on the credit environment (high spread or low spread). More importantly, CPDO loss distributions are found to be bimodal with a wide range of tail risk measures inside a given rating category, suggesting that credit ratings are insufficient performance indicators for such complex leveraged strategies. A worst-case scenario analysis indicates that CPDO strategies have a high exposure to persistent spread-widening scenarios; CPDO ratings are shown to be quite unstable during the lifetime of the strategy.

• The Aggregate Demand for Treasury Debt. Arvind Krishnamurthy and Annette Vissing-Jørgensen. http://hdl.handle.net/10398/8882
Investors value the liquidity and safety of US Treasuries. We document this by showing that changes in Treasury supply have large effects on a variety of yield spreads. As a result, Treasury yields are reduced by 73 basis points, on average, from 1926 to 2008. Both the liquidity and safety attributes of Treasuries are driving this phenomenon. We document this by analyzing the spread between assets with different liquidity (but similar safety) and those with different safety (but similar liquidity). The low yield on Treasuries due to their extreme safety and liquidity suggests that Treasuries in important respects are similar to money.

• Corporate bond liquidity before and after the onset of the subprime crisis. Jens Dick-Nielsen, Peter Feldhütter, and David Lando. http://hdl.handle.net/10398/8864
We analyze liquidity components of corporate bond spreads during 2005-2009 using a new robust illiquidity measure. The spread contribution from illiquidity increases dramatically with the onset of the subprime crisis. The increase is slow and persistent for investment grade bonds while the effect is stronger but more short-lived for speculative grade bonds. Bonds become less liquid when financial distress hits a lead underwriter and the liquidity of bonds issued by financial firms dries up under crises. During the subprime crisis, flight-to-quality is confined to AAA-rated bonds.
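The Sharpe-ratio restriction in the Good Deal Bounds paper is the reverse side of the Hansen-Jagannathan bound, which ties an asset's Sharpe ratio to the volatility of any stochastic discount factor that prices it. A minimal sketch with illustrative numbers of my own, not taken from the paper:

```python
def sdf_vol_bound(mean_excess_return, return_vol, risk_free_rate):
    """Hansen-Jagannathan bound: any stochastic discount factor m pricing
    the asset satisfies sigma(m)/E[m] >= |Sharpe ratio|. With E[m] =
    1/(1 + rf), this returns the implied lower bound on sigma(m).
    A good-deal bound runs the same logic in reverse, capping
    sigma(m)/E[m] from above to rule out 'too good' prices."""
    sharpe = mean_excess_return / return_vol
    return abs(sharpe) / (1.0 + risk_free_rate)

# e.g. 6% mean excess return, 16% volatility, 2% risk-free rate
bound = sdf_vol_bound(0.06, 0.16, 0.02)
```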
Siddhartha Chib and Edward Greenberg (1995), "Understanding the Metropolis-Hastings Algorithm
Results 1 - 10 of 318
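For reference, the algorithm in the Chib and Greenberg title reduces, in its simplest random-walk form, to a few lines. This is a generic sketch, not code from any of the listed papers:

```python
import math, random

def metropolis_hastings(log_target, step, x0, n_draws, seed=0):
    """Random-walk Metropolis sampler: the special case of
    Metropolis-Hastings with a symmetric Gaussian proposal, so the
    proposal density cancels from the acceptance ratio."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    draws = []
    for _ in range(n_draws):
        y = x + rng.gauss(0.0, step)
        lp_y = log_target(y)
        # accept y with probability min(1, target(y)/target(x))
        if math.log(rng.random()) < lp_y - lp:
            x, lp = y, lp_y
        draws.append(x)
    return draws

# Target: standard normal, specified only up to a normalizing constant.
draws = metropolis_hastings(lambda x: -0.5 * x * x, 1.0, 0.0, 50_000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

The chain's sample mean and variance should approach 0 and 1, the moments of the target.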
- Journal of Financial Economics, 1999
Cited by 257 (16 self)
When a rate of return is regressed on a lagged stochastic regressor, such as a dividend yield, the regression disturbance is correlated with the regressor's innovation. The OLS estimator's finite-sample properties, derived here, can depart substantially from the standard regression setting. Bayesian posterior distributions for the regression parameters are obtained under specifications that differ with respect to (i) prior beliefs about the autocorrelation of the regressor and (ii) whether the initial observation of the regressor is specified as fixed or stochastic. The posteriors differ across such specifications, and asset allocations in the presence of estimation risk exhibit sensitivity to those
- Econometric Review, 1999
Cited by 199 (15 self)
This paper surveys the fundamental principles of subjective Bayesian inference in econometrics and the implementation of those principles using posterior simulation methods. The emphasis is on the
combination of models and the development of predictive distributions. Moving beyond conditioning on a fixed number of completely specified models, the paper introduces subjective Bayesian tools for
formal comparison of these models with as yet incompletely specified models. The paper then shows how posterior simulators can facilitate communication between investigators (for example,
econometricians) on the one hand and remote clients (for example, decision makers) on the other, enabling clients to vary the prior distributions and functions of interest employed by investigators.
A theme of the paper is the practicality of subjective Bayesian methods. To this end, the paper describes publicly available software for Bayesian inference, model development, and communication and
provides illustrations using two simple econometric models. *This paper was originally prepared for the Australasian meetings of the Econometric Society in Melbourne, Australia,
- Econometrica, 1998
Cited by 155 (18 self)
This paper is concerned with the Bayesian estimation of non-linear stochastic differential equations when only discrete observations are available. The estimation is carried out using a tuned MCMC
method, in particular a blocked Metropolis-Hastings algorithm, by introducing auxiliary points and using the Euler-Maruyama discretisation scheme. Techniques for computing the likelihood function,
the marginal likelihood and diagnostic measures (all based on the MCMC output) are presented. Examples using simulated and real data are presented and discussed in detail.
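The Euler-Maruyama scheme that this paper builds on discretises dX = a(X) dt + b(X) dW into one Gaussian increment per time step. A minimal sketch; the Ornstein-Uhlenbeck test case is my own, not the paper's:

```python
import math, random

def euler_maruyama(drift, diffusion, x0, dt, n_steps, seed=0):
    """Simulate one path of dX = drift(X) dt + diffusion(X) dW:
    each step adds the drift times dt plus the diffusion times a
    N(0, dt) Gaussian increment."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        x = x + drift(x) * dt + diffusion(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Ornstein-Uhlenbeck test case: dX = -theta*X dt + sigma dW,
# whose stationary variance is sigma**2 / (2*theta).
theta, sigma = 1.0, 0.5
path = euler_maruyama(lambda x: -theta * x, lambda x: sigma, 0.0, 0.01, 200_000)
```

After an initial transient, the path's sample variance should settle near sigma**2 / (2*theta) = 0.125, up to discretisation bias of order dt.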
- Statistica Sinica, 1997
Cited by 124 (5 self)
Abstract: This paper describes and compares various hierarchical mixture prior formulations of variable selection uncertainty in normal linear regression models. These include the nonconjugate SSVS
formulation of George and McCulloch (1993), as well as conjugate formulations which allow for analytical simplification. Hyperparameter settings which base selection on practical significance, and
the implications of using mixtures with point priors are discussed. Computational methods for posterior evaluation and exploration are considered. Rapid updating methods are seen to provide feasible
methods for exhaustive evaluation using Gray Code sequencing in moderately sized problems, and fast Markov Chain Monte Carlo exploration in large problems. Estimation of normalization constants is
seen to provide improved posterior estimates of individual model probabilities and the total visited probability. Various procedures are illustrated on simulated sample problems and on a real problem
concerning the construction of financial index tracking portfolios.
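The "Gray Code sequencing" used for exhaustive evaluation visits all 2^p regressor subsets so that consecutive models differ in exactly one variable, which is what makes the rapid one-variable updates possible. A standalone sketch of the ordering; nothing here is specific to the paper:

```python
def gray_models(p):
    """All 2**p variable-inclusion vectors in binary-reflected Gray-code
    order (i XOR i>>1), so consecutive models differ in exactly one
    variable."""
    return [tuple(((i ^ (i >> 1)) >> j) & 1 for j in range(p))
            for i in range(2 ** p)]

models = gray_models(3)  # the 8 submodels of a 3-regressor linear model
```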
- Bioinformatics, 2003
Cited by 109 (3 self)
Motivation: Bayesian networks have been applied to infer genetic regulatory interactions from microarray gene expression data. This inference problem is particularly hard in that interactions between
hundreds of genes have to be learned from very small data sets, typically containing only a few dozen time points during a cell cycle. Most previous studies have assessed the inference results on
real gene expression data by comparing predicted genetic regulatory interactions with those known from the biological literature. This approach is controversial due to the absence of known gold
standards, which renders the estimation of the sensitivity and specificity, that is, the true and (complementary) false detection rate, unreliable and difficult. The objective of the present study is
to test the viability of the Bayesian network paradigm in a realistic simulation study. First, gene expression data are simulated from a realistic biological network involving DNAs, mRNAs, inactive
protein monomers and active protein dimers. Then, interaction networks are inferred from these data in a reverse engineering approach, using Bayesian networks and Bayesian learning with Markov chain
Monte Carlo. Results: The simulation results are presented as receiver operator characteristics curves. This allows estimating the proportion of spurious gene interactions incurred for a specified
target proportion of recovered true interactions. The findings demonstrate how the network inference performance varies with the training set size, the degree of inadequacy of prior assumptions, the
experimental sampling strategy and the inclusion of further, sequence-based information.
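The receiver operator characteristic curves used to report the results are just (false-positive rate, true-positive rate) pairs swept over the decision threshold. A generic sketch, assuming no tied scores and at least one example of each class:

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) points obtained by lowering the decision
    threshold through the predicted scores, highest first.
    labels are 1 for a true interaction, 0 otherwise."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# A classifier that ranks both true interactions above both spurious ones
curve = roc_points([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

A perfect ranking passes through the top-left corner (0, 1); a random one hugs the diagonal.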
- Biometrika, 1998
Cited by 100 (6 self)
This paper provides a practical simulation-based Bayesian and non-Bayesian analysis of correlated binary data using the multivariate probit model. The posterior distribution is simulated by Markov
chain Monte Carlo methods and maximum likelihood estimates are obtained by a Monte Carlo version of the EM algorithm. A practical approach for the computation of Bayes factors from the simulation
output is also developed. The methods are applied to a dataset with a bivariate binary response, to a four-year longitudinal dataset from the Six Cities study of the health effects of air pollution
and to a sevenvariate binary response dataset on the labour supply of married women from the Panel Survey of Income Dynamics.
- Forthcoming in the Journal of Econometrics, 2001
Cited by 94 (5 self)
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the
prior. In particular, “diffuse” priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large)
number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an “automatic” or “benchmark” prior structure that can be used in such cases. We
focus on the Normal linear regression model with uncertainty in the choice of regressors. We propose a partly noninformative prior structure related to a Natural Conjugate g-prior specification,
where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We
investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample
implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (1995), combined with efficient coding in Fortran, makes it feasible to conduct large
simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and
contrasted with results in the literature. The main findings of the paper will lead us to propose a “benchmark” prior specification in a linear regression context with model uncertainty.
- Institute of Mathematical Statistics, 2001
Cited by 85 (3 self)
In principle, the Bayesian approach to model selection is straightforward. Prior probability distributions are used to describe the uncertainty surrounding all unknowns. After observing the data, the
posterior distribution provides a coherent post data summary of the remaining uncertainty which is relevant for model selection. However, the practical implementation of this approach often requires
carefully tailored priors and novel posterior calculation methods. In this article, we illustrate some of the fundamental practical issues that arise for two different model selection problems: the
variable selection problem for the linear model and the CART model selection problem.
- Statistical Science, 2001
Cited by 74 (19 self)
Two important questions that must be answered whenever a Markov chain Monte Carlo (MCMC) algorithm is used are (Q1) What is an appropriate burn-in? and (Q2) How long should the sampling continue
after burn-in? Developing rigorous answers to these questions presently requires a detailed study of the convergence properties of the underlying Markov chain. Consequently, in most practical
applications of MCMC, exact answers to (Q1) and (Q2) are not sought. The goal of this paper is to demystify the analysis that leads to honest answers to (Q1) and (Q2). The authors hope that this
article will serve as a bridge between those developing Markov chain theory and practitioners using MCMC to solve practical problems. The ability to formally address (Q1) and (Q2) comes from
establishing a drift condition and an associated minorization condition, which together imply that the underlying Markov chain is geometrically ergodic. In this paper, we explain exactly what drift
and minorization are as well as how and why these conditions can be used to form rigorous answers to (Q1) and (Q2). The basic ideas are as follows. The results of Rosenthal (1995) and Roberts and
Tweedie (1999) allow one to use drift and minorization conditions to construct a formula giving an analytic upper bound on the distance to stationarity. A rigorous answer to (Q1) can be calculated
using this formula. The desired characteristics of the target distribution are typically estimated using ergodic averages. Geometric ergodicity of the underlying Markov chain implies that there are
central limit theorems available for ergodic averages (Chan and Geyer 1994). The regenerative simulation technique (Mykland, Tierney and Yu 1995, Robert 1995) can be used to get a consistent estimate
of the variance of the asymptotic nor...
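Once geometric ergodicity gives a CLT for ergodic averages, the Monte Carlo standard error can be estimated from the chain itself, for instance by batch means; (Q1) then becomes "discard the burn-in" and (Q2) "run until the standard error is small enough". A sketch on an artificial i.i.d. chain (real MCMC output would be autocorrelated, which batch means is designed to absorb):

```python
import math, random

def batch_means_se(draws, n_batches=30):
    """Batch-means estimate of the Monte Carlo standard error of the
    ergodic average, valid when the chain satisfies a CLT (e.g. under
    geometric ergodicity). Draws beyond an even split into batches
    are ignored."""
    b = len(draws) // n_batches
    means = [sum(draws[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    grand = sum(means) / n_batches
    s2 = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    return math.sqrt(s2 / n_batches)

# Artificial chain: i.i.d. N(0,1) "draws" stand in for MCMC output.
rng = random.Random(1)
chain = [rng.gauss(0.0, 1.0) for _ in range(30_000)]
burned = chain[5_000:]          # (Q1): discard a burn-in
se = batch_means_se(burned)     # (Q2): stop when this is small enough
```

For 25,000 i.i.d. standard-normal draws the estimate should sit near 1/sqrt(25000), roughly 0.006.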
, 2000
Cited by 72 (1 self)
This paper proposes and estimates a more general parametric stochastic variance model of equity index returns than has been previously considered using data from both underlying and options markets.
The parameters of the model under both the objective and riskneutral measures are estimated simultaneously. I conclude that the square root stochastic variance model of Heston (1993) and others is
incapable of generating realistic returns behavior and find that the data are more accurately represented by a stochastic variance model in the CEV class or a model that allows the price and variance
processes to have a time-varying correlation. Specifically, I find that as the level of market variance increases, the volatility of market variance increases rapidly and the correlation between the
price and variance processes becomes substantially more negative. The heightened heteroskedasticity in market variance that results generates realistic crash probabilities and dynamics and causes
returns to display values of skewness and kurtosis much more consistent with their sample values. While the model dramatically improves the fit of options prices relative to the square root process,
it falls short of explaining the implied volatility smile for short-dated options.
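The square-root ("Heston") variance process that the paper tests can be simulated with a full-truncation Euler scheme, a standard fix for the discretisation occasionally driving the variance below zero. A generic sketch with made-up parameters, not the author's code:

```python
import math, random

def sq_root_variance_path(kappa, theta, sigma, v0, dt, n_steps, seed=0):
    """Full-truncation Euler scheme for dv = kappa*(theta - v) dt
    + sigma*sqrt(v) dW: negative excursions are floored at zero inside
    both the drift and the diffusion terms."""
    rng = random.Random(seed)
    v, path = v0, [v0]
    for _ in range(n_steps):
        vp = max(v, 0.0)
        v = v + kappa * (theta - vp) * dt \
              + sigma * math.sqrt(vp * dt) * rng.gauss(0.0, 1.0)
        path.append(v)
    return path

# Mean-reversion level theta = 0.04, i.e. about 20% volatility.
path = sq_root_variance_path(kappa=2.0, theta=0.04, sigma=0.3,
                             v0=0.04, dt=0.01, n_steps=200_000)
```

The long-run average of the path should hover near theta; with these parameters the Feller condition 2*kappa*theta > sigma**2 holds, so the continuous-time process stays positive.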
Imaginary quadratic field contained in Hecke orbit field?
Let $\tau$ in the upper half plane lie in an imaginary quadratic field $K$.
Then is $K \subset \mathbb{Q}(\{j(g \tau) \ | \ g \in GL_2^+(\mathbb{Q}) \})$?
(here $j$ is the modular $j$-function, and $GL_2^+(\mathbb{Q})$ means positive determinant - i.e. we adjoined all $j$ values of elliptic curves with CM by an order in $K$).
If the above isn't true, then is $K$ contained in $\mathbb{Q}(S)$ where $S$ is the set of $j$ values of all CM elliptic curves?
I'm also interested in whether the appropriate statement is true for a general Shimura variety.
modular-forms elliptic-curves shimura-varieties
1 Answer
"No" for both questions about CM elliptic curves and "I'm not even sure I know what the question would be" about general Shimura varieties.
Basic idea: If $j$ is the $j$-invariant of a CM elliptic curve, then there is some imaginary quadratic discriminant $\Delta \equiv 0$ or $1 \bmod 4$ such that $\mathbf{Q}(j) \cong \mathbf{Q}
[X]/H_\Delta(X)$ where $H_\Delta(X) \in \mathbf{Z}[X]$ is the Hilbert Class Polynomial of discriminant $\Delta$, whose roots are the $j$-invariants of elliptic curves over the complex numbers
(actually $\overline{\mathbf{Q}}$ is enough) with CM by $\mathbf{Z}\left[\dfrac{ \Delta + \sqrt \Delta}{2}\right]$.
The point is that now we can see that there is an embedding $\mathbf{Q}(j) \hookrightarrow \mathbf{R}$, for all possible $j$. Therefore there is an embedding $\mathbf{Q}(S) \hookrightarrow \mathbf{R}$. To see this, it's enough to note that for any two CM $j$-invariants $j_1$ and $j_2$ there exists an embedding $\mathbf{Q}(j_1,j_2)\hookrightarrow \mathbf{R}$. Let $J_1$ and $J_2$ be the canonical images of $j_1$ and $j_2$ in the real numbers. Then $\mathbf{Q}(j_1)$ embeds into the real numbers as $\mathbf{Q} + \mathbf{Q}J_1 + \dots + \mathbf{Q}J_1^{h_1 -1}$ and $\mathbf{Q}(j_2)$ embeds into the real numbers as $\mathbf{Q} + \mathbf{Q}J_2 + \dots + \mathbf{Q}J_2^{h_2 -1}$. Therefore $\mathbf{Q} + \mathbf{Q}J_1 + \mathbf{Q}J_2 + \dots + \mathbf{Q}J_1^{h_1 -1}J_2^{h_2 -1}$ is a copy of $\mathbf{Q}(j_1,j_2)$ inside of $\mathbf{R}$. Notice that I didn't use direct sums because $\mathbf{Q}(j_1)$ and $\mathbf{Q}(j_2)$ might not be linearly disjoint over $\mathbf{Q}$! This is also the reason I didn't use a tensor product argument. In any case, this inductive step allows us to work with direct limits and embed $\mathbf{Q}(S)$ into $\mathbf{R}$.
Therefore, if there were an embedding $K\hookrightarrow \mathbf{Q}(S)$, there would be an embedding $K\hookrightarrow\mathbf{Q}(S) \hookrightarrow \mathbf{R}$, which is absurd. Hence there is no embedding $K\hookrightarrow \mathbf{Q}(S)$.
To show that we have an embedding $\mathbf{Q}(j)\hookrightarrow\mathbf{R}$, consider that there is some $\tau \in \mathcal{H}$ of the form $\dfrac{1}{2}\sqrt\Delta$ or $\dfrac{1 + \sqrt\Delta}{2}$ such that $j(\tau)$ is a root of $H_\Delta(X)$. But then $j(\tau)$ is real, because the inverse image of the reals under the $j$-function contains the lines $\lbrace iy : y \ge 1\rbrace$ and $\lbrace 1/2 + iy : y \ge (1/2)\sqrt 3\rbrace$. Therefore we have our embedding $\mathbf{Q}(j) \hookrightarrow \mathbf{R}$.
@stankewicz: Fair enough, you have an embedding $\mathbb{Q}(j)$ into $\mathbb{R}$. But could you please give more details on how you embed $\mathbb{Q}(S)$ into $\mathbb{R}$ (surely $H_{\Delta}(X)$ could have some complex roots)? – Adam Harris Oct 8 '12 at 13:25
@stankewicz: Isn't it true that if the class of $\mathfrak{a}$ in the ideal class group has order greater than 2 then $j(\mathfrak{a})$ isn't real? – Adam Harris Oct 8 '12 at 13:33
I edited the answer to be more explicit. It is very possible for Hilbert Class Polynomials to have complex roots and it's easy to find examples by quickly playing around with MAGMA or sage.
It just doesn't matter, because as soon as there is ONE real embedding, any subfield has to also have a real embedding. – stankewicz Oct 8 '12 at 15:13
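The "complex roots don't matter" point is easy to check concretely without MAGMA or Sage. Treat the coefficients below as an assumption of this sketch: they are the Hilbert class polynomial $H_{-23}(X)$ of discriminant $-23$ (class number 3) as quoted in standard tables. A cubic has exactly one real root precisely when its two critical values share a sign, so no root-finder is needed:

```python
# H_{-23}(X) = X^3 + 3491750 X^2 - 5151296875 X + 12771880859375.
# Coefficients quoted from standard tables -- an assumption of this sketch.
import math

def H(x):
    # Horner evaluation of the cubic.
    return ((x + 3491750) * x - 5151296875) * x + 12771880859375

# Critical points, from H'(x) = 3x^2 + 6983500x - 5151296875 = 0.
disc = 6983500 ** 2 + 12 * 5151296875
x_lo = (-6983500 - math.sqrt(disc)) / 6
x_hi = (-6983500 + math.sqrt(disc)) / 6

# Both critical values positive => exactly one real root (the other two
# roots form a complex conjugate pair), and a sign change far to the left
# shows that the single real root is a large negative j-value.
print(H(x_lo) > 0 and H(x_hi) > 0, H(-4 * 10**6) < 0)
```

So $\mathbf{Q}(j)$ for this $j$ has one real embedding and two complex ones, which is all the argument above needs.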
@stankewicz: Thanks, but I'm still confused: what is this 'canonical image' of $j_i$ in the reals? If it is $j(\mathcal{O})$ where $\mathcal{O}$ is your order, then what if the canonical images are equal, i.e. $J_1=J_2$? I.e., how do you embed the splitting field of $H_D(X)$ in the reals if it has a complex root? – Adam Harris Oct 8 '12 at 20:08
I don't embed the splitting field of $H_D(X)$ into the reals because you can't. Is your Hecke orbit field the union of $\mathbf{Q}(j(E))$ over the set of elliptic curves with CM by an order
inside $K$, or is it the union of the Galois closures? If you mean Galois closures, the answer becomes yes and in fact the Galois closure for $E$ with CM by ANY order in $K$ will work. –
stankewicz Oct 8 '12 at 21:25
Total Fixed and Variable Cost Curves
This diagram shows the total fixed, total variable and total cost curves where there are constant returns to the variable factor.
Fixed costs are fixed (!) and so the TFC curve will be constant at all levels of output. The TVC curve in this diagram increases at a constant rate showing constant returns to the variable factor.
This will mean that the marginal cost is also constant. The TC curve is simply the fixed and variable costs added together and so runs parallel to the TVC curve, the gap between the two being the
level of fixed costs.
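The relationships described above (a constant TFC, a TVC rising at a constant rate, and a TC curve parallel to TVC at a vertical distance equal to fixed costs) can be sketched numerically; the cost figures here are made up purely for illustration:

```python
# Total cost curves with constant returns to the variable factor.
# The fixed cost and per-unit variable cost are illustrative numbers.
FIXED_COST = 100.0        # TFC: the same at every level of output
UNIT_VARIABLE_COST = 5.0  # constant returns => TVC rises at a constant rate

def tfc(q): return FIXED_COST
def tvc(q): return UNIT_VARIABLE_COST * q
def tc(q):  return tfc(q) + tvc(q)   # TC = TFC + TVC

outputs = range(0, 11)
# TC runs parallel to TVC: the gap is the level of fixed costs ...
gaps = [tc(q) - tvc(q) for q in outputs]
# ... and marginal cost (the rise in TC per extra unit) is constant.
mc = [tc(q + 1) - tc(q) for q in outputs]
print(set(gaps), set(mc))  # → {100.0} {5.0}
```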
After 8 years, I have forgotten a thing or two
August 31st 2008, 04:56 PM #1
Aug 2008
North Texas
Hey everyone, first post. I found your website when I was looking for a forum regarding mathematics help. I am getting close to graduating with a non-science major, but I have to complete the lab hours that I started 8 years ago.
My background is Physics I in 1998 (B), and Calc I and Calc II in 1998 and 1999... a B and a C, I think.
Anyway, I am slowly relearning everything, but I have come to a factoring issue that I need someone to step through so I can understand the right way.
Since it is early algebra, I think, I am not finding any examples in my physics or calculus texts.
I believe I have the correct beginning formulas, and the book has provided the answer; I am looking for the meat in between.
Any help would be much appreciated.
If you put a small (+) test charge, +q, at point P, it will be affected by both charges +Q and -Q; apply the principle of superposition.
charge +Q exerts a force ...
$F_1 = \frac{kQq}{(x+a)^2}$
charge -Q exerts a force in the opposite direction ...
$F_2 = -\frac{kQq}{(x-a)^2}$
the net force is the sum ...
$F_1 + F_2 = \frac{kQq}{(x+a)^2} - \frac{kQq}{(x-a)^2}$
$F_1 + F_2 = kQq \left(\frac{1}{(x+a)^2} - \frac{1}{(x-a)^2}\right)$
$F_1 + F_2 = kQq \left(\frac{(x-a)^2}{(x-a)^2(x+a)^2} - \frac{(x+a)^2}{(x-a)^2(x+a)^2}\right)$
$F_1 + F_2 = kQq \left(\frac{(x-a)^2 - (x+a)^2}{(x-a)^2(x+a)^2} \right)$
$F_1 + F_2 = kQq \left(\frac{(x^2 -2xa + a^2) - (x^2 + 2xa +a^2)}{(x-a)^2(x+a)^2} \right)$
$F_1 + F_2 = kQq \left(\frac{-4xa}{(x-a)^2(x+a)^2} \right)$
$F_1 + F_2 = -kQq \left(\frac{4xa}{(x-a)^2(x+a)^2} \right)$
the electric field at point P is $E_P = \dfrac{F_1 + F_2}{q}$ ...
$E_P = -kQ \left(\frac{4xa}{(x-a)^2(x+a)^2} \right) = \frac{-4kQxa}{(x^2 - a^2)^2}$
the (-) sign indicates the direction of the electric field (left) as you would expect since the charge -Q is nearer point P than the +Q charge.
hope you can follow the algebra ... I tried hard not to leave out any step.
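The algebra above can also be sanity-checked numerically: the raw superposition sum and the factored closed form should agree at any point with x > a. The values of k, Q, q, and a below are arbitrary test numbers, not from the original problem:

```python
# Check that the term-by-term superposition force equals the factored form
#   F1 + F2 = -kQq * 4xa / ((x - a)**2 * (x + a)**2)
# k, Q, q, a are arbitrary test values (SI-ish, chosen for illustration).
k, Q, q, a = 8.99e9, 2.0e-6, 1.0e-9, 0.05

for x in (0.2, 0.5, 1.3):           # sample points with x > a
    F1 = k * Q * q / (x + a) ** 2   # force from +Q
    F2 = -k * Q * q / (x - a) ** 2  # force from -Q, opposite direction
    closed_form = -k * Q * q * 4 * x * a / ((x - a) ** 2 * (x + a) ** 2)
    assert abs((F1 + F2) - closed_form) < 1e-9 * abs(closed_form)
print("superposition sum matches the closed form")
```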
That is exactly what I was looking for Skeeter... Thanks SO MUCH!
I worked through this step by step, and the only part that I am missing is how $(x-a)^2 (x+a)^2 = (x^2 - a^2)^2$.
I broke it down to $(x^2 - 2ax +a^2)(x^2 + 2ax +a^2)$ but I am unsure of the next step, or I could be completely wrong at that.
$(x-a)^2(x+a)^2 =$
$(x-a)(x-a)(x+a)(x+a) =$
$[(x-a)(x+a)][(x-a)(x+a)] =$
$(x^2-a^2)(x^2-a^2) = (x^2-a^2)^2$
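That difference-of-squares regrouping can be spot-checked exactly at integer points, where the arithmetic has no rounding:

```python
# Spot-check (x - a)^2 (x + a)^2 == (x^2 - a^2)^2 on a grid of integers.
for x in range(-5, 6):
    for a in range(-5, 6):
        lhs = (x - a) ** 2 * (x + a) ** 2
        rhs = (x ** 2 - a ** 2) ** 2
        assert lhs == rhs
print("identity holds at all sample points")
```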
Guess I owe you a beverage...
Thanks man.
Re-learning is fun!
SITE REFERENCES FOR STATISTICS AND ANALYSIS DATA Abraham, B., & Ledolter, J. (1983). Statistical methods for forecasting. New York: Wiley. Adorno, T. W., Frenkel-Brunswik, E., Levinson, D. J., &
Sanford, R. N. (1950). The authoritarian personality. New York: Harper. Agrawal, R., Imielinski, T., & Swami, A. (1993). Mining association rules between sets of items in large databases. Proceedings
of the 1993 ACM SIGMOD Conference, Washington, DC. Agrawal, R. & Srikant, R. (1994). Fast algorithms for mining association rules. Proceedings of the 20th VLDB Conference. Santiago, Chile. Agresti,
Alan (1996). An Introduction to Categorical Data Analysis. New York: Wiley. Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In B. N. Petrov and F. Csaki
(Eds.), Second International Symposium on Information Theory. Budapest: Akademiai Kiado. Akaike, H. (1983). Information measures and model selection. Bulletin of the International Statistical
Institute: Proceedings of the 44th Session, Volume 1. Pages 277-290. Aldrich, J. H., & Nelson, F. D. (1984). Linear probability, logit, and probit models. Beverly Hills, CA: Sage Publications. Almon,
S. (1965). The distributed lag between capital appropriations and expenditures. Econometrica, 33, 178-196. American Supplier Institute (1984-1988). Proceedings of Supplier Symposia on Taguchi
Methods. (April, 1984; November, 1984; October, 1985; October, 1986; October, 1987; October, 1988), Dearborn, MI: American Supplier Institute. Anderson, O. D. (1976). Time series analysis and
forecasting. London: Butterworths. Anderson, S. B., & Maier, M. H. (1963). 34,000 pupils and how they grew. Journal of Teacher Education, 14, 212-216. Anderson, T. W. (1958). An introduction to
multivariate statistical analysis. New York: Wiley. Anderson, T. W. (1984). An introduction to multivariate statistical analysis (2nd ed.). New York: Wiley. Anderson, T. W., & Rubin, H. (1956).
Statistical inference in factor analysis. Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability. Berkeley: The University of California Press. Andrews, D. F. (1972).
Plots of high-dimensional data. Biometrics, 28, 125-136. ASQC/AIAG (1990). Measurement systems analysis reference manual. Troy, MI: AIAG. ASQC/AIAG (1991). Fundamental statistical process control
reference manual. Troy, MI: AIAG. AT&T (1956). Statistical quality control handbook, Select code 700-444. Indianapolis, AT&T Technologies. Auble, D. (1953). Extended tables for the Mann-Whitney
statistic. Bulletin of the Institute of Educational Research, Indiana University, 1, No. 2. Bagozzi, R. P., & Yi, Y. (1989). On the use of structural equation models in experimental design. Journal
of Marketing Research, 26, 271-284. Bagozzi, R. P., Yi, Y., & Singh, S. (1991). On the use of structural equation models in experimental designs: Two extensions. International Journal of Research in
Marketing, 8, 125140. Bailey, A. L. (1931). The analysis of covariance. Journal of the American Statistical Association, 26, 424-435. Bails, D. G., & Peppers, L. C. (1982). Business fluctuations:
Forecasting techniques and applications. Englewood Cliffs, NJ: Prentice-Hall. Bain, L. J. (1978). Statistical analysis of reliability and life-testing models. New York: Decker. Bain, L. J. and
Engelhardt, M. (1989) Introduction to Probability and Mathematical Statistics. Kent, MA: PWS. Baird, J. C. (1970). Psychophysical analysis of visual space. New York: Pergamon Press. Baird, J. C., &
Noma, E. (1978). Fundamentals of scaling and psychophysics. New York: Wiley. Barcikowski, R., & Stevens, J. P. (1975). A Monte Carlo study of the stability of canonical correlations, canonical
weights, and canonical variate-variable correlations. Multivariate Behavioral Research, 10, 353-364. Barker, T. B. (1986). Quality engineering by design: Taguchi's philosophy. Quality Progress, 19,
32-42. Barlow, R. E., & Proschan, F. (1975). Statistical theory of reliability and life testing. New York: Holt, Rinehart, & Winston. Barlow, R. E., Marshall, A. W., & Proschan, F. (1963). Properties
of probability distributions with monotone hazard rate. Annals of Mathematical Statistics, 34, 375-389. Barnard, G. A. (1959). Control charts and stochastic processes. Journal of the Royal
Statistical Society, Ser. B, 21, 239. Bartholomew, D. J. (1984). The foundations of factor analysis. Biometrika, 71, 221-232. Bates, D. M., & Watts, D. G. (1988). Nonlinear regression analysis and
its applications. New York: Wiley. Bayne, C. K., & Rubin, I. B. (1986). Practical experimental designs and optimization methods for chemists. Deerfield Beach, FL: VCH Publishers. Becker, R. A.,
Denby, L., McGill, R., & Wilks, A. R. (1986). Datacryptanalysis: A case study. Proceedings of the Section on Statistical Graphics, American Statistical Association, 92-97. Bellman, R. (1961).
Adaptive Control Processes: A Guided Tour. Princeton University Press. Belsley, D. A., Kuh, E., and Welsch, R. E. (1980). Regression Diagnostics. New York: Wiley. Bendat, J. S. (1990). Nonlinear
system analysis and identification from random data. New York: Wiley. Bentler, P. M, & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures.
Psychological Bulletin, 88, 588-606. Bentler, P. M. (1986). Structural modeling and Psychometrika: A historical perspective on growth and achievements. Psychometrika, 51, 35-51. Bentler, P. M.
(1989). EQS Structural equations program manual. Los Angeles, CA: BMDP Statistical Software. Bentler, P. M., & Weeks, D. G. (1979). Interrelations among models for the analysis of moment structures.
Multivariate Behavioral Research, 14, 169-185. Bentler, P. M., & Weeks, D. G. (1980). Linear structural equations with latent variables. Psychometrika, 45, 289-308. Benzcri, J. P. (1973). L'Analyse
des Donnes: T. 2, I' Analyse des correspondances. Paris: Dunod. Bergeron, B. (2002). Essentials of CRM: A guide to customer relationship management. NY: Wiley. Berkson, J. (1944). Application of the
Logistic Function to Bio-Assay. Journal of the American Statistical Association, 39, 357-365. Berkson, J., & Gage, R. R. (1950). The calculation of survival rates for cancer. Proceedings of Staff
Meetings, Mayo Clinic, 25, 250. Berry, M., J., A., & Linoff, G., S., (2000). Mastering data mining. New York: Wiley. Bhote, K. R. (1988). World class quality. New York: AMA Membership Publications.
Binns, B., & Clark, N. (1986). The graphic designer's use of visual syntax. Proceedings of the Section on Statistical Graphics, American Statistical Association, 36-41. Birnbaum, Z. W. (1952).
Numerical tabulation of the distribution of Kolmogorov's statistic for finite sample values. Journal of the American Statistical Association, 47, 425-441. Birnbaum, Z. W. (1953). Distribution-free
tests of fit for continuous distribution functions. Annals of Mathematical Statistics, 24, 1-8. Bishop, C. (1995). Neural Networks for Pattern Recognition. Oxford: University Press. Bishop, Y. M. M.,
Fienberg, S. E., & Holland, P. W. (1975). Discrete multivariate analysis. Cambridge, MA: MIT Press. Bjorck, A. (1967). Solving linear least squares problems by Gram-Schmidt orthonormalization. Bit,
7, 1-21. Blackman, R. B., & Tukey, J. (1958). The measurement of power spectral from the point of view of communication engineering. New York: Dover. Blackwelder, R. A. (1966). Taxonomy: A text and
reference book. New York: Wiley. Blalock, H. M. (1972). Social statistics (2nd ed.). New York:McGraw-Hill Bliss, C. I. (1934). The method of probits. Science, 79, 38-39. Bloomfield, P. (1976).
Fourier analysis of time series: An introduction. New York: Wiley. Bock, R. D. (1963). Programming univariate and multivariate analysis of variance. Technometrics, 5, 95-117. Bock, R. D. (1975).
Multivariate statistical methods in behavioral research. New York: McGraw-Hill. Bolch, B.W., & Huang, C. J. (1974). Multivariate statistical methods for business and economics. Englewood Cliffs, NJ:
Prentice-Hall. Bollen, K. A. (1989). Structural equations with latent variables. New York: John Wiley & Sons. Borg, I., & Lingoes, J. (1987). Multidimensional similarity structure analysis. New York:
Springer. Borg, I., & Shye, S. (in press). Facet Theory. Newbury Park: Sage. Bouland, H. and Kamp, Y. (1988). Auto-association by multilayer perceptrons and singular value decomposition. Biological
Cybernetics 59, 291-294. Bowker, A. G. (1948). A test for symmetry in contingency tables. Journal of the American Statistical Association, 43, 572-574. Bowley, A. L. (1897). Relations between the
accuracy of an average and that of its constituent parts. Journal of the Royal Statistical Society, 60, 855-866. Bowley, A. L. (1907). Elements of Statistics. London: P. S. King and Son. Box, G. E.
P. (1953). Non-normality and tests on variances. Biometrika, 40, 318-335. Box, G. E. P. (1954a). Some theorems on quadratic forms applied in the study of analysis of variance problems: I. Effect of
inequality of variances in the one-way classification. Annals of Mathematical Statistics, 25, 290-302. Box, G. E. P. (1954b). Some theorems on quadratic forms applied in the study of analysis of
variance problems: II. Effect of inequality of variances and of correlation of errors in the twoway classification. Annals of Mathematical Statistics, 25, 484-498. Box, G. E. P., & Anderson, S. L.
(1955). Permutation theory in the derivation of robust criteria and the study of departures from assumptions. Journal of the Royal Statistical Society, 17, 1-34. Box, G. E. P., & Behnken, D. W.
(1960). Some new three level designs for the study of quantitative variables. Technometrics, 2, 455-475. Box, G. E. P., & Cox, D. R. (1964). An analysis of transformations. Journal of the Royal
Statistical Society, 26, 211-253. Box, G. E. P., & Cox, D. R. (1964). An analysis of transformations. Journal of the Royal Statistical Society, B26, 211-234. Box, G. E. P., & Draper, N. R. (1987).
Empirical model-building and response surfaces. New York: Wiley. Box, G. E. P., & Jenkins, G. M. (1970). Time series analysis. San Francisco: Holden Day. Box, G. E. P., & Jenkins, G. M. (1976). Time
series analysis: Forecasting and control. San Francisco: Holden-Day. Box, G. E. P., & Tidwell, P. W. (1962). Transformation of the independent variables. Technometrics, 4, 531-550. Box, G. E. P., &
Wilson, K. B. (1951). On the experimental attainment of optimum conditions. Journal of the Royal Statistical Society, Ser. B, 13, 1-45. Box, G. E. P., Hunter, W. G., & Hunter, S. J. (1978).
Statistics for experimenters: An introduction to design, data analysis, and model building. New York: Wiley. Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and
regression trees. Monterey, CA: Wadsworth & Brooks/Cole Advanced Books & Software. Brenner, J. L., et al. (1968). Difference equations in forecasting formulas. Management Science, 14, 141-159. Brent,
R. F. (1973). Algorithms for minimization without derivatives. Englewood Cliffs, NJ: Prentice-Hall. Breslow, N. E. (1970). A generalized Kruskal-Wallis test for comparing K samples subject to unequal
pattern of censorship. Biometrika, 57, 579-594. Breslow, N. E. (1974). Covariance analysis of censored survival data. Biometrics, 30, 89-99. Bridle, J.S. (1990). Probabilistic interpretation of
feedforward classification network outputs, with relationships to statistical pattern recognition. In F. Fogelman Soulie and J. Herault (Eds.), Neurocomputing: Algorithms, Architectures and
Applications, 227-236. New York: SpringerVerlag. Brigham, E. O. (1974). The fast Fourier transform. Englewood Cliffs, NJ: Prentice-Hall. Brillinger, D. R. (1975). Time series: Data analysis and
theory. New York: Holt, Rinehart. & Winston. Broomhead, D.S. and Lowe, D. (1988). Multivariable functional interpolation and adaptive networks. Complex Systems 2, 321-355. Brown, D. T. (1959). A note
on approximations to discrete probability distributions. Information and Control, 2, 386-392. Brown, M. B., & Forsythe, A. B. (1974). Robust tests for the equality of variances. Journal of the
American Statistical Association, 69, 264-267. Brown, R. G. (1959). Statistical forecasting for inventory control. New York: McGraw-Hill. Browne, M. W. (1968). A comparison of factor analytic
techniques. Psychometrika, 33, 267-334. Browne, M. W. (1974). Generalized least squares estimators in the analysis of covariance structures. South African Statistical Journal, 8, 1-24. Browne, M. W.
(1982). Covariance Structures. In D. M. Hawkins (Ed.) Topics in Applied Multivariate Analysis. Cambridge, MA: Cambridge University Press. Browne, M. W. (1984). Asymptotically distribution free
methods for the analysis of covariance structures. British Journal of Mathematical and Statistical Psychology, 37, 62-83. Browne, M. W., & Cudeck, R. (1990). Single sample cross-validation indices
for covariance structures. Multivariate Behavioral Research, 24, 445-455. Browne, M. W., & Cudeck, R. (1992). Alternative ways of assessing model fit. In K. A. Bollen and J. S. Long (Eds.), Testing
structural equation models. Beverly Hills, CA: Sage. Browne, M. W., & DuToit, S. H. C. (1982). AUFIT (Version 1). A computer programme for the automated fitting of nonstandard models for means and
covariances. Research Finding WS-27. Pretoria, South Africa: Human Sciences Research Council. Browne, M. W., & DuToit, S. H. C. (1987). Automated fitting of nonstandard models. Report WS-39.
Pretoria, South Africa: Human Sciences Research Council. Browne, M. W., & DuToit, S. H. C. (1992). Automated fitting of nonstandard models. Multivariate Behavioral Research, 27, 269-300. Browne, M.
W., & Mels, G. (1992). RAMONA User's Guide. The Ohio State University: Department of Psychology. Browne, M. W., & Shapiro, A. (1989). Invariance of covariance structures under groups of
transformations. Research Report 89/4. Pretoria, South Africa: University of South Africa Department of Statistics. Browne, M. W., & Shapiro, A. (1991). Invariance of covariance structures under
groups of transformations. Metrika, 38, 335-345. Browne, M.W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long, (Eds.), Testing structural equation models.
Beverly Hills, CA: Sage. Brownlee, K. A. (1960). Statistical Theory and Methodology in Science and Engineering. New York: John Wiley. Buffa, E. S. (1972). Operations management: Problems and models
(3rd. ed.). New York: Wiley. Buja, A., & Tukey, P. A. (Eds.) (1991). Computing and Graphics in Statistics. New York: Springer-Verlag. Buja, A., Fowlkes, E. B., Keramidas, E. M., Kettenring, J. R.,
Lee, J. C., Swayne, D. F., & Tukey, P. A. (1986). Discovering features of multivariate data through statistical graphics. Proceedings of the Section on Statistical Graphics, American Statistical
Association, 98-103. Burman, J. P. (1979). Seasonal adjustment - a survey. Forecasting, Studies in Management Science, 12, 45-57. Burns, L. S., & Harman, A. J. (1966). The complex metropolis, Part V
of profile of the Los Angeles metropolis: Its people and its homes. Los Angeles: University of Chicago Press. Burt, C. (1950). The factorial analysis of qualitative data. British Journal of
Psychology, 3, 166185. Campbell D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105 Carling, A. (1992).
Introducing Neural Networks. Wilmslow, UK: Sigma Press. Carmines, E. G., & Zeller, R. A. (1980). Reliability and validity assessment. Beverly Hills, CA: Sage Publications. Carrol, J. D., Green, P.
E., and Schaffer, C. M. (1986). Interpoint distance comparisons in correspondence analysis. Journal of Marketing Research, 23, 271-280. Carroll, J. D., & Wish, M. (1974). Multidimensional perceptual
models and measurement methods. In E. C. Carterette and M. P. Friedman (Eds.), Handbook of perception. (Vol. 2, pp. 391-447). New York: Academic Press. Cattell, R. B. (1966). The scree test for the
number of factors. Multivariate Behavioral Research, 1, 245-276. Cattell, R. B., & Jaspers, J. A. (1967). A general plasmode for factor analytic exercises and research. Multivariate Behavioral
Research Monographs. Chambers, J. M., Cleveland, W. S., Kleiner, B., & Tukey, P. A. (1983). Graphical methods for data analysis. Bellmont, CA: Wadsworth. Chan, L. K., Cheng, S. W., & Spiring, F.
(1988). A new measure of process capability: Cpm. Journal of Quality Technology, 20, 162-175. Chen, J. (1992). Some results on 2(nk) fractional factorial designs and search for minimum aberration
designs. Annals of Statistics, 20, 2124-2141. Chen, J., & Wu, C. F. J. (1991). Some results on s(nk) fractional factorial designs with minimum aberration or optimal moments. Annals of Statistics, 19,
1028-1041. Chen, J., Sun, D. X., & Wu, C. F. J. (1993). A catalog of two-level and three-level fractional factorial designs with small runs. International Statistical Review, 61, 131-145. Chernoff,
H. (1973). The use of faces to represent points in k-dimensional space graphically. Journal of American Statistical Association, 68, 361-368. Christ, C. (1966). Econometric models and methods. New
York: Wiley. Clarke, G. M., & Cooke, D. (1978). A basic course in statistics. London: Edward Arnold. Clements, J. A. (1989). Process capability calculations for non-normal distributions. Quality
Progress. September, 95-100. Cleveland, W. S. (1979). Robust locally weighted regression and smoothing scatterplots. Journal of the American Statistical Association, 74, 829-836. Cleveland, W. S.
(1984). Graphs in scientific publications. The American Statistician, 38, 270280. Cleveland, W. S. (1985). The elements of graphing data. Monterey, CA: Wadsworth. Cleveland, W. S. (1993). Visualizing
data. Murray Hill, NJ: AT&T. Cleveland, W. S., Harris, C. S., & McGill, R. (1982). Judgements of circle sizes on statistical maps. Journal of the American Statistical Association, 77, 541-547. Cliff,
N. (1983). Some cautions concerning the application of causal modeling methods. Multivariate Behavioral Research, 18, 115-126. Cochran, W. G. (1950). The comparison of percentages in matched samples.
Biometrika, 37, 256-266. Cohen, J. (1977). Statistical power analysis for the behavioral sciences. (Rev. ed.). New York: Academic Press. Cohen, J. (1983). Statistical power analysis for the
behavioral sciences. (2nd Ed.). Mahwah, NJ: Lawrence Erlbaum Associates. Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 9971003. Cole, D. A., Maxwell, S. E., Arvey, R., &
Salas, E. (1993). Multivariate group comparisons of variable systems: MANOVA and structural equation modeling. Psychological Bulletin, 114, 174-184. Connor, W. S., & Young, S. (1984). Fractional
factorial experiment designs for experiments with factors at two and three levels. In R. A. McLean & V. L. Anderson (Eds.), Applied factorial and fractional designs. New York: Marcel Dekker. Connor,
W. S., & Zelen, M. (1984). Fractional factorial experiment designs for factors at three levels. In R. A. McLean & V. L. Anderson (Eds.), Applied factorial and fractional designs. New York: Marcel
Dekker. Conover, W. J. (1974). Some reasons for not using the Yates continuity correction on 2 x 2 contingency tables. Journal of the American Statistical Association, 69, 374-376. Conover, W. J.,
Johnson, M. E., & Johnson, M. M. (1981). A comparative study of tests for homogeneity of variances with applications to the outer continental shelf bidding data. Technometrics, 23, 357-361. Cook, R.
D. (1977). Detection of influential observations in linear regression. Technometrics, 19, 15-18. Cook, R. D., & Nachtsheim, C. J. (1980). A comparison of algorithms for constructing exact Doptimal
designs. Technometrics, 22, 315-324. Cook, R. D., & Weisberg, S. (1982). Residuals and Influence in Regression. (Monographs on statistics and applied probability). New York: Chapman and Hall. Cooke,
D., Craven, A. H., & Clarke, G. M. (1982). Basic statistical computing. London: Edward Arnold. Cooley, J. W., & Tukey, J. W. (1965). An algorithm for the machine computation of complex Fourier
series. Mathematics of Computation, 19, 297-301. Cooley, W. W., & Lohnes, P. R. (1971). Multivariate data analysis. New York: Wiley. Cooley, W. W., & Lohnes, P. R. (1976). Evaluation research in
education. New York: Wiley. Coombs, C. H. (1950). Psychological scaling without a unit of measurement. Psychological Review, 57, 145-158. Coombs, C. H. (1964). A theory of data. New York: Wiley.
Corballis, M. C., & Traub, R. E. (1970). Longitudinal factor analysis. Psychometrika, 35, 79-98. Corbeil, R. R., & Searle, S. R. (1976). Restricted maximum likelihood (REML) estimation of variance
components in the mixed model. Technometrics, 18, 31-38. Cormack, R. M. (1971). A review of classification. Journal of the Royal Statistical Society, 134, 321-367. Cornell, J. A. (1990a). How to run
mixture experiments for product quality. Milwaukee, Wisconsin: ASQC. Cornell, J. A. (1990b). Experiments with mixtures: designs, models, and the analysis of mixture data (2nd ed.). New York: Wiley.
Cox, D. R. (1957). Note on grouping. Journal of the American Statistical Association, 52, 543547. Cox, D. R. (1959). The analysis of exponentially distributed life-times with two types of failures.
Journal of the Royal Statistical Society, 21, 411-421. Cox, D. R. (1964). Some applications of exponential ordered scores. Journal of the Royal Statistical Society, 26, 103-110. Cox, D. R. (1970).
The analysis of binary data. New York: Halsted Press. Cox, D. R. (1972). Regression models and life tables. Journal of the Royal Statistical Society, 34, 187-220. Cox, D. R., & Oakes, D. (1984).
Analysis of survival data. New York: Chapman & Hall. Cramer, H. (1946). Mathematical methods in statistics. Princeton, NJ: Princeton University Press. Cristianini, N., & Shawe-Taylor, J. (2000).
Introduction to support vector machines and other kernel-based learning methods. Cambridge, UK: Cambridge University Press. Crowley, J., & Hu, M. (1977). Covariance analysis of heart transplant
survival data. Journal of the American Statistical Association, 72, 27-36. Cudeck, R. (1989). Analysis of correlation matrices using covariance structure models. Psychological Bulletin, 105, 317-327.
Cudeck, R., & Browne, M. W. (1983). Cross-validation of covariance structures. Multivariate Behavioral Research, 18, 147-167. Cutler, S. J., & Ederer, F. (1958). Maximum utilization of the life table
method in analyzing survival. Journal of Chronic Diseases, 8, 699-712. Dahlquist, G., & Bjorck, A. (1974). Numerical Methods. Englewood Cliffs, NJ: Prentice-Hall. Daniel, C. (1976). Applications of
statistics to industrial experimentation. New York: Wiley. Daniell, P. J. (1946). Discussion on symposium on autocorrelation in time series. Journal of the Royal Statistical Society, Suppl. 8, 88-90.
Daniels, H. E. (1939). The estimation of components of variance. Supplement to the Journal of the Royal Statistical Society, 6, 186-197. Darlington, R. B. (1990). Regression and linear models. New
York: McGraw-Hill. Darlington, R. B., Weinberg, S., & Walberg, H. (1973). Canonical variate analysis and related techniques. Review of Educational Research, 43, 433-454. DataMyte (1992). DataMyte
handbook. Minnetonka, MN. David, H. A. (1995). First (?) occurrence of common terms in mathematical statistics. The American Statistician, 49, 121-133. Davies, P. M., & Coxon, A. P. M. (1982). Key
texts in multidimensional scaling. Exeter, NH: Heinemann Educational Books. Davis, C. S., & Stephens, M. A. Approximate percentage points using Pearson curves. Applied Statistics, 32, 322-327. De
Boor, C. (1978). A practical guide to splines. New York: Springer-Verlag. DeCarlo, L. T. (1998). Signal detection theory and generalized linear models, Psychological Methods, 186-200. De Gruijter, P.
N. M., & Van Der Kamp, L. J. T. (Eds.). (1976). Advances in psychological and educational measurement. New York: Wiley. de Jong, S (1993) SIMPLS: An Alternative Approach to Partial Least Squares
Regression, Chemometrics and Intelligent Laboratory Systems, 18, 251-263 de Jong, S and Kiers, H. (1992) Principal Covariates regression, Chemometrics and Intelligent Laboratory Systems, 14, 155-164
Deming, S. N., & Morgan, S. L. (1993). Experimental design: A chemometric approach. (2nd ed.). Amsterdam, The Netherlands: Elsevier Science Publishers B.V. Deming, W. E., & Stephan, F. F. (1940). The
sampling procedure of the 1940 population census. Journal of the American Statistical Association, 35, 615-630. Dempster, A. P. (1969). Elements of Continuous Multivariate Analysis. San Francisco:
Addison-Wesley. Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39, 1-38. Dennis, J. E., &
Schnabel, R. B. (1983). Numerical methods for unconstrained optimization and nonlinear equations. Englewood Cliffs, NJ: Prentice Hall. Derringer, G., & Suich, R. (1980). Simultaneous optimization of
several response variables. Journal of Quality Technology, 12, 214-219. Diamond, W. J. (1981). Practical experimental design. Belmont, CA: Wadsworth. Dijkstra, T. K. (1990). Some properties of
estimated scale invariant covariance structures. Psychometrika, 55, 327-336. Dinneen, L. C., & Blakesley, B. C. (1973). A generator for the sampling distribution of the Mann Whitney U statistic.
Applied Statistics, 22, 269-273. Dixon, W. J. (1954). Power under normality of several non-parametric tests. Annals of Mathematical Statistics, 25, 610-614. Dixon, W. J., & Massey, F. J. (1983).
Introduction to statistical analysis (4th ed.). New York: McGraw-Hill. Dobson, A. J. (1990). An introduction to generalized linear models. New York: Chapman & Hall. Dodd, B. (1979). Lip reading in
infants: Attention to speech presented in- and out-of-synchrony. Cognitive Psychology, 11, 478-484. Dodge, Y. (1985). Analysis of experiments with missing data. New York: Wiley. Dodge, Y., Fedorov,
V. V., & Wynn, H. P. (1988). Optimal design and analysis of experiments. New York: North-Holland. Dodson, B. (1994). Weibull analysis. Milwaukee, Wisconsin: ASQC. Doyle, P. (1973). The use of
Automatic Interaction Detection and similar search procedures. Operational Research Quarterly, 24, 465-467. Duncan, A. J. (1974). Quality control and industrial statistics. Homewood, IL: Richard D.
Irwin. Duncan, O. D., Haller, A. O., & Portes, A. (1968). Peer influence on aspiration: a reinterpretation. American Journal of Sociology, 74, 119-137. Dunnett, C. W. (1955). A multiple comparison
procedure for comparing several treatments with a control. Journal of the American Statistical Association, 50, 1096-1121. Durbin, J. (1970). Testing for serial correlation in least-squares
regression when some of the regressors are lagged dependent variables. Econometrica, 38, 410-421. Durbin, J., & Watson, G. S. (1951). Testing for serial correlations in least squares regression. II.
Biometrika, 38, 159-178. Dykstra, O. Jr. (1971). The augmentation of experimental data to maximize |X'X|. Technometrics, 13, 682-688. Eason, E. D., & Fenton, R. G. (1974). A comparison of numerical
optimization methods for engineering design. ASME Paper 73-DET-17. Edelstein, H. A. (1999). Introduction to data mining and knowledge discovery (3rd ed.). Potomac, MD: Two Crows Corp. Edgeworth, F. Y. (1885). Methods of statistics. In Jubilee Volume, Royal Statistical Society, 181-217. Efron, B. (1982). The jackknife, the bootstrap, and other resampling plans. Philadelphia, PA: Society for
Industrial and Applied Mathematics. Eisenhart, C. (1947). The assumptions underlying the analysis of variance. Biometrics, 3, 1-21. Elandt-Johnson, R. C., & Johnson, N. L. (1980). Survival models and
data analysis. New York: Wiley. Elliott, D. F., & Rao, K. R. (1982). Fast transforms: Algorithms, analyses, applications. New York: Academic Press. Elsner, J. B., Lehmiller, G. S., & Kimberlain, T.
B. (1996). Objective classification of Atlantic hurricanes. Journal of Climate, 9, 2880-2889. Enslein, K., Ralston, A., & Wilf, H. S. (1977). Statistical methods for digital computers. New York:
Wiley. Euler, L. (1782). Recherches sur une nouvelle espece de quarres magiques. Verhandelingen uitgegeven door het zeeuwsch Genootschap der Wetenschappen te Vlissingen, 9, 85-239. (Reproduced in
Leonhardi Euleri Opera Omnia. Sub auspiciis societatis scientiarium naturalium helveticae, 1st series, 7, 291-392.) Evans, M., Hastings, N., & Peacock, B. (1993). Statistical Distributions. New York:
Wiley. Everitt, B. S. (1977). The analysis of contingency tables. London: Chapman & Hall. Everitt, B. S. (1984). An introduction to latent variable models. London: Chapman and Hall. Ewan, W. D.
(1963). When and how to use Cu-sum charts. Technometrics, 5, 1-32. Fahlman, S.E. (1988). Faster-learning variations on back-propagation: an empirical study. In D. Touretzky, G.E. Hinton and T.J.
Sejnowski (Eds.), Proceedings of the 1988 Connectionist Models Summer School, 38-51. San Mateo, CA: Morgan Kaufmann. Fausett, L. (1994). Fundamentals of Neural Networks. New York: Prentice Hall.
Fayyad, U. M., Piatetsky-Shapiro, G., Smyth, P., & Uthurusamy, R. (Eds.). (1996). Advances in Knowledge Discovery and Data Mining. Cambridge, MA: The MIT Press. Fayyad, U. S., & Uthurusamy, R. (Eds.)
(1994). Knowledge Discovery in Databases; Papers from the 1994 AAAI Workshop. Menlo Park, CA: AAAI Press. Feigl, P., & Zelen, M. (1965). Estimation of exponential survival probabilities with
concomitant information. Biometrics, 21, 826-838. Feller, W. (1948). On the Kolmogorov-Smirnov limit theorems for empirical distributions. Annals of Mathematical Statistics, 19, 177-189. Fetter, R.
B. (1967). The quality control system. Homewood, IL: Richard D. Irwin. Fienberg, S. E. (1977). The analysis of cross-classified categorical data. Cambridge, MA: MIT Press. Finn, J. D. (1974). A
general model for multivariate analysis. New York: Holt, Rinehart & Winston. Finn, J. D. (1977). Multivariate analysis of variance and covariance. In K. Enslein, A. Ralston, and H. S. Wilf (Eds.),
Statistical methods for digital computers. Vol. III. (pp. 203-264). New York: Wiley. Finney, D. J. (1944). The application of probit analysis to the results of mental tests. Psychometrika, 9, 31-39.
Finney, D. J. (1971). Probit analysis. Cambridge: Cambridge University Press. Firmin, R. (2002). Advanced time series modeling for semiconductor process control: The fab as a time machine. In
Mackulak, G. T., Fowler, J. W., & Schomig, A. (eds.). Proceedings of the International Conference on Modeling and Analysis of Semiconductor Manufacturing (MASM 2002). Fisher, R. A. (1918). The
correlation between relatives on the supposition of Mendelian inheritance. Transactions of the Royal Society of Edinburgh, 52, 399-433. Fisher, R. A. (1922). On the interpretation of Chi-square from
contingency tables, and the calculation of p. Journal of the Royal Statistical Society, 85, 87-94. Fisher, R. A. (1922). On the mathematical foundations of theoretical statistics. Philosophical
Transactions of the Royal Society of London, Ser. A, 222, 309-368. Fisher, R. A. (1926). The arrangement of field experiments. Journal of the Ministry of Agriculture of Great Britain, 33, 503-513.
Fisher, R. A. (1928). The general sampling distribution of the multiple correlation coefficient. Proceedings of the Royal Society of London, Ser. A, 121, 654-673. Fisher, R. A. (1935). The Design of
Experiments. Edinburgh: Oliver and Boyd. Fisher, R. A. (1936). Statistical Methods for Research Workers (6th ed.). Edinburgh: Oliver and Boyd. Fisher, R. A. (1936). The use of multiple measurements
in taxonomic problems. Annals of Eugenics, 7, 179-188. Fisher, R. A. (1938). The mathematics of experimentation. Nature, 142, 442-443. Fisher, R. A., & Yates, F. (1934). The 6 x 6 Latin squares.
Proceedings of the Cambridge Philosophical Society, 30, 492-507. Fisher, R. A., & Yates, F. (1938). Statistical Tables for Biological, Agricultural and Medical Research. London: Oliver and Boyd.
Fleishman, A. E. (1980). Confidence intervals for correlation ratios. Educational and Psychological Measurement, 40, 659-670. Fletcher, R. (1969). Optimization. New York: Academic Press. Fletcher, R.,
& Powell, M. J. D. (1963). A rapidly convergent descent method for minimization. Computer Journal, 6, 163-168. Fletcher, R., & Reeves, C. M. (1964). Function minimization by conjugate gradients.
Computer Journal, 7, 149-154. Fomby, T.B., Hill, R.C., & Johnson, S.R. (1984). Advanced econometric methods. New York: Springer-Verlag. Ford Motor Company, Ltd. & GEDAS (1991). Test examples for SPC
software. Fouladi, R. T. (1991). A comprehensive examination of procedures for testing the significance of a correlation matrix and its elements. Unpublished master's thesis, University of British
Columbia, Vancouver, British Columbia, Canada. Franklin, M. F. (1984). Constructing tables of minimum aberration p^(n-m) designs. Technometrics, 26, 225-232. Fraser, C., & McDonald, R. P. (1988).
COSAN: Covariance structure analysis. Multivariate Behavioral Research, 23, 263-265. Freedman, L. S. (1982). Tables of the number of patients required in clinical trials using the logrank test.
Statistics in Medicine, 1, 121-129. Friedman, J. (1991). Multivariate adaptive regression splines (with discussion). Annals of Statistics, 19, 1-141. Friedman, J. H. (1993). Estimating functions of
mixed ordinal and categorical variables using adaptive splines. in S. Morgenthaler, E. Ronchetti, & W. A. Stahel (Eds.) (1993, p. 73-113). New directions in statistical data analysis and robustness.
Berlin: Birkhäuser Verlag. Friedman, J. H. (1999a). Greedy function approximation: A gradient boosting machine. IMS 1999 Reitz Lecture. Friedman, J. H. (1999b). Stochastic gradient boosting. Stanford
University. Friedman, M. (1937). The use of ranks to avoid the assumption of normality implicit in the analysis of variance. Journal of the American Statistical Association, 32, 675-701. Friedman, M.
(1940). A comparison of alternative tests of significance for the problem of m rankings. Annals of Mathematical Statistics, 11, 86-92. Fries, A., & Hunter, W. G. (1980). Minimum aberration 2^(k-p) designs. Technometrics, 22, 601-608. Frost, P. A. (1975). Some properties of the Almon lag technique when one searches for degree of polynomial and lag. Journal of the American Statistical
Association, 70, 606-612. Fuller, W. A. (1976). Introduction to statistical time series. New York: Wiley. Gaddum, J. H. (1945). Lognormal distributions. Nature, 156, 463-466. Gale, N., & Halperin, W.
C. (1982). A case for better graphics: The unclassed choropleth map. The American Statistician, 36, 330-336. Galil, Z., & Kiefer, J. (1980). Time- and space-saving computer methods, related to
Mitchell's DETMAX, for finding D-optimum designs. Technometrics, 22, 301-313. Galton, F. (1882). Report of the anthropometric committee. In Report of the 51st Meeting of the British Association for
the Advancement of Science, 1881, 245-260. Galton, F. (1885). Section H. Anthropology. Opening address by Francis Galton. Nature, 32, 507- 510. Galton, F. (1885). Some results of the anthropometric
laboratory. Journal of the Anthropological Institute, 14, 275-287. Galton, F. (1888). Co-relations and their measurement. Proceedings of the Royal Society of London, 45, 135-145. Galton, F. (1889).
Natural Inheritance. London: Macmillan. Ganguli, M. (1941). A note on nested sampling. Sankhya, 5, 449-452. Gara, M. A., & Rosenberg, S.
(1979). The identification of persons as supersets and subsets in free-response personality descriptions. Journal of Personality and Social Psychology, 37, 2161-2170. Gara, M. A., & Rosenberg, S.
(1981). Linguistic factors in implicit personality theory. Journal of Personality and Social Psychology, 41, 450-457. Gardner, E. S., Jr. (1985). Exponential smoothing: The state of the art. Journal
of Forecasting, 4, 1-28. Garthwaite, P. H. (1994). An interpretation of partial least squares. Journal of the American Statistical Association, 89, 122-127. Garvin, D. A. (1987). Competing on
the eight dimensions of quality. Harvard Business Review, November/December, 101-109. Gatsonis, C., & Sampson, A. R. (1989). Multiple correlation: exact power and sample size calculations.
Psychological Bulletin, 106, 516-524. Gbur, E., Lynch, M., & Weidman, L. (1986). An analysis of nine rating criteria on 329 U. S. metropolitan areas. Proceedings of the Section on Statistical
Graphics, American Statistical Association, 104-109. Gedye, R. (1968). A manager's guide to quality and reliability. New York: Wiley. Gehan, E. A. (1965a). A generalized Wilcoxon test for comparing
arbitrarily singly-censored samples. Biometrika, 52, 203-223. Gehan, E. A. (1965b). A generalized two-sample Wilcoxon test for doubly-censored data. Biometrika, 52, 650-653. Gehan, E. A., & Siddiqui,
M. M. (1973). Simple regression methods for survival time studies. Journal of the American Statistical Association, 68, 848-856. Gehan, E. A., & Thomas, D. G. (1969). The performance of some two
sample tests in small samples with and without censoring. Biometrika, 56, 127-132. Geladi, P., & Kowalski, B. R. (1986). Partial least squares regression: A tutorial. Analytica Chimica Acta, 185,
1-17. Gerald, C. F., & Wheatley, P. O. (1989). Applied numerical analysis (4th ed.). Reading, MA: Addison Wesley. Gibbons, J. D. (1976). Nonparametric methods for quantitative analysis. New York:
Holt, Rinehart, & Winston. Gibbons, J. D. (1985). Nonparametric statistical inference (2nd ed.). New York: Marcel Dekker. Gifi, A. (1981). Nonlinear multivariate analysis. Department of Data Theory,
The University of Leiden. The Netherlands. Gifi, A. (1990). Nonlinear multivariate analysis. New York: Wiley. Gill, P. E., & Murray, W. (1972). Quasi-Newton methods for unconstrained optimization.
Journal of the Institute of Mathematics and its Applications, 9, 91-108. Gill, P. E., & Murray, W. (1974). Numerical methods for constrained optimization. New York: Academic Press. Gini, C. (1911).
Considerazioni sulle probabilita a posteriori e applicazioni al rapporto dei sessi nelle nascite umane. Studi Economico-Giuridici della Universita de Cagliari, Anno III, 133-171. Glass, G. V., & Hopkins, K. D. (1996). Statistical methods in education and psychology. Needham Heights, MA: Allyn & Bacon. Glass, G. V., & Stanley, J. (1970). Statistical methods in education and psychology.
Englewood Cliffs, NJ: Prentice-Hall. Glasser, M. (1967). Exponential survival with covariance. Journal of the American Statistical Association, 62, 561-568. Gnanadesikan, R., Roy, S., & Srivastava,
J. (1971). Analysis and design of certain quantitative multiresponse experiments. Oxford: Pergamon Press, Ltd. Goldberg, D. E. (1989). Genetic Algorithms. Reading, MA: Addison Wesley. Golub, G. and
Kahan, W. (1965). Calculating the singular values and pseudo-inverse of a matrix. SIAM Journal on Numerical Analysis, Series B, 2 (2), 205-224. Golub, G. H., & Van Loan, C. F. (1996). Matrix computations. Baltimore: The Johns Hopkins University Press. Golub, G. H., & Van Loan, C. F. (1983). Matrix computations. Baltimore: Johns Hopkins University Press. Gompertz, B. (1825). On the nature of the function expressive of the
law of human mortality. Philosophical Transactions of the Royal Society of London, Series A, 115, 513-580. Goodman, L. A., & Kruskal, W. H. (1972). Measures of association for cross-classifications IV: Simplification of asymptotic variances. Journal of the American Statistical Association, 67, 415-421. Goodman, L. A. (1954). Kolmogorov-Smirnov tests for psychological research. Psychological
Bulletin, 51, 160-168. Goodman, L. A. (1971). The analysis of multidimensional contingency tables: Stepwise procedures and direct estimation methods for building models for multiple classification.
Technometrics, 13, 33-61. Goodman, L. A., & Kruskal, W. H. (1954). Measures of association for cross-classifications. Journal of the American Statistical Association, 49, 732-764. Goodman, L. A., &
Kruskal, W. H. (1959). Measures of association for cross-classifications II: Further discussion and references. Journal of the American Statistical Association, 54, 123-163. Goodman, L. A., &
Kruskal, W. H. (1963). Measures of association for cross-classifications III: Approximate sampling theory. Journal of the American Statistical Association, 58, 310-364. Goodnight, J. H. (1980). Tests
of hypotheses in fixed effects linear models. Communications in Statistics, A9, 167-180. Gorman, R.P., & Sejnowski, T.J. (1988). Analysis of hidden units in a layered network trained to classify
sonar targets. Neural Networks 1 (1), 75-89. Grant, E. L., & Leavenworth, R. S. (1980). Statistical quality control (5th ed.). New York: McGraw-Hill. Green, P. E., & Carmone, F. J. (1970).
Multidimensional scaling and related techniques in marketing analysis. Boston: Allyn & Bacon. Green, P. J. & Silverman, B. W. (1994) Nonparametric regression and generalized linear models: A
roughness penalty approach. New York: Chapman & Hall. Greenacre, M. J. & Hastie, T. (1987). The geometric interpretation of correspondence analysis. Journal of the American Statistical Association,
82, 437-447. Greenacre, M. J. (1984). Theory and applications of correspondence analysis. New York: Academic Press. Greenacre, M. J. (1988). Correspondence analysis of multivariate categorical data
by weighted least-squares. Biometrika, 75, 457-467. Greenhouse, S. W., & Geisser, S. (1958). Extension of Box's results on the use of the F distribution in multivariate analysis. Annals of
Mathematical Statistics, 29, 95-112. Greenhouse, S. W., & Geisser, S. (1959). On methods in the analysis of profile data. Psychometrika, 24, 95-112. Grizzle, J. E. (1965). The two-period change-over
design and its use in clinical trials. Biometrics, 21, 467-480. Gross, A. J., & Clark, V. A. (1975). Survival distributions: Reliability applications in the medical sciences. New York: Wiley. Gruska,
G. F., Mirkhani, K., & Lamberson, L. R. (1989). Non-Normal data Analysis. Garden City, MI: Multiface Publishing. Gruvaeus, G., & Wainer, H. (1972). Two additions to hierarchical cluster analysis. The
British Journal of Mathematical and Statistical Psychology, 25, 200-206. Guttman, L. (1954). A new approach to factor analysis: the radex. In P. F. Lazarsfeld (Ed.), Mathematical thinking in the
social sciences. New York: Columbia University Press. Guttman, L. (1968). A general nonmetric technique for finding the smallest coordinate space for a configuration of points. Psychometrika, 33, 469-506. Guttman, L. B. (1977). What is not what in statistics. The Statistician, 26, 81-107. Haberman, S. J. (1972). Loglinear fit for contingency tables. Applied Statistics, 21, 218-225. Haberman,
S. J. (1974). The analysis of frequency data. Chicago: University of Chicago Press. Hahn, G. J., & Shapiro, S. S. (1967). Statistical models in engineering. New York: Wiley. Hakstian, A. R., Rogers,
W. D., & Cattell, R. B. (1982). The behavior of numbers of factors rules with simulated data. Multivariate Behavioral Research, 17, 193-219. Hald, A. (1949). Maximum likelihood estimation of the
parameters of a normal distribution which is truncated at a known point. Skandinavisk Aktuarietidskrift, 1949, 119-134. Hald, A. (1952). Statistical theory with engineering applications. New York:
Wiley. Han, J., & Kamber, M. (2000). Data mining: Concepts and techniques. New York: Morgan Kaufmann. Han, J., Lakshmanan, L. V. S., & Pei, J. (2001). Scalable frequent-pattern mining methods: An
overview. In T. Fawcett (Ed.). KDD 2001: Tutorial Notes of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: The Association for Computing Machinery.
Harlow, L. L., Mulaik, S. A., & Steiger, J. H. (Eds.) (1997). What if there were no significance tests. Mahwah, NJ: Lawrence Erlbaum Associates. Harman, H. H. (1967). Modern factor analysis. Chicago:
University of Chicago Press. Harris, R. J. (1976). The invalidity of partitioned U tests in canonical correlation and multivariate analysis of variance. Multivariate Behavioral Research, 11, 353-365.
Harrison, D. & Rubinfeld, D. L. (1978). Hedonic prices and the demand for clean air. Journal of Environmental Economics and Management, 5, 81-102. Hart, K. M., & Hart, R. F. (1989). Quantitative
methods for quality improvement. Milwaukee, WI: ASQC Quality Press. Hartigan, J. A., & Wong, M. A. (1979). Algorithm AS 136: A k-means clustering algorithm. Applied Statistics, 28, 100-108. Hartigan, J. A.
(1975). Clustering algorithms. New York: Wiley. Hartley, H. O. (1959). Smallest composite designs for quadratic response surfaces. Biometrics, 15, 611-624. Harville, D. A. (1977). Maximum likelihood
approaches to variance component estimation and to related problems. Journal of the American Statistical Association, 72, 320-340. Haskell, A. C. (1922). Graphic Charts in Business. New York: Codex.
Hastie, T., Tibshirani, R., & Friedman, J. H. (2001). The elements of statistical learning: Data mining, inference, and prediction. New York: Springer. Haviland, R. P. (1964). Engineering
reliability and long life design. Princeton, NJ: Van Nostrand. Hayduk, L. A. (1987). Structural equation modelling with LISREL: Essentials and advances. Baltimore: The Johns Hopkins University Press.
Haykin, S. (1994). Neural Networks: A Comprehensive Foundation. New York: Macmillan College Publishing.
Hays, W. L. (1981). Statistics (3rd ed.). New York: CBS College Publishing. Hays, W. L. (1988). Statistics (4th ed.). New York: CBS College Publishing. Heiberger, R. M. (1989). Computation for the
analysis of designed experiments. New York: Wiley. Hemmerle, W. J., & Hartley, H. O. (1973). Computing maximum likelihood estimates for the mixed A.O.V. model using the W transformation.
Technometrics, 15, 819-831. Henley, E. J., & Kumamoto, H. (1980). Reliability engineering and risk assessment. New York: Prentice-Hall. Hettmansperger, T. P. (1984). Statistical inference based on
ranks. New York: Wiley. Hibbs, D. (1974). Problems of statistical estimation and causal inference in dynamic time series models. In H. Costner (Ed.), Sociological Methodology 1973/1974 (pp. 252-308).
San Francisco: Jossey-Bass. Hill, I. D., Hill, R., & Holder, R. L. (1976). Fitting Johnson curves by moments. Applied Statistics. 25, 190-192. Hilton, T. L. (1969). Growth study annotated
bibliography. Princeton, NJ: Educational Testing Service Progress Report 69-11. Hochberg, J., & Krantz, D. H. (1986). Perceptual properties of statistical graphs. Proceedings of the Section on
Statistical Graphics, American Statistical Association, 29-35. Hocking, R. R. (1985). The analysis of linear models. Monterey, CA: Brooks/Cole. Hocking, R. R. (1996). Methods and Applications of
Linear Models. Regression and the Analysis of Variance. New York: Wiley. Hocking, R. R., & Speed, F. M. (1975). A full rank analysis of some linear model problems. Journal of the American Statistical
Association, 70, 707-712. Hoerl, A. E. (1962). Application of ridge analysis to regression problems. Chemical Engineering Progress, 58, 54-59. Hoerl, A. E., & Kennard, R. W. (1970). Ridge regression:
Applications to nonorthogonal problems. Technometrics, 12, 69-82. Hoff, J. C. (1983). A practical guide to Box-Jenkins forecasting. London: Lifetime Learning Publications. Hoffman, D. L. & Franke, G.
R. (1986). Correspondence analysis: Graphical representation of categorical data in marketing research. Journal of Marketing Research, 13, 213-227. Hogg, R. V., & Craig, A. T. (1970). Introduction to
mathematical statistics. New York: Macmillan. Holzinger, K. J., & Swineford, F. (1939). A study in factor analysis: The stability of a bi-factor solution. University of Chicago: Supplementary
Educational Monographs, No. 48. Hooke, R., & Jeeves, T. A. (1961). Direct search solution of numerical and statistical problems. Journal of the Association for Computing Machinery, 8, 212-229.
Hosmer, D. W., & Lemeshow, S. (1989). Applied logistic regression. New York: Wiley. Hotelling, H. (1947). Multivariate quality control. In Eisenhart, Hastay, and Wallis (Eds.), Techniques of
Statistical Analysis. New York: McGraw-Hill. Hotelling, H., & Pabst, M. R. (1936). Rank correlation and tests of significance involving no assumption of normality. Annals of Mathematical Statistics,
7, 29-43. Hoyer, W., & Ellis, W. C. (1996). A graphical exploration of SPC. Quality Progress, 29, 65-73. Hsu, P. L. (1938). Contributions to the theory of Student's t test as applied to the problem
of two samples. Statistical Research Memoirs, 2, 1-24. Huba, G. J., & Harlow, L. L. (1987). Robust structural equation models: implications for developmental psychology. Child Development, 58,
147-166. Huberty, C. J. (1975). Discriminant analysis. Review of Educational Research, 45, 543-598. Hunter, A., Kennedy, L., Henry, J., & Ferguson, R.I. (2000). Application of Neural Networks and
Sensitivity Analysis to improved prediction of Trauma Survival. Computer Methods and Algorithms in Biomedicine 62, 11-19. Huynh, H., & Feldt, L. S. (1970). Conditions under which mean square ratios
in repeated measures designs have exact F-distributions. Journal of the American Statistical Association, 65, 1582-1589. Ireland, C. T., & Kullback, S. (1968). Contingency tables with given
marginals. Biometrika, 55, 179-188. Jaccard, J., Weber, J., & Lundmark, J. (1975). A multitrait-multimethod factor analysis of four attitude assessment procedures. Journal of Experimental Social
Psychology, 11, 149-154. Jacobs, D. A. H. (Ed.). (1977). The state of the art in numerical analysis. London: Academic Press. Jacobs, R.A. (1988). Increased Rates of Convergence Through Learning Rate
Adaptation. Neural Networks 1 (4), 295-307. Jacoby, S. L. S., Kowalik, J. S., & Pizzo, J. T. (1972). Iterative methods for nonlinear optimization problems. Englewood Cliffs, NJ: Prentice-Hall. James,
L. R., Mulaik, S. A., & Brett, J. M. (1982). Causal analysis. Assumptions, models, and data. Beverly Hills, CA: Sage Publications. Jardine, N., & Sibson, R. (1971). Mathematical taxonomy. New York:
Wiley. Jastrow, J. (1892). On the judgment of angles and position of lines. American Journal of Psychology, 5, 214-248. Jenkins, G. M., & Watts, D. G. (1968). Spectral analysis and its applications.
San Francisco: Holden-Day. Jennrich, R. I. (1970). An asymptotic test for the equality of two correlation matrices. Journal of the American Statistical Association, 65, 904-912. Jennrich, R. I.
(1977). Stepwise regression. In K. Enslein, A. Ralston, & H.S. Wilf (Eds.), Statistical methods for digital computers. New York: Wiley. Jennrich, R. I., & Moore, R. H. (1975). Maximum likelihood
estimation by means of nonlinear least squares. Proceedings of the Statistical Computing Section, American Statistical Association, 57-65. Jennrich, R. I., & Sampson, P. F. (1968). Application of
stepwise regression to non-linear estimation. Technometrics, 10, 63-72. Jennrich, R. I., & Sampson, P. F. (1976). Newton-Raphson and related algorithms for maximum likelihood variance component
estimation. Technometrics, 18, 11-17. Jennrich, R. I., & Schluchter, M. D. (1986). Unbalanced repeated-measures models with structured covariance matrices. Biometrics, 42, 805-820. Jennrich, R. I.
(1977). Stepwise discriminant analysis. In K. Enslein, A. Ralston, & H.S. Wilf (Eds.), Statistical methods for digital computers. New York: Wiley. Johnson, L. W., & Riess, R. D. (1982). Numerical
Analysis (2nd ed.). Reading, MA: Addison Wesley. Johnson, N. L. (1961). A simple theoretical approach to cumulative sum control charts. Journal of the American Statistical Association, 56, 83-92.
Johnson, N. L. (1965). Tables to facilitate fitting SU frequency curves. Biometrika, 52, 547. Johnson, N. L., & Kotz, S. (1970). Continuous univariate distributions, Vol I and II. New York: Wiley.
Johnson, N. L., Kotz, S., & Balakrishnan, N. (1995). Continuous univariate distributions, Vol. II (2nd ed.). New York: Wiley. Johnson, N. L., & Leone, F. C. (1962). Cumulative sum control charts -
mathematical principles applied to their construction and use. Industrial Quality Control, 18, 15-21. Johnson, N. L., Nixon, E., & Amos, D. E. (1963). Table of percentage points of Pearson curves. Biometrika, 50, 459. Johnson, N. L., Nixon, E., Amos, D. E., & Pearson, E. S. (1963). Table of percentage points of Pearson curves for given √β1 and β2, expressed in standard measure. Biometrika, 50,
459-498. Johnson, P. (1987). SPC for short runs: A programmed instruction workbook. Southfield, MI: Perry Johnson. Johnson, S. C. (1967). Hierarchical clustering schemes. Psychometrika, 32, 241-254.
Johnston, J. (1972). Econometric methods. New York: McGraw-Hill. Jöreskog, K. G. (1973). A general model for estimating a linear structural equation system. In A. S. Goldberger and O. D. Duncan (Eds.), Structural Equation Models in the Social Sciences. New York: Seminar Press. Jöreskog, K. G. (1974). Analyzing psychological data by structural analysis of covariance matrices. In D. H. Krantz, R. C. Atkinson, R. D. Luce, and P. Suppes (Eds.), Contemporary Developments in Mathematical Psychology, Vol. II. New York: W. H. Freeman and Company. Jöreskog, K. G. (1978). Structural analysis of covariance and correlation matrices. Psychometrika, 43, 443-477. Jöreskog, K. G., & Lawley, D. N. (1968). New methods in maximum likelihood factor analysis. British Journal of Mathematical and Statistical Psychology, 21, 85-96. Jöreskog, K. G., & Sörbom, D. (1979). Advances in factor analysis and structural equation models. Cambridge, MA: Abt Books. Jöreskog, K. G., & Sörbom, D. (1982). Recent developments in structural equation modeling. Journal of Marketing Research, 19, 404-416. Jöreskog, K. G., & Sörbom, D. (1984). Lisrel VI. Analysis of linear structural relationships by maximum likelihood, instrumental variables, and least squares methods. Mooresville, Indiana: Scientific Software. Jöreskog, K. G., & Sörbom, D. (1989). Lisrel 7. A guide to the program and applications.
Chicago, Illinois: SPSS Inc. Judge, G. G., Griffith, W. E., Hill, R. C., Luetkepohl, H., & Lee, T. S. (1985). The theory and practice of econometrics. New York: Wiley. Juran, J. M. (1960). Pareto,
Lorenz, Cournot, Bernoulli, Juran and others. Industrial Quality Control, 17, 25. Juran, J. M. (1962). Quality control handbook. New York: McGraw-Hill. Juran, J. M., & Gryna, F. M. (1970). Quality
planning and analysis. New York: McGraw-Hill. Juran, J. M., & Gryna, F. M. (1980). Quality planning and analysis (2nd ed.). New York: McGraw-Hill. Juran, J. M., & Gryna, F. M. (1988). Juran's quality
control handbook (4th ed.). New York: McGraw-Hill. Kachigan, S. K. (1986). Statistical analysis: An interdisciplinary introduction to univariate & multivariate methods. New York: Radius Press.
Kackar, R. M. (1985). Off-line quality control, parameter design, and the Taguchi method. Journal of Quality Technology, 17, 176-188. Kackar, R. M. (1986). Taguchi's quality philosophy: Analysis and
commentary. Quality Progress, 19, 21-29. Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press. Kaiser, H. F. (1958).
The varimax criterion for analytic rotation in factor analysis. Psychometrika, 23, 187-200. Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and
Psychological Measurement, 20, 141-151. Kalbfleisch, J. D., & Prentice, R. L. (1980). The statistical analysis of failure time data. New York: Wiley. Kane, V. E. (1986). Process capability indices.
Journal of Quality Technology, 18, 41-52. Kaplan, E. L., & Meier, P. (1958). Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, 53, 457-481.
Karsten, K. G. (1925). Charts and graphs. New York: Prentice-Hall. Kass, G. V. (1980). An exploratory technique for investigating large quantities of categorical data. Applied Statistics, 29,
119-127. Keats, J. B., & Lawrence, F. P. (1997). Weibull maximum likelihood parameter estimates with censored data. Journal of Quality Technology, 29, 105-110. Keeves, J. P. (1972). Educational
environment and student achievement. Melbourne: Australian Council for Educational Research. Kendall, M. G. (1940). Note on the distribution of quantiles for large samples. Supplement to the Journal
of the Royal Statistical Society, 7, 83-85. Kendall, M. G. (1948). Rank correlation methods (1st ed.). London: Griffin. Kendall, M. G. (1975). Rank correlation methods (4th ed.). London: Griffin.
Kendall, M. G. (1984). Time Series. New York: Oxford University Press. Kendall, M., & Ord, J. K. (1990). Time series (3rd ed.). London: Griffin. Kendall, M., & Stuart, A. (1977). The advanced theory
of statistics (Vol. 1). New York: Macmillan. Kendall, M., & Stuart, A. (1979). The advanced theory of statistics (Vol. 2). New York: Hafner. Kennedy, A. D., & Gehan, E. A. (1971). Computerized
simple regression methods for survival time studies. Computer Programs in Biomedicine, 1, 235-244. Kennedy, W. J., & Gentle, J. E. (1980). Statistical computing. New York: Marcel Dekker, Inc. Kenny,
D. A. (1979). Correlation and causality. New York: Wiley. Keppel, G. (1973). Design and analysis: A researcher's handbook. Englewood Cliffs, NJ: Prentice-Hall. Keppel, G. (1982). Design and analysis:
A researcher's handbook (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall. Keselman, H. J., Rogan, J. C., Mendoza, J. L., & Breen, L. L. (1980). Testing the validity conditions for repeated measures F
tests. Psychological Bulletin, 87, 479-481. Khuri, A. I., & Cornell, J. A. (1987). Response surfaces: Designs and analyses. New York: Marcel Dekker, Inc. Kiefer, J., & Wolfowitz, J. (1960). The
equivalence of two extremum problems. Canadian Journal of Mathematics, 12, 363-366. Kim, J. O., & Mueller, C. W. (1978a). Factor analysis: Statistical methods and practical issues. Beverly Hills, CA:
Sage Publications. Kim, J. O., & Mueller, C. W. (1978b). Introduction to factor analysis: What it is and how to do it. Beverly Hills, CA: Sage Publications. Kirk, D. B. (1973). On the numerical
approximation of the bivariate normal (tetrachoric) correlation coefficient. Psychometrika, 38, 259-268. Kirk, R. E. (1968). Experimental design: Procedures for the behavioral sciences. (1st ed.).
Monterey, CA: Brooks/Cole. Kirk, R. E. (1982). Experimental design: Procedures for the behavioral sciences. (2nd ed.). Monterey, CA: Brooks/Cole. Kirk, R. E. (1995). Experimental design: Procedures
for the behavioral sciences. Pacific Grove, CA: Brooks-Cole. Kirkpatrick, S., Gelatt, C.D. and Vecchi, M.P. (1983). Optimization by simulated annealing. Science 220 (4598), 671-680. Kish, L. (1965).
Survey sampling. New York: Wiley. Kivenson, G. (1971). Durability and reliability in engineering design. New York: Hayden. Klecka, W. R. (1980). Discriminant analysis. Beverly Hills, CA: Sage. Klein,
L. R. (1974). A textbook of econometrics. Englewood Cliffs, NJ: Prentice-Hall. Kleinbaum, D. G. (1996). Survival analysis: A self-learning text. New York: Springer-Verlag. Kline, P. (1979).
Psychometrics and psychology. London: Academic Press. Kline, P. (1986). A handbook of test construction. New York: Methuen. Kmenta, J. (1971). Elements of econometrics. New York: Macmillan. Knuth,
Donald E. (1981). Seminumerical algorithms. 2nd ed., Vol 2 of: The art of computer programming. Reading, Mass.: Addison-Wesley. Kohonen, T. (1982). Self-organized formation of topologically correct
feature maps. Biological Cybernetics, 43, 59-69. Kohonen, T. (1990). Improved versions of learning vector quantization. International Joint Conference on Neural Networks 1, 545-550. San Diego, CA.
Kolata, G. (1984). The proper display of data. Science, 226, 156-157. Kolmogorov, A. (1941). Confidence limits for an unknown distribution function. Annals of Mathematical Statistics, 12, 461-463.
Korin, B. P. (1969). On testing the equality of k covariance matrices. Biometrika, 56, 216-218. Kramer, M.A. (1991). Nonlinear principal components analysis using autoassociative neural networks.
AIChE Journal, 37 (2), 233-243. Kruskal, J. B. (1964). Nonmetric multidimensional scaling: A numerical method. Psychometrika, 29, 1-27, 115-129. Kruskal, J. B., & Wish, M. (1978). Multidimensional
scaling. Beverly Hills, CA: Sage Publications. Kruskal, W. H. (1952). A nonparametric test for the several sample problem. Annals of Mathematical Statistics, 23, 525-540. Kruskal, W. H. (1975).
Visions of maps and graphs. In J. Kavaliunas (Ed.), Auto-carto II, proceedings of the international symposium on computer assisted cartography. Washington, DC: U. S. Bureau of the Census and American
Congress on Survey and Mapping. Kruskal, W. H., & Wallis, W. A. (1952). Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47, 583-621. Ku, H. H., &
Kullback, S. (1968). Interaction in multidimensional contingency tables: An information theoretic approach. J. Res. Nat. Bur. Standards Sect. B, 72, 159-199. Ku, H. H., Varner, R. N., & Kullback, S.
(1971). Analysis of multidimensional contingency tables. Journal of the American Statistical Association, 66, 55-64. Kullback, S. (1959). Information theory and statistics. New York: Wiley. Kvålseth,
T. O. (1985). Cautionary note about R². The American Statistician, 39, 279-285. Lagakos, S. W., & Kuhns, M. H. (1978). Maximum likelihood estimation for censored exponential survival data with
covariates. Applied Statistics, 27, 190-197. Lakatos, E., & Lan, K. K. G. (1992). A comparison of sample size methods for the logrank statistic. Statistics in Medicine, 11, 179-191. Lance, G. N., &
Williams, W. T. (1966). A general theory of classificatory sorting strategies. Computer Journal, 9, 373. Lance, G. N., & Williams, W. T. (1966). Computer programs for hierarchical polythetic
classification ("symmetry analysis"). Computer Journal, 9, 60. Larsen, W. A., & McCleary, S. J. (1972). The use of partial residual plots in regression analysis. Technometrics, 14, 781-790. Lawless,
J. F. (1982). Statistical models and methods for lifetime data. New York: Wiley. Lawley, D. N., & Maxwell, A. E. (1971). Factor analysis as a statistical method. New York: American Elsevier. Lawley,
D. N., & Maxwell, A. E. (1971). Factor analysis as a statistical method (2nd. ed.). London: Butterworth & Company. Lebart, L., Morineau, A., and Tabard, N. (1977). Techniques de la description
statistique. Paris: Dunod. Lebart, L., Morineau, A., and Warwick, K., M. (1984). Multivariate descriptive statistical analysis: Correspondence analysis and related techniques for large matrices. New
York: Wiley. Lee, E. T. (1980). Statistical methods for survival data analysis. Belmont, CA: Lifetime Learning. Lee, E. T., & Desu, M. M. (1972). A computer program for comparing K samples with
right-censored data. Computer Programs in Biomedicine, 2, 315-321. Lee, E. T., Desu, M. M., & Gehan, E. A. (1975). A Monte-Carlo study of the power of some two-sample tests. Biometrika, 62, 425-432.
Lee, S., & Hershberger, S. (1990). A simple rule for generating equivalent models in covariance structure modeling. Multivariate Behavioral Research, 25, 313-334. Lee, Y. S. (1972). Tables of upper
percentage points of the multiple correlation coefficient. Biometrika, 59, 175-189. Legendre, A. M. (1805). Nouvelles Méthodes pour la Détermination des Orbites des Comètes. Paris: F. Didot. Lehmann,
E. L. (1975). Nonparametrics: Statistical methods based on ranks. San Francisco: Holden-Day. Levenberg, K. (1944). A method for the solution of certain non-linear problems in least squares. Quarterly
Journal of Applied Mathematics II (2), 164-168. Lewicki, P., Hill, T., & Czyzewska, M. (1992). Nonconscious acquisition of information. American Psychologist, 47, 796-801. Lieblein, J. (1953). On the
exact evaluation of the variances and covariances of order statistics in samples from the extreme-value distribution. Annals of Mathematical Statistics, 24, 282-287. Lieblein, J. (1955). On moments
of order statistics from the Weibull distribution. Annals of Mathematical Statistics, 26, 330-333. Lilliefors, H. W. (1967). On the Kolmogorov-Smirnov test for normality with mean and variance
unknown. Journal of the American Statistical Association, 64, 399-402. Lim, T.-S., Loh, W.-Y., & Shih, Y.-S. (1997). An empirical comparison of decision trees and other classification methods.
Technical Report 979, Department of Statistics, University of Wisconsin, Madison. Lindeman, R. H., Merenda, P. F., & Gold, R. (1980). Introduction to bivariate and multivariate analysis. New York:
Scott, Foresman, & Co. Lindman, H. R. (1974). Analysis of variance in complex experimental designs. San Francisco: W. H. Freeman & Co. Linfoot, E. H. (1957). An informational measure of correlation.
Information and Control, 1, 50-55. Linn, R. L. (1968). A Monte Carlo approach to the number of factors problem. Psychometrika, 33, 37-71. Lipson, C., & Sheth, N. C. (1973). Statistical design and
analysis of engineering experiments. New York: McGraw-Hill. Lloyd, D. K., & Lipow, M. (1977). Reliability: Management, methods, and mathematics. New York: McGraw-Hill. Loehlin, J. C. (1987). Latent
variable models: An introduction to latent, path, and structural analysis. Hillsdale, NJ: Erlbaum. Loh, W.-Y, & Shih, Y.-S. (1997). Split selection methods for classification trees. Statistica
Sinica, 7, 815-840. Loh, W.-Y., & Vanichestakul, N. (1988). Tree-structured classification via generalized discriminant analysis (with discussion). Journal of the American Statistical Association,
83, 715-728. Long, J. S. (1983a). Confirmatory factor analysis. Beverly Hills: Sage. Long, J. S. (1983b). Covariance structure models: An introduction to LISREL. Beverly Hills: Sage. Longley, J. W.
(1967). An appraisal of least squares programs for the electronic computer from the point of view of the user. Journal of the American Statistical Association, 62, 819-831. Longley, J. W. (1984).
Least squares computations using orthogonalization methods. New York: Marcel Dekker. Lord, F. M. (1957). A significance test for the hypothesis that two variables measure the same trait except for
errors of measurement. Psychometrika, 22, 207-220. Lorenz, M. O. (1904). Methods of measuring the concentration of wealth. American Statistical Association Publication, 9, 209-219. Lowe, D. (1989).
Adaptive radial basis function non-linearities, and the problem of generalisation. First IEEE International Conference on Artificial Neural Networks, 171-175, London, UK. Lucas, J. M. (1976). The
design and use of cumulative sum quality control schemes. Journal of Quality Technology, 8, 45-70. Lucas, J. M. (1982). Combined Shewhart-CUSUM quality control schemes. Journal of Quality Technology,
14, 89-93. MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130-149. Maddala, G.
S. (1977). Econometrics. New York: McGraw-Hill. Maiti, S. S., & Mukherjee, B. N. (1990). A note on the distributional properties of the Jöreskog-Sörbom fit indices. Psychometrika, 55, 721-726.
Makridakis, S. G. (1983). Empirical evidence versus personal experience. Journal of Forecasting, 2, 295-306. Makridakis, S. G. (1990). Forecasting, planning, and strategy for the 21st century.
London: Free Press. Makridakis, S. G., & Wheelwright, S. C. (1978). Interactive forecasting: Univariate and multivariate methods (2nd ed.). San Francisco, CA: Holden-Day. Makridakis, S. G., &
Wheelwright, S. C. (1989). Forecasting methods for management (5th ed.). New York: Wiley. Makridakis, S. G., Wheelwright, S. C., & McGee, V. E. (1983). Forecasting: Methods and applications (2nd
ed.). New York: Wiley. Makridakis, S., Andersen, A., Carbone, R., Fildes, R., Hibon, M., Lewandowski, R., Newton, J., Parzen, E., & Winkler, R. (1982). The accuracy of extrapolation (time series)
methods: Results of a forecasting competition. Journal of Forecasting, 1, 11-153. Malinvaud, E. (1970). Statistical methods of econometrics. Amsterdam: North-Holland Publishing Co. Mandel, B. J.
(1969). The regression control chart. Journal of Quality Technology, 1, 3-10. Mann, H. B., & Whitney, D. R. (1947). On a test of whether one of two random variables is stochastically larger than the
other. Annals of Mathematical Statistics, 18, 50-60. Mann, N. R., Schafer, R. E., & Singpurwalla, N. D. (1974). Methods for statistical analysis of reliability and life data. New York: Wiley. Mann,
N. R., Scheuer, R. M., & Fertig, K. W. (1973). A new goodness of fit test for the two-parameter Weibull or extreme value distribution. Communications in Statistics, 2, 383-400. Mantel, N. (1966).
Evaluation of survival data and two new rank order statistics arising in its consideration. Cancer Chemotherapy Reports, 50, 163-170. Mantel, N. (1967). Ranking procedures for arbitrarily restricted
observations. Biometrics, 23, 65-78. Mantel, N. (1974). Comment and suggestion on the Yates continuity correction. Journal of the American Statistical Association, 69, 378-380. Mantel, N., & Haenszel,
W. (1959). Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22, 719-748. Marascuilo, L. A., & McSweeney, M. (1977).
Nonparametric and distribution free methods for the social sciences. Monterey, CA: Brooks/Cole. Marple, S. L., Jr. (1987). Digital spectral analysis. Englewood Cliffs, NJ: Prentice-Hall. Marquardt,
D.W. (1963). An algorithm for least-squares estimation of non-linear parameters. Journal of the Society for Industrial and Applied Mathematics, 11 (2), 431-441. Marsaglia, G. (1962). Random variables
and computers. In J. Kozenik (Ed.), Information theory, statistical decision functions, random processes: Transactions of the third Prague Conference. Prague: Czechoslovak Academy of Sciences. Mason,
R. L., Gunst, R. F., & Hess, J. L. (1989). Statistical design and analysis of experiments with applications to engineering and science. New York: Wiley. Massey, F. J., Jr. (1951). The
Kolmogorov-Smirnov test for goodness of fit. Journal of the American Statistical Association, 46, 68-78. Masters, T. (1995). Neural, Novel, and Hybrid Algorithms for Time Series Prediction. New York:
Wiley. Matsueda, R. L., & Bielby, W. T. (1986). Statistical power in covariance structure models. In N. B. Tuma (Ed.), Sociological methodology. Washington, DC: American Sociological Association.
McArdle, J. J. (1978). A structural view of structural models. Paper presented at the Winter Workshop on Latent Structure Models Applied to Developmental Data, University of Denver, December, 1978.
McArdle, J. J., & McDonald, R. P. (1984). Some algebraic properties of the Reticular Action Model for moment structures. British Journal of Mathematical and Statistical Psychology, 37, 234-251.
McCleary, R., & Hay, R. A. (1980). Applied time series analysis for the social sciences. Beverly Hills, CA: Sage Publications. McCullagh, P. & Nelder, J. A. (1989). Generalized linear models (2nd
Ed.). New York: Chapman & Hall. McDonald, R. P. (1980). A simple comprehensive model for the analysis of covariance structures. British Journal of Mathematical and Statistical Psychology, 31, 59-72.
McDonald, R. P. (1989). An index of goodness-of-fit based on noncentrality. Journal of Classification, 6, 97-103. McDonald, R. P., & Hartmann, W. M. (1992). A procedure for obtaining initial value
estimates in the RAM model. Multivariate Behavioral Research, 27, 57-76. McDonald, R. P., & Mulaik, S. A. (1979). Determinacy of common factors: A nontechnical review. Psychological Bulletin, 86,
297-306. McDowall, D., McCleary, R., Meidinger, E. E., & Hay, R. A. (1980). Interrupted time series analysis. Beverly Hills, CA: Sage Publications. McKenzie, E. (1984). General exponential smoothing
and the equivalent ARMA process. Journal of Forecasting, 3, 333-344. McKenzie, E. (1985). Comments on 'Exponential smoothing: The state of the art' by E. S. Gardner, Jr. Journal of Forecasting, 4,
32-36. McLachlan, G. J. (1992). Discriminant analysis and statistical pattern recognition. New York: Wiley. McLain, D. H. (1974). Drawing contours from arbitrary data points. The Computer Journal,
17, 318-324. McLean, R. A., & Anderson, V. L. (1984). Applied factorial and fractional designs. New York: Marcel Dekker. McLeod, A. I., & Sales, P. R. H. (1983). An algorithm for approximate
likelihood calculation of ARMA and seasonal ARMA models. Applied Statistics, 211-223 (Algorithm AS). McNemar, Q. (1947). Note on the sampling error of the difference between correlated proportions or
percentages. Psychometrika, 12, 153-157. McNemar, Q. (1969). Psychological statistics (4th ed.). New York: Wiley. Melard, G. (1984). A fast algorithm for the exact likelihood of autoregressive-moving
average models. Applied Statistics, 33, 104-119. Mels, G. (1989). A general system for path analysis with latent variables. M. S. Thesis: Department of Statistics, University of South Africa.
Mendoza, J. L., Markos, V. H., & Gonter, R. (1978). A new perspective on sequential testing procedures in canonical analysis: A Monte Carlo evaluation. Multivariate Behavioral Research, 13, 371-382.
Meredith, W. (1964). Canonical correlation with fallible data. Psychometrika, 29, 55-65. Miettinen, O. S. (1968). The matched pairs design in the case of all-or-none responses. Biometrics, 24,
339-352. Miller, R. (1981). Survival analysis. New York: Wiley. Milligan, G. W. (1980). An examination of the effect of six types of error perturbation on fifteen clustering algorithms. Psychometrika,
45, 325-342. Milliken, G. A., & Johnson, D. E. (1984). Analysis of messy data: Vol. I. Designed experiments. New York: Van Nostrand Reinhold, Co. Milliken, G. A., & Johnson, D. E. (1992). Analysis of
messy data: Vol. I. Designed experiments. New York: Chapman & Hall. Minsky, M.L. and Papert, S.A. (1969). Perceptrons. Cambridge, MA: MIT Press. Mitchell, T. J. (1974a). Computer construction of
"D-optimal" first-order designs. Technometrics, 16, 211-220. Mitchell, T. J. (1974b). An algorithm for the construction of "D-optimal" experimental designs. Technometrics, 16, 203-210. Mittag, H. J.
(1993). Qualitätsregelkarten. München/Wien: Hanser Verlag. Mittag, H. J., & Rinne, H. (1993). Statistical methods of quality assurance. London/New York: Chapman & Hall. Monro, D. M. (1975). Complex
discrete fast Fourier transform. Applied Statistics, 24, 153-160. Monro, D. M., & Branch, J. L. (1976). The chirp discrete Fourier transform of general length. Applied Statistics, 26, 351-361.
Montgomery, D. C. (1976). Design and analysis of experiments. New York: Wiley. Montgomery, D. C. (1985). Statistical quality control. New York: Wiley. Montgomery, D. C. (1991) Design and analysis of
experiments (3rd ed.). New York: Wiley. Montgomery, D. C. (1996). Introduction to statistical quality control (3rd ed.). New York: Wiley. Montgomery, D. C., & Wadsworth, H. M. (1972). Some techniques
for multivariate quality control applications. Technical Conference Transactions. Washington, DC:
American Society for Quality Control. Montgomery, D. C., Johnson, L. A., & Gardiner, J. S. (1990). Forecasting and time series analysis (2nd ed.). New York: McGraw-Hill. Mood, A. M. (1954).
Introduction to the theory of statistics. New York: McGraw-Hill. Moody, J. and Darken, C.J. (1989). Fast learning in networks of locally-tuned processing units. Neural Computation, 1 (2), 281-294.
Moré, J. J. (1977). The Levenberg-Marquardt algorithm: Implementation and theory. In G. A. Watson (Ed.), Lecture Notes in Mathematics 630, pp. 105-116. Berlin: Springer-Verlag. Morgan, J. N., &
Messenger, R. C. (1973). THAID: A sequential analysis program for the analysis of nominal scale dependent variables. Technical report, Institute of Social Research, University of Michigan, Ann Arbor.
Morgan, J. N., & Sonquist, J. A. (1963). Problems in the analysis of survey data, and a proposal. Journal of the American Statistical Association, 58, 415-434. Morris, M., & Thisted, R. A. (1986).
Sources of error in graphical perception: A critique and an experiment. Proceedings of the Section on Statistical Graphics, American Statistical Association, 43-48. Morrison, A. S., Black, M. M.,
Lowe, C. R., MacMahon, B., & Yuasa, S. (1973). Some international differences in histology and survival in breast cancer. International Journal of Cancer, 11, 261-267. Morrison, D. (1967).
Multivariate statistical methods. New York: McGraw-Hill. Morrison, D. F. (1990). Multivariate statistical methods. (3rd Ed.). New York: McGraw-Hill. Moses, L. E. (1952). Non-parametric statistics for
psychological research. Psychological Bulletin, 49, 122-143. Mulaik, S. A. (1972). The foundations of factor analysis. New York: McGraw Hill. Murphy, K. R., & Myors, B. (1998). Statistical power
analysis: A simple general model for traditional and modern hypothesis tests. Mahwah, NJ: Lawrence Erlbaum Associates. Muth, J. F. (1960). Optimal properties of exponentially weighted forecasts.
Journal of the American Statistical Association, 55, 299-306. Nachtsheim, C. J. (1979). Contributions to optimal experimental design. Ph.D. thesis, Department of Applied Statistics, University of
Minnesota. Nachtsheim, C. J. (1987). Tools for computer-aided design of experiments. Journal of Quality Technology, 19, 132-160. Nelder, J. A., & Mead, R. (1965). A Simplex method for function
minimization. Computer Journal, 7, 308-313. Nelson, L. (1984). The Shewhart control chart - tests for special causes. Journal of Quality Technology, 16, 237-239. Nelson, L. (1985). Interpreting
Shewhart X-bar control charts. Journal of Quality Technology, 17, 114-116. Nelson, W. (1982). Applied life data analysis. New York: Wiley. Nelson, W. (1990). Accelerated testing: Statistical models,
test plans, and data analysis. New York: Wiley. Neter, J., Wasserman, W., & Kutner, M. H. (1985). Applied linear statistical models: Regression, analysis of variance, and experimental designs.
Homewood, IL: Irwin. Neter, J., Wasserman, W., & Kutner, M. H. (1989). Applied linear regression models (2nd ed.). Homewood, IL: Irwin. Newcombe, Robert G. (1998). Two-sided confidence intervals for
the single proportion: comparison of seven methods. Statistics in Medicine, 17, 857-872. Neyman, J., & Pearson, E. S. (1931). On the problem of k samples. Bulletin de l'Academie Polonaise des Sciences
et Lettres, Ser. A, 460-481. Neyman, J., & Pearson, E. S. (1933). On the problem of the most efficient tests of statistical hypothesis. Philosophical Transactions of the Royal Society of London, Ser.
A, 231, 289-337. Nisbett, R. E., Fong, G. F., Lehman, D. R., & Cheng, P. W. (1987). Teaching reasoning. Science, 238, 625-631. Noori, H. (1989). The Taguchi methods: Achieving design and output
quality. The Academy of Management Executive, 3, 322-326. Nunnally, J. C. (1970). Introduction to psychological measurement. New York: McGraw-Hill. Nunnally, J. C. (1978). Psychometric theory. New
York: McGraw-Hill. Nussbaumer, H. J. (1982). Fast Fourier transforms and convolution algorithms (2nd ed.). New York: Springer-Verlag. O'Brien, R. G., & Kaiser, M. K. (1985). MANOVA method for
analyzing repeated measures designs: An extensive primer. Psychological Bulletin, 97, 316-333. Okunade, A. A., Chang, C. F., & Evans, R. D. (1993). Comparative analysis of regression output summary
statistics in common statistical packages. The American Statistician, 47, 298-303. Olds, E. G. (1949). The 5% significance levels for sums of squares of rank differences and a correction. Annals of
Mathematical Statistics, 20, 117-118. Olejnik, S. F., & Algina, J. (1987). Type I error rates and power estimates of selected parametric and nonparametric tests of scale. Journal of Educational
Statistics, 12, 45-61. Olson, C. L. (1976). On choosing a test statistic in multivariate analysis of variance. Psychological Bulletin, 83, 579-586. O'Neill, R. (1971). Function minimization using a
Simplex procedure. Applied Statistics, 3, 79-88. Ostle, B., & Malone, L. C. (1988). Statistics in research: Basic concepts and techniques for research workers (4th ed.). Ames, IA: Iowa State Press.
Ostrom, C. W. (1978). Time series analysis: Regression techniques. Beverly Hills, CA: Sage Publications. Overall, J. E., & Spiegel, D. K. (1969). Concerning least squares analysis of experimental
data. Psychological Bulletin, 72, 311-322. Page, E. S. (1954). Continuous inspection schemes. Biometrika, 41, 100-114. Page, E. S. (1961). Cumulative sum charts. Technometrics, 3, 1-9. Palumbo, F.
A., & Strugala, E. S. (1945). Fraction defective of battery adapter used in handietalkie. Industrial Quality Control, November, 68. Pankratz, A. (1983). Forecasting with univariate Box-Jenkins
models: Concepts and cases. New York: Wiley. Parker, D.B. (1985). Learning logic. Technical Report TR-47, Cambridge, MA: MIT Center for Research in Computational Economics and Management Science.
Parzen, E. (1961). Mathematical considerations in the estimation of spectra: Comments on the discussion of Messrs. Tukey and Goodman. Technometrics, 3, 167-190; 232-234. Parzen, E. (1962). On
estimation of a probability density function and mode. Annals of Mathematical Statistics 33, 1065-1076. Patil, K. D. (1975). Cochran's Q test: Exact distribution. Journal of the American Statistical
Association, 70, 186-189. Patterson, D. (1996). Artificial Neural Networks. Singapore: Prentice Hall. Peace, G. S. (1993). Taguchi methods: A hands-on approach. Milwaukee, Wisconsin: ASQC. Pearson,
E. S., and Hartley, H. O. (1972). Biometrika tables for statisticians, Vol II. Cambridge: Cambridge University Press. Pearson, K. (1894). Contributions to the mathematical theory of evolution.
Philosophical Transactions of the Royal Society of London, Ser. A, 185, 71-110. Pearson, K. (1895). Skew variation in homogeneous material. Philosophical Transactions of the Royal Society of London,
Ser. A, 186, 343-414. Pearson, K. (1896). Regression, heredity, and panmixia. Philosophical Transactions of the Royal Society of London, Ser. A, 187, 253-318. Pearson, K. (1900). On the criterion
that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philosophical
Magazine, 5th Ser., 50, 157-175. Pearson, K. (1904). On the theory of contingency and its relation to association and normal correlation. Drapers' Company Research Memoirs, Biometric Ser. I. Pearson,
K. (1905). Das Fehlergesetz und seine Verallgemeinerungen durch Fechner und Pearson. A Rejoinder. Biometrika, 4, 169-212. Pearson, K. (1908). On the generalized probable error in multiple normal
correlation. Biometrika, 6, 59-68. Pearson, K., (Ed.). (1968). Tables of incomplete beta functions (2nd ed.). Cambridge, MA: Cambridge University Press. Pedhazur, E. J. (1973). Multiple regression in
behavioral research. New York: Holt, Rinehart, & Winston. Pedhazur, E. J. (1982). Multiple regression in behavioral research (2nd ed.). New York: Holt, Rinehart, & Winston. Peressini, A. L.,
Sullivan, F. E., & Uhl, J. J., Jr. (1988). The mathematics of nonlinear programming. New York: Springer. Peto, R., & Peto, J. (1972). Asymptotically efficient rank invariant procedures. Journal of
the Royal Statistical Society, 135, 185-207. Phadke, M. S. (1989). Quality engineering using robust design. Englewood Cliffs, NJ: Prentice-Hall. Phatak, A., Reilly, P. M., and Penlidis, A. (1993). An
approach to interval estimation in partial least squares regression. Analytica Chimica Acta, 277, 495-501. Piatetsky-Shapiro, G. (Ed.) (1993). Proceedings of AAAI-93 Workshop on Knowledge Discovery in
Databases. Menlo Park, CA: AAAI Press. Piepel, G. F. (1988). Programs for generating extreme vertices and centroids of linearly constrained experimental regions. Journal of Quality Technology, 20,
125-139. Piepel, G. F., & Cornell, J. A. (1994). Mixture experiment approaches: Examples, discussion, and recommendations. Journal of Quality Technology, 26, 177-196. Pigou, A. C. (1920). Economics
of Welfare. London: Macmillan. Pike, M. C. (1966). A method of analysis of certain class of experiments in carcinogenesis. Biometrics, 22, 142-161. Pillai, K. C. S. (1965). On the distribution of the
largest characteristic root of a matrix in multivariate analysis. Biometrika, 52, 405-414. Plackett, R. L., & Burman, J. P. (1946). The design of optimum multifactorial experiments. Biometrika, 34,
255-272. Pólya, G. (1920). Über den zentralen Grenzwertsatz der Wahrscheinlichkeitsrechnung und das Momentenproblem. Mathematische Zeitschrift, 8, 171-181. Porebski, O. R. (1966). Discriminatory and
canonical analysis of technical college data. British Journal of Mathematical and Statistical Psychology, 19, 215-236. Powell, M. J. D. (1964). An efficient method for finding the minimum of a
function of several variables without calculating derivatives. Computer Journal, 7, 155-162. Pregibon, D. (1997). Data Mining. Statistical Computing and Graphics, 7, 8. Prentice, R. (1973).
Exponential survivals with censoring and explanatory variables. Biometrika, 60, 279-288. Press, W. H., Flannery, B. P., Teukolsky, S. A., Vetterling, W. T. (1992). Numerical Recipes (2nd Edition).
New York: Cambridge University Press. Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (1992). Numerical Recipes in C: The Art of Scientific Computing (Second ed.). Cambridge
University Press. Press, W. H., Flannery, B. P., Teukolsky, S. A., & Vetterling, W. T. (1986). Numerical Recipes. New York: Cambridge University Press. Priestley, M. B. (1981). Spectral analysis
and time series. New York: Academic Press. Pyzdek, T. (1989). What every engineer should know about quality control. New York: Marcel Dekker. Quinlan, J. R. (1992). C4.5: Programs for machine
learning. Morgan Kaufmann. Quinlan, J. R., & Cameron-Jones, R. M. (1995). Oversearching and layered search in empirical learning. Proceedings of the 14th International Joint Conference on Artificial
Intelligence, Montreal (Vol. 2). Morgan Kaufmann, 1019-1024. Ralston, A., & Wilf, H.S. (Eds.). (1960). Mathematical methods for digital computers. New York: Wiley. Ralston, A., & Wilf, H.S. (Eds.). (1967).
Mathematical methods for digital computers (Vol. II). New York: Wiley. Randles, R. H., & Wolfe, D. A. (1979). Introduction to the theory of nonparametric statistics. New York: Wiley. Rannar, S.,
Lindgren, F., Geladi, P, and Wold, S. (1994) A PLS Kernel Algorithm for Data Sets with Many Variables and Fewer Objects. Part 1: Theory and Algorithm, Journal of Chemometrics, 8, 111-125. Rao, C. R.
(1951). An asymptotic expansion of the distribution of Wilks' criterion. Bulletin of the International Statistical Institute, 33, 177-181. Rao, C. R. (1952). Advanced statistical methods in biometric
research. New York: Wiley. Rao, C. R. (1965). Linear statistical inference and its applications. New York: Wiley. Rhoades, H. M., & Overall, J. E. (1982). A sample size correction for Pearson
chi-square in 2 x 2 contingency tables. Psychological Bulletin, 91, 418-423. Rinne, H., & Mittag, H. J. (1995). Statistische Methoden der Qualitätssicherung (3rd ed.). München/Wien: Hanser Verlag.
Ripley, B. D. (1981). Spatial statistics. New York: Wiley. Ripley, B. D. (1996). Pattern recognition and neural networks. Cambridge: Cambridge University Press. Rodriguez, R. N. (1992). Recent
developments in process capability analysis. Journal of Quality Technology, 24, 176-187. Rogan, J. C.,
Keselman, H. J., & Mendoza, J. L. (1979). Analysis of repeated measurements. British Journal of Mathematical and Statistical Psychology, 32, 269-286. Rosenberg, S. (1977). New approaches to the
analysis of personal constructs in person perception. In A. Landfield (Ed.), Nebraska symposium on motivation (Vol. 24). Lincoln, NE: University of Nebraska Press. Rosenberg, S., & Sedlak, A. (1972).
Structural representations of implicit personality theory. In L. Berkowitz (Ed.). Advances in experimental social psychology (Vol. 6). New York: Academic Press. Rosenblatt, F. (1958). The Perceptron:
A probabilistic model for information storage and organization in the brain. Psychological Review 65, 386-408. Roskam, E. E., & Lingoes, J. C. (1970). MINISSA-I: A Fortran IV program for the smallest
space analysis of square symmetric matrices. Behavioral Science, 15, 204-205. Ross, P. J. (1988). Taguchi techniques for quality engineering: Loss function, orthogonal experiments, parameter, and
tolerance design. Milwaukee, Wisconsin: ASQC. Roy, J. (1958). Step-down procedure in multivariate analysis. Annals of Mathematical Statistics, 29, 1177-1187. Roy, J. (1967). Some aspects of
multivariate analysis. New York: Wiley. Roy, R. (1990). A primer on the Taguchi method. Milwaukee, Wisconsin: ASQC. Royston, J. P. (1982). An extension of Shapiro and Wilk's W test for normality to
large samples. Applied Statistics, 31, 115-124. Rozeboom, W. W. (1979). Ridge regression: Bonanza or beguilement? Psychological Bulletin, 86, 242-249. Rozeboom, W. W. (1988). Factor indeterminacy:
the saga continues. British Journal of Mathematical and Statistical Psychology, 41, 209-226. Rubinstein, L.V., Gail, M. H., & Santner, T. J. (1981). Planning the duration of a comparative clinical
trial with loss to follow-up and a period of continued observation. Journal of Chronic Diseases, 34, 469-479. Rud, O. P. (2001). Data mining cookbook: Modeling data for marketing, risk, and customer
relationship management. NY: Wiley. Rumelhart, D.E. and McClelland, J. (eds.) (1986). Parallel Distributed Processing, Vol 1. Cambridge, MA: MIT Press. Rumelhart, D.E., Hinton, G.E. and Williams,
R.J. (1986). Learning internal representations by error propagation. In D.E. Rumelhart, J.L. McClelland (Eds.), Parallel Distributed Processing, Vol 1. Cambridge, MA: MIT Press. Runyon, R. P., &
Haber, A. (1976). Fundamentals of behavioral statistics. Reading, MA: Addison-Wesley. Ryan, T. P. (1989). Statistical methods for quality improvement. New York: Wiley. Ryan, T. P. (1997). Modern
Regression Methods. New York: Wiley. Sandler, G. H. (1963). System reliability engineering. Englewood Cliffs, NJ: Prentice-Hall. SAS Institute, Inc. (1982). SAS user's guide: Statistics, 1982
Edition. Cary, NC: SAS Institute, Inc. Satorra, A., & Saris, W. E. (1985). Power of the likelihood ratio test in covariance structure analysis. Psychometrika, 50, 83-90. Saxena, K. M. L., & Alam, K.
(1982). Estimation of the noncentrality parameter of a chi squared distribution. Annals of Statistics, 10, 1012-1016. Scheffé, H. (1953). A method for judging all possible contrasts in the analysis of
variance. Biometrika, 40, 87-104. Scheffé, H. (1959). The analysis of variance. New York: Wiley. Scheffé, H. (1963). The simplex-centroid design for experiments with mixtures. Journal of the Royal
Statistical Society, B25, 235-263. Scheffé, H., & Tukey, J. W. (1944). A formula for sample sizes for population tolerance limits. Annals of Mathematical Statistics, 15, 217. Scheines, R. (1994).
Causation, indistinguishability, and regression. In F. Faulbaum, (Ed.), SoftStat '93. Advances in statistical software 4. Stuttgart: Gustav Fischer Verlag. Schiffman, S. S., Reynolds, M. L., & Young,
F. W. (1981). Introduction to multidimensional scaling: Theory, methods, and applications. New York: Academic Press. Schmidt, F. L., & Hunter, J. E. (1997). Eight common but false objections to the
discontinuation of significance testing in the analysis of research data. In Harlow, L. L., Mulaik, S. A., & Steiger, J. H. (Eds.), What if there were no significance tests. Mahwah, NJ: Lawrence
Erlbaum Associates. Schmidt, P., & Muller, E. N. (1978). The problem of multicollinearity in a multistage causal alienation model: A comparison of ordinary least squares, maximum-likelihood and ridge
estimators. Quality and Quantity, 12, 267-297. Schmidt, P., & Sickles, R. (1975). On the efficiency of the Almon lag technique. International Economic Review, 16, 792-795. Schmidt, P., & Waud, R. N.
(1973). The Almon lag technique and the monetary versus fiscal policy debate. Journal of the American Statistical Association, 68, 11-19. Schnabel, R. B., Koontz, J. E., and Weiss, B. E. (1985). A
modular system of algorithms for unconstrained minimization. ACM Transactions on Mathematical Software, 11, 419-440. Schneider, H. (1986). Truncated and censored samples from normal distributions.
New York: Marcel Dekker. Schneider, H., & Barker, G.P. (1973). Matrices and linear algebra (2nd ed.). New York: Dover Publications. Schönemann, P. H., & Steiger, J. H. (1976). Regression component
analysis. British Journal of Mathematical and Statistical Psychology, 29, 175-189. Schrock, E. M. (1957). Quality control and statistical methods. New York: Reinhold Publishing. Schwarz, G. (1978).
Estimating the dimension of a model. Annals of Statistics, 6, 461-464. Scott, D. W. (1979). On optimal and data-based histograms. Biometrika, 66, 605-610. Searle, S. R. (1987). Linear models for
unbalanced data. New York: Wiley. Searle, S. R., Casella, G., & McCulloch, C. E. (1992). Variance components. New York: Wiley. Searle, S. R., Speed, F. M., & Milliken, G. A. (1980). The population
marginal means in the linear model: An alternative to least squares means. The American Statistician, 34, 216-221. Seber, G. A. F., & Wild, C. J. (1989). Nonlinear regression. New York: Wiley.
Sebestyen, G. S. (1962). Decision making processes in pattern recognition. New York: Macmillan. Sen, P. K., & Puri, M. L. (1968). On a class of multivariate multisample rank order tests, II: Test for
homogeneity of dispersion matrices. Sankhya, 30, 1-22. Serlin, R. A., & Lapsley, D. K. (1993). Rational appraisal of psychological research and the good-enough principle. In G. Keren & C. Lewis
(Eds.), A handbook for data analysis in the behavioral sciences: Methodological issues (pp. 199-228). Hillsdale, NJ: Lawrence Erlbaum Associates. Serlin, R. A., & Lapsley, D. K. (1985). Rationality
in psychological research: The good-enough principle. American Psychologist, 40, 73-83. Shapiro, A., & Browne, M. W. (1983). On the investigation of local identifiability: A counter example.
Psychometrika, 48, 303-304. Shapiro, S. S., Wilk, M. B., & Chen, H. J. (1968). A comparative study of various tests of normality. Journal of the American Statistical Association, 63, 1343-1372.
Shepherd, A. J. (1997). Second-Order Methods for Neural Networks. New York: Springer. Shewhart, W. A. (1931). Economic control of quality of manufactured product. New York: D. Van Nostrand. Shewhart,
W. A. (1939). Statistical method from the viewpoint of quality. Washington, DC: The Graduate School Department of Agriculture. Shirland, L. E. (1993). Statistical quality control with microcomputer
applications. New York: Wiley. Shiskin, J., Young, A. H., & Musgrave, J. C. (1967). The X-11 variant of the census method II seasonal adjustment program. (Technical paper no. 15). Bureau of the
Census. Shumway, R. H. (1988). Applied statistical time series analysis. Englewood Cliffs, NJ: Prentice Hall. Siegel, A. E. (1956). Film-mediated fantasy aggression and strength of aggressive drive.
Child Development, 27, 365-378. Siegel, S. (1956). Nonparametric statistics for the behavioral sciences. New York: McGraw-Hill. Siegel, S., & Castellan, N. J. (1988). Nonparametric statistics for the
behavioral sciences (2nd ed.) New York: McGraw-Hill. Simkin, D., & Hastie, R. (1986). Towards an information processing view of graph perception. Proceedings of the Section on Statistical Graphics,
American Statistical Association, 11-20. Sinha, S. K., & Kale, B. K. (1980). Life testing and reliability estimation. New York: Halstead. Smirnov, N. V. (1948). Table for estimating the goodness of
fit of empirical distributions. Annals of Mathematical Statistics, 19, 279-281. Smith, D. J. (1972). Reliability engineering. New York: Barnes & Noble. Smith, K. (1953). Distribution-free statistical
methods and the concept of power efficiency. In L. Festinger and D. Katz (Eds.), Research methods in the behavioral sciences (pp. 536-577). New York: Dryden. Sneath, P. H. A., & Sokal, R. R. (1973).
Numerical taxonomy. San Francisco: W. H. Freeman & Co. Snee, R. D. (1975). Experimental designs for quadratic models in constrained mixture spaces. Technometrics, 17, 149-159. Snee, R. D. (1979).
Experimental designs for mixture systems with multi-component constraints. Communications in Statistics - Theory and Methods, A8(4), 303-326. Snee, R. D. (1985). Computer-aided design of experiments
- some practical experiences. Journal of Quality Technology, 17, 222-236. Snee, R. D. (1986). An alternative approach to fitting models when re-expression of the response is useful. Journal of
Quality Technology, 18, 211-225. Sokal, R. R., & Michener, C. D. (1958). A statistical method for evaluating systematic relationships. University of Kansas Science Bulletin, 38, 1409. Sokal, R. R.,
& Sneath, P. H. A. (1963). Principles of numerical taxonomy. San Francisco: W. H. Freeman & Co. Soper, H. E. (1914). Tables of Poisson's exponential binomial limit. Biometrika, 10, 25-35. Spearman,
C. (1904). "General intelligence," objectively determined and measured. American Journal of Psychology, 15, 201-293. Specht, D.F. (1990). Probabilistic Neural Networks. Neural Networks 3 (1),
109-118. Specht, D.F. (1991). A Generalized Regression Neural Network. IEEE Transactions on Neural Networks 2 (6), 568-576. Spirtes, P., Glymour, C., & Scheines, R. (1993). Causation, prediction, and
search. Lecture Notes in Statistics, V. 81. New York: Springer-Verlag. Spjotvoll, E., & Stoline, M. R. (1973). An extension of the T-method of multiple comparison to include the cases with unequal
sample sizes. Journal of the American Statistical Association, 68, 976-978. Springer, M. D. (1979). The algebra of random variables. New York: Wiley. Spruill, M. C. (1986). Computation of the maximum
likelihood estimate of a noncentrality parameter. Journal of Multivariate Analysis, 18, 216-224. Steiger, J. H. (1979). Factor indeterminacy in the 1930's and in the 1970's; some interesting
parallels. Psychometrika, 44, 157-167. Steiger, J. H. (1980a). Tests for comparing elements of a correlation matrix. Psychological Bulletin, 87, 245-251. Steiger, J. H. (1980b). Testing pattern
hypotheses on correlation matrices: Alternative statistics and some empirical results. Multivariate Behavioral Research, 15, 335-352. Steiger, J. H. (1988). Aspects of person-machine communication in
structural modeling of correlations and covariances. Multivariate Behavioral Research, 23, 281-290. Steiger, J. H. (1989). EzPATH: A supplementary module for SYSTAT and SYGRAPH. Evanston, IL: SYSTAT,
Inc. Steiger, J. H. (1990). Some additional thoughts on components and factors. Multivariate Behavioral Research, 25, 41-45. Steiger, J. H., & Browne, M. W. (1984). The comparison of interdependent
correlations between optimal linear composites. Psychometrika, 49, 11-24. Steiger, J. H., & Fouladi, R. T. (1992). R2: A computer program for interval estimation, power calculation, and hypothesis
testing for the squared multiple correlation. Behavior Research Methods, Instruments, and Computers, 4, 581-582. Steiger, J. H., & Fouladi, R. T. (1997). Noncentrality interval estimation and the
evaluation of statistical models. In Harlow, L. L., Mulaik, S. A., & Steiger, J. H. (Eds.), What if there were no significance tests. Mahwah, NJ: Lawrence Erlbaum Associates. Steiger, J. H., &
Hakstian, A. R. (1982). The asymptotic distribution of elements of a correlation matrix: Theory and application. British Journal of Mathematical and Statistical Psychology, 35, 208-215. Steiger, J.
H., & Lind, J. C. (1980). Statistically-based tests for the number of common factors. Paper presented at the annual Spring Meeting of the Psychometric Society in Iowa City. May 30, 1980. Steiger, J.
H., & Schönemann, P. H. (1978). A history of factor indeterminacy. In S. Shye, (Ed.), Theory Construction and Data Analysis in the Social Sciences. San Francisco: Jossey-Bass. Steiger, J. H., Shapiro,
A., & Browne, M. W. (1985). On the multivariate asymptotic distribution of sequential chi-square statistics. Psychometrika, 50, 253-264. Stelzl, I. (1986). Changing causal relationships without
changing the fit: Some rules for generating equivalent LISREL models. Multivariate Behavioral Research, 21, 309-331. Stenger, F. (1973). Integration formula based on the trapezoid formula. Journal of
the Institute of Mathematics and Applications, 12, 103-114. Stevens, J. (1986). Applied multivariate statistics for the social sciences. Hillsdale, NJ: Erlbaum. Stevens, W. L. (1939). Distribution of
groups in a sequence of alternatives. Annals of Eugenics, 9, 10-17. Stewart, D. K., & Love, W. A. (1968). A general canonical correlation index. Psychological Bulletin, 70, 160-163. Steyer, R.
(1992). Theorie kausaler Regressionsmodelle [Theory of causal regression models]. Stuttgart: Gustav Fischer Verlag. Steyer, R. (1994). Principles of causal modeling: a summary of its mathematical
foundations and practical steps. In F. Faulbaum, (Ed.), SoftStat '93. Advances in statistical software 4. Stuttgart: Gustav Fischer Verlag. Stone, M. and Brooks, R. J. (1990) Continuum Regression:
Cross-validated Sequentially Constructed Prediction Embracing Ordinary Least Squares, Partial Least Squares, and Principal Components Regression. Journal of the Royal Statistical Society, Series B, 52, No. 2,
237-269. Student (1908). The probable error of a mean. Biometrika, 6, 1-25. Swallow, W. H., & Monahan, J. F. (1984). Monte Carlo comparison of ANOVA, MIVQUE, REML, and ML estimators of variance
components. Technometrics, 26, 47-57. Taguchi, G. (1987). Jikken keikakuho (3rd ed., Vol I & II). Tokyo: Maruzen. English translation edited by D. Clausing. System of experimental design. New York:
UNIPUB/Kraus International. Taguchi, G., & Jugulum, R. (2002). The Mahalanobis-Taguchi strategy. New York, NY: Wiley. Tanaka, J. S., & Huba, G. J. (1985). A fit index for covariance structure models
under arbitrary GLS estimation. British Journal of Mathematical and Statistical Psychology, 38, 197-201. Tanaka, J. S., & Huba, G. J. (1989). A general coefficient of determination for covariance
structure models under arbitrary GLS estimation. British Journal of Mathematical and Statistical Psychology, 42, 233-239. Tatsuoka, M. M. (1970). Discriminant analysis. Champaign, IL: Institute for
Personality and Ability Testing. Tatsuoka, M. M. (1971). Multivariate analysis. New York: Wiley. Tatsuoka, M. M. (1976). Discriminant analysis. In P. M. Bentler, D. J. Lettieri, and G. A. Austin
(Eds.), Data analysis strategies and designs for substance abuse research. Washington, DC: U.S. Government Printing Office. Taylor, D. J., & Muller, K. E. (1995). Computing confidence bounds for
power and sample size of the general linear univariate model. The American Statistician, 49, 43-47. Thorndike, R. L., & Hagen, E. P. (1977). Measurement and evaluation in psychology and education. New
York: Wiley. Thurstone, L. L. (1931). Multiple factor analysis. Psychological Review, 38, 406-427. Thurstone, L. L. (1947). Multiple factor analysis. Chicago: University of Chicago Press. Timm, N. H.
(1975). Multivariate analysis with applications in education and psychology. Monterey, CA: Brooks/Cole. Timm, N. H., & Carlson, J. (1973). Multivariate analysis of non-orthogonal experimental designs
using a multivariate full rank model. Paper presented at the American Statistical Association Meeting, New York. Timm, N. H., & Carlson, J. (1975). Analysis of variance through full rank models.
Multivariate behavioral research monographs, No. 75-1. Tracy, N. D., Young, J. C., & Mason, R. L. (1992). Multivariate control charts for individual observations. Journal of Quality Technology, 24,
88-95. Tribus, M., & Szonyi, G. (1989). An alternative view of the Taguchi approach. Quality Progress, 22, 46-48. Trivedi, P. K., & Pagan, A. R. (1979). Polynomial distributed lags: A unified
treatment. Economic Studies Quarterly, 30, 37-49. Tryon, R. C. (1939). Cluster Analysis. Ann Arbor, MI: Edwards Brothers. Tucker, L. R., Koopman, R. F., & Linn, R. L. (1969). Evaluation of factor
analytic research procedures by means of simulated correlation matrices. Psychometrika, 34, 421-459. Tufte, E. R. (1983). The visual display of quantitative information. Cheshire, CT: Graphics Press.
Tufte, E. R. (1990). Envisioning information. Cheshire, CT: Graphics Press. Tukey, J. W. (1953). The problem of multiple comparisons. Unpublished manuscript, Princeton University. Tukey, J. W.
(1962). The future of data analysis. Annals of Mathematical Statistics, 33, 1-67. Tukey, J. W. (1967). An introduction to the calculations of numerical spectrum analysis. In B. Harris (Ed.), Spectral
analysis of time series. New York: Wiley. Tukey, J. W. (1972). Some graphic and semigraphic displays. In Statistical Papers in Honor of George W. Snedecor; ed. T. A. Bancroft, Ames, IA: Iowa State
University Press, 293-316. Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley. Tukey, J. W. (1984). The collected works of John W. Tukey. Monterey, CA: Wadsworth. Tukey, P.
A. (1986). A data analyst's view of statistical plots. Proceedings of the Section on Statistical Graphics, American Statistical Association, 21-28. Tukey, P. A., & Tukey, J. W. (1981). Graphical
display of data sets in 3 or more dimensions. In V. Barnett (Ed.), Interpreting multivariate data. Chichester, U.K.: Wiley. Uspensky, J. V. (1937). Introduction to Mathematical Probability. New York:
McGraw-Hill. Vale, C. D., & Maurelli, V. A. (1983). Simulating multivariate non-normal distributions. Psychometrika, 48, 465-471. Vandaele, W. (1983). Applied time series and Box-Jenkins models. New
York: Academic Press. Vaughn, R. C. (1974). Quality control. Ames, IA: Iowa State Press. Velicer, W. F., & Jackson, D. N. (1990). Component analysis vs. factor analysis: some issues in selecting an
appropriate procedure. Multivariate Behavioral Research, 25, 1-28. Velleman, P. F., & Hoaglin, D. C. (1981). Applications, basics, and computing of exploratory data analysis. Belmont, CA: Duxbury
Press. Von Mises, R. (1919). Grundlagen der Wahrscheinlichkeitsrechnung [Foundations of probability theory]. Mathematische Zeitschrift, 5, 52-99. Wainer, H. (1995). Visual revelations. Chance, 8, 48-54. Wald, A. (1939). Contributions
to the theory of statistical estimation and testing hypotheses. Annals of Mathematical Statistics, 10, 299-326. Wald, A. (1945). Sequential tests of statistical hypotheses. Annals of Mathematical
Statistics, 16, 117-186. Wald, A. (1947). Sequential analysis. New York: Wiley. Walker, J. S. (1991). Fast Fourier transforms. Boca Raton, FL: CRC Press. Wallis, K. F. (1974). Seasonal adjustment and
relations between variables. Journal of the American Statistical Association, 69, 18-31. Wang, C. M., & Gugel, H. W. (1986). High-performance graphics for exploring multivariate data. Proceedings of
the Section on Statistical Graphics, American Statistical Association, 60-65. Ward, J. H. (1963). Hierarchical grouping to optimize an objective function. Journal of the American Statistical
Association, 58, 236. Warner, B., & Misra, M. (1996). Understanding Neural Networks as Statistical Tools. The American Statistician, 50, 284-293. Weatherburn, C. E. (1946). A First Course in
Mathematical Statistics. Cambridge: Cambridge University Press. Wei, W. W. (1989). Time series analysis: Univariate and multivariate methods. New York: Addison-Wesley. Weibull, W. (1951). A
statistical distribution function of wide applicability. Journal of Applied Mechanics, September. Weibull, W. (1939). A statistical theory of the strength of materials. Ing. Vetenskaps Akad. Handl.,
151, 1-45. Weigend, A.S., Rumelhart, D.E. and Huberman, B.A. (1991). Generalization by weight-elimination with application to forecasting. In R.P. Lippmann, J.E. Moody and D.S. Touretzky (Eds.)
Advances in Neural Information Processing Systems 3, 875-882. San Mateo, CA: Morgan Kaufmann. Weiss, S. M., & Indurkhya, N. (1997). Predictive data mining: A practical guide. New York:
Morgan Kaufmann. Welch, B. L. (1938). The significance of the differences between two means when the population variances are unequal. Biometrika, 29, 350-362. Welstead, S. T. (1994). Neural network
and fuzzy logic applications in C/C++. New York: Wiley. Werbos, P.J. (1974). Beyond regression: new tools for prediction and analysis in the behavioural sciences. Ph.D. thesis, Harvard University,
Boston, MA. Wescott, M. E. (1947). Attribute charts in quality control. Conference Papers, First Annual Convention of the American Society for Quality Control. Chicago: John S. Swift Co. Westphal,
C., & Blaxton, T. (1998). Data mining solutions. New York: Wiley. Wheaton, B., Muthén, B., Alwin, D., & Summers, G. (1977). Assessing reliability and stability in panel models. In D. R. Heise (Ed.),
Sociological Methodology. New York: Wiley. Wheeler, D. J., & Chambers, D.S. (1986). Understanding statistical process control. Knoxville, TN: Statistical Process Controls, Inc. Wherry, R. J. (1984).
Contributions to correlational analysis. New York: Academic Press. Whitney, D. R. (1948). A comparison of the power of non-parametric tests and tests based on the normal distribution under non-normal
alternatives. Unpublished doctoral dissertation, Ohio State University. Whitney, D. R. (1951). A bivariate extension of the U statistic. Annals of Mathematical Statistics, 22, 274-282. Widrow, B.,
and Hoff Jr., M.E. (1960). Adaptive switching circuits. IRE WESCON Convention Record, 96-104. Wiggins, J. S., Steiger, J. H., and Gaelick, L. (1981). Evaluating circumplexity in models of
personality. Multivariate Behavioral Research, 16, 263-289. Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics Bulletin, 1, 80-83. Wilcoxon, F. (1947). Probability tables for
individual comparisons by ranking methods. Biometrics, 3, 119-122. Wilcoxon, F. (1949). Some rapid approximate statistical procedures. Stamford, CT: American Cyanamid Co. Wilde, D. J., & Beightler,
C. S. (1967). Foundations of optimization. Englewood Cliffs, NJ: Prentice-Hall. Wilks, S. S. (1943). Mathematical Statistics. Princeton, NJ: Princeton University Press. Wilks, S. S. (1946).
Mathematical statistics. Princeton, NJ: Princeton University Press. Williams, W. T., Lance, G. N., Dale, M. B., & Clifford, H. T. (1971). Controversy concerning the criteria for taxonometric
strategies. Computer Journal, 14, 162. Wilson, G. A., & Martin, S. A. (1983). An empirical comparison of two methods of testing the significance of a correlation matrix. Educational and Psychological
Measurement, 43, 11-14. Winer, B. J. (1962). Statistical principles in experimental design. New York: McGraw-Hill. Winer, B. J. (1971). Statistical principles in experimental design (2nd ed.). New
York: McGraw-Hill. Winer, B. J., Brown, D. R., & Michels, K. M. (1991). Statistical principles in experimental design (3rd ed.). New York: McGraw-Hill. Witten, I. H., & Frank, E. (2000). Data Mining:
Practical Machine Learning Tools and Techniques. New York: Morgan Kaufmann. Wolfowitz, J. (1942). Additive partition functions and a class of statistical hypotheses. Annals of Mathematical
Statistics, 13, 247-279. Wolynetz, M. S. (1979a). Maximum likelihood estimation from confined and censored normal data. Applied Statistics, 28, 185-195. Wolynetz, M. S. (1979b). Maximum likelihood
estimation in a linear model from confined and censored normal data. Applied Statistics, 28, 195-206. Wonnacott, R. J., & Wonnacott, T. H. (1970). Econometrics. New York: Wiley. Woodward, J. A., &
Overall, J. E. (1975). Multivariate analysis of variance by multiple regression methods. Psychological Bulletin, 82, 21-32. Woodward, J. A., & Overall, J. E. (1976). Calculation of power of the F
test. Educational and Psychological Measurement, 36, 165-168. Woodward, J. A., Bonett, D. G., & Brecht, M. L. (1990). Introduction to linear models and experimental design. New York: Harcourt, Brace,
Jovanovich. Yates, F. (1933). The principles of orthogonality and confounding in replicated experiments. Journal of Agricultural Science, 23, 108-145. Yates, F. (1937). The Design and Analysis of
Factorial Experiments. Imperial Bureau of Soil Science, Technical Communication No. 35, Harpenden. Yokoyama, Y., & Taguchi, G. (1975). Business data analysis: Experimental regression analysis. Tokyo:
Maruzen. Youden, W. J., & Zimmerman, P. W. (1936). Field trials with fiber pots. Contributions from Boyce Thompson Institute, 8, 317-331. Young, F. W, & Hamer, R. M. (1987). Multidimensional scaling:
History, theory, and applications. Hillsdale, NJ: Erlbaum. Young, F. W., Kent, D. P., & Kuhfeld, W. F. (1986). Visuals: Software for dynamic hyperdimensional graphics. Proceedings of the Section on
Statistical Graphics, American Statistical Association, 69-74. Younger, M. S. (1985). A first course in linear regression (2nd ed.). Boston: Duxbury Press. Yuen, C. K., & Fraser, D. (1979). Digital
spectral analysis. Melbourne: CSIRO/Pitman. Yule, G. U. (1897). On the theory of correlation. Journal of the Royal Statistical Society, 60, 812-854. Yule, G. U. (1907). On the theory of correlation
for any number of variables treated by a new system of notation. Proceedings of the Royal Society, Ser. A, 79, 182-193. Yule, G. U. (1911). An Introduction to the Theory of Statistics. London:
Griffin. Zippin, C., & Armitage, P. (1966). Use of concomitant variables and incomplete survival information in the estimation of an exponential survival parameter. Biometrics, 22, 665-672. Zupan, J.
(1982). Clustering of large data sets. New York: Research Studies Press. Zweig, M.H., & Campbell, G. (1993). Receiver-Operating Characteristic (ROC) Plots: A Fundamental Evaluation Tool in Clinical
Medicine. Clinical Chemistry, 39(4), 561-577. Zwick, W. R., & Velicer, W. F. (1986). Comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99, 432-442.
University of Toronto - ECON - A06
II-3/1Review:3. Does G.D.P. measure how well off we are?sort of - it measures our access to goods and services thatare sold through the marketBUT - it does not account for harm done during
production- it does not account for depletion of scarce reso
University of Toronto - ECON - A06
II-4/1What we were doing at the end of the lectures last week was tocomplicate the model:We inserted governmentWe inserted the foreign sectorWe ended up with a much more detailed model:C = 20 + (7/8)
DIT = (2/7)Y - 20TR = 220 - (1/7)YI = 100 - 5(r
University of Toronto - ECON - A06
II - 5/1Continue working with the extended model we used last week:eg. C = 20 + (7/8)DIT = (2/7)Y - 20TR = 220 - (1/7)YI = 100G = 250X = 120IM = (1/8)YRemember, AE = 700 + (3/8)Y,*Y = 1120We argued
that the deficit was ENDOGENOUSLets show tha
University of Toronto - ECON - A06
II - 6/1Currently, we are trying to revise our model to take prices intoaccount.We are responding to a clear gap in our modelwe have a model without prices, and we have beensuggesting (implicitly!)
that whatever buyers want will beproduced[Did we r
University of Toronto - ECON - A06
II - 7/1Review: expansionary fiscal policy in situation of deflationary gap:This means a much flatter AS:full linked diagramII - 7/2Now suppose expansionary fiscal policy, inflationary gapThis means
a much steeper AS:full linked diagramII - 7/3eg
University of Toronto - ECON - A06
II - 8/1Continue the topic we began last week. We need tounderstand the role of money in the economy, and how it feedsinto our macro model via the interest rate.Money is what people use money for
transactions.Money is convenient.Without money, the o
University of Toronto - ECON - A06
II - 9/1Review last week:How does government control money supply:Main way is through OPEN MARKET OPERATIONThe government enters the bond market and buys or sellsgovernment bondsTWO interesting
questions from students:1. Arent some bonds risky, sin
University of Toronto - ECON - A06
II - 10/1Review material at end of last week:Why does monetary policy fail in major recession?Notice that links are far less direct.In case of Fiscal Policy:TRincrease G or cut TA or increaseincrease
in G directly stimulates demandCut in TA or ris
University of Toronto - ECON - A06
II - 11/1Now before we move forward on exchange rates, we need toclean up a couple of small topics from fiscal and monetarypolicy.Topic #1 - Feedbacks through the interest rateWhen we did fiscal and
monetary policy, we ignored feedbackmechanisms thr
University of Toronto - ECON - A06
II - 12/1FILL IN COURSE EVALUATIONS ON INTRANETConsider monetary policy in a world of floating exchangeratesSo far, we have discussed how government might usemonetary policy:1. Expansionary monetary
policy:Government buys bonds domestically through
University of Toronto - ECON - A06
2008UNIVERSITY OF TORONTOScarboroughINTRODUCTION TOMACROECONOMICS:A MATHEMATICAL APPROACHECONOMICS A06HProfessor Michael KrashinskyProfessor Iris Au COURSEOUTLINECOURSEHANDBOOKANDSTUDYAIDSSCHEDULE OF
IMPORTANT DATESECMA06HWeek 1Monday, Jan
University of Toronto - ECON - A06
University of Toronto - ECON - A06
Midlands Tech - ACC - 102
BU - PHYSICS - 101
Q=0.25,1,4,161.0Quautum Ising model:Landau-Zener solutionp0.80.60.40.20.0-4-3-2-101234kC0.1CLinear Fit of exactn100part_C0.10.01k=-0.50624nn0.01N=1001E-31E-3N=20010100Q1000110Q100The non-uniform
Ising chain - L=12
Michigan Flint - ECON - 2450
12.Suppose that the long-run world demand and supply elasticities of crude oil are -0.906 and0.515, respectively. The current long-run equilibrium price is $30 per barrel and theequilibrium quantity
is 16.88 billion barrels per year. Derive the linear
Michigan Flint - ECON - 2450
VARIAN VARIANG2,Budget Constraint0. Describe budget constraint0. Algebra :Px X + PyY = IB(p1, , pn, m) =cfw_ (x1, , xn) | x1 0, , xn 0 and p1x1 + + pnxn m 1. Graph.1.Describe changes in budget
constraint2. Government programs and budget constrai
Michigan Flint - ECON - 2450
VARIAN VARIANG2,Budget Constraint0. Describe budget constraint0. Algebra :Px X + PyY = IB(p1, , pn, m) =cfw_ (x1, , xn) | x1 0, , xn 0 and p1x1 + + pnxn m 1. Graph.1.Describe changes in budget
constraint2. Government programs and budget constrai
Michigan Flint - ECON - 2450
CHAPTER 3DESCRIPTIVE STATISTICS: NUMERICAL MEASURESMULTIPLE CHOICE 1. The measure of location which is the most likely to be influenced by extreme values in the data set is the a. range b. median c.
mode d. mean ANS: D PTS: 1 TOP: Descriptive Statistics
Michigan Flint - ECON - 2450
12.Suppose that the long-run world demand and supply elasticities of crude oil are -0.906 and0.515, respectively. The current long-run equilibrium price is $30 per barrel and theequilibrium quantity
is 16.88 billion barrels per year. Derive the linear
Michigan Flint - ECON - 2450
Answers for chapters 1-8Please do not distributeCHAPTER11. Decidewhethereachofthefollowingstatementsistrueorfalseandexplain why:a.
Michigan Flint - ECON - 2450
CUNY Queens College, Economics 205, Geordan Hull, 1HW #32. Draw indifference curves that represent the following individuals preferences for
hamburgersandsoftdrinks.Indicatethedirectioninwhichtheindividualssatisfaction (orutility)isincreasing.a. Joeh
Westwood College - BUSINESSS - MBA400
/Legal/Insurance terminologyKnowledge o
Westwood College - BUSINESSS - MBA400
1. What type of product is GarageBand? You will need to provide the correct type andcategory to get full credit, as well as explanation.2. To which of Apples product lines does GarageBand belong? Be
sure your answer is theactual Product Line.3. Is Gar
Arizona - HCA - 250
A Day in the Life 0f JosieBeliefs and behaviors affect our health in many ways. Like Josie many people are indenial about their health. I smoked for years and I always could find ways to rationalize
mysmoking. When I was smoking the most I was a two pa
Universitas Indonesia - ACCT - 312
PSAK Pernyataan Standar Akuntansi Keuangan no 4 2009, Pedoman pelaporan akuntansi Indonesia
Universitas Indonesia - ACCT - 312
For Indonesian Accounting Policy
Universitas Indonesia - ACCT - 312
For Indonesian Accounting Policy, Pernyataan Standar Akuntansi Keuangan no 22 Revisi 2010
Universitas Indonesia - ACCT - 312
For Indonesian Accounting Policy, Pernyataan Standar Akuntansi Keuangan, No 1 Revisi 2009
UIllinois - IB - 302
Homework Assignment #1 (72.5 points), IB 302Due at the beginning of class on February 10, 2010Please print out this assignment and write your answers in the spaces provided unlessotherwise noted.1.
(24 points total) The following DNA sequence represen | {"url":"http://www.coursehero.com/file/6303418/REFERENCES-SITE-FOR-STATISTICS-AND-ANALYSIS-DATA/","timestamp":"2014-04-19T02:37:21Z","content_type":null,"content_length":"165577","record_id":"<urn:uuid:7a8fd2ea-e7f9-43c9-9805-c1083eed30b7>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dihadron fragmentation functions and their relevance for transverse spin studies
ABSTRACT Dihadron fragmentation functions describe the probability that a quark
fragments into two hadrons plus other undetected hadrons. In particular, the
so-called interference fragmentation functions describe the azimuthal asymmetry
of the dihadron distribution when the quark is transversely polarized. They can
be used as tools to probe the quark transversity distribution in the nucleon.
Recent studies on unpolarized and polarized dihadron fragmentation functions
are presented, and we discuss their role in giving insights into transverse
spin distributions.
ABSTRACT: We reconsider the option of extracting the transversity distribution by using interference fragmentation functions into two leading hadrons inside the same current jet. To this end, we
perform a new study of two-hadron fragmentation functions. We derive new positivity bounds on them. We expand the hadron pair system in relative partial waves, so that we can naturally
incorporate in a unified formalism specific cases already studied in the literature, such as the fragmentation functions arising from the interference between the s- and p-wave production of two
mesons, as well as the production of a spin-one hadron. In particular, our analysis clearly distinguishes two different ways to access the transversity distribution in two-hadron semi-inclusive
leptoproduction. Comment: 23 pages, 3 figures
Physical Review D 12/2002; · 4.69 Impact Factor
ABSTRACT: New results on single spin asymmetries of charged hadrons produced in deep-inelastic scattering of muons on a transversely polarised LiD target are presented. The data were taken in the
years 2002, 2003 and 2004 with the COMPASS spectrometer using the muon beam of the CERN SPS at 160 GeV/c. Preliminary results are given for the Sivers asymmetry and for all the three ``quark
polarimeters'' presently used in COMPASS to measure the transversity distributions. The Collins and the Sivers asymmetries for charged hadrons turn out to be compatible with zero, within the
small (~1%) statistical errors, at variance with the results from HERMES on a transversely polarised proton target. Similar results have been obtained for the two hadron asymmetries and for the
Lambda polarisation. First attempts to describe the Collins and the Sivers asymmetries measured by COMPASS and HERMES allow to give a consistent picture of these transverse spin effects.
ABSTRACT: We present a model for dihadron fragmentation functions, describing the fragmentation of a quark into two unpolarized hadrons. We tune the parameters of our model to the output of the
PYTHIA event generator for two-hadron semi-inclusive production in deep inelastic scattering at HERMES. Once the parameters of the model are fixed, we make predictions for other unknown
fragmentation functions and for a single-spin asymmetry in the azimuthal distribution of pi+ pi- pairs in semi-inclusive deep inelastic scattering on a transversely polarized target at HERMES and
COMPASS. Such asymmetry could be used to measure the quark transversity distribution function.
Physical review D: Particles and fields 09/2006;
Sun City, AZ Math Tutor
Find a Sun City, AZ Math Tutor
...I believe most students, after seeing the material presented, have the answers to the questions in their brain, but don't know how to access it. It is my job as a tutor to guide the students
to sift through the problems and obtain the correct answer. This is done by providing them with thoughtful sequential questions.
20 Subjects: including algebra 2, SAT math, discrete math, differential equations
...As I mentioned in my profile, I love numbers, and wish to help others appreciate how mathematics impacts our lives in countless ways every day. With an appreciation for numbers comes a desire
to grasp and master mathematical concepts, leading to success in the classroom and beyond. With a bache...
20 Subjects: including calculus, English, trigonometry, writing
I have taught at a valley high school for several years, covering all levels of high school math. I try to explain math in a way the student can understand, without relying on too much math language.
10 Subjects: including linear algebra, logic, algebra 1, algebra 2
...I've completed numerous homework assignments, research papers, and scientific/environmental reports via MS Word. I've had specific training on some of the earlier versions, but most of my
learning has come from experience. Although my degrees are in geology, I've done quite a bit of scientific writing associated with those studies.
7 Subjects: including algebra 1, English, writing, elementary math
...The student gets the help he/she needs and I get my refresher. It's a win-win situation. I always give positive reinforcement.
9 Subjects: including algebra 1, algebra 2, vocabulary, grammar
Advanced Calculus: An Introduction to Linear Analysis
ISBN: 978-1-118-03067-7
416 pages
February 2011
Features an introduction to advanced calculus and highlights its inherent concepts from linear algebra
Advanced Calculus reflects the unifying role of linear algebra in an effort to smooth readers' transition to advanced mathematics. The book fosters the development of complete theorem-proving skills
through abundant exercises while also promoting a sound approach to the study. The traditional theorems of elementary differential and integral calculus are rigorously established, presenting the
foundations of calculus in a way that reorients thinking toward modern analysis.
Following an introduction dedicated to writing proofs, the book is divided into three parts:
Part One explores foundational one-variable calculus topics from the viewpoint of linear spaces, norms, completeness, and linear functionals.
Part Two covers Fourier series and Stieltjes integration, which are advanced one-variable topics.
Part Three is dedicated to multivariable advanced calculus, including inverse and implicit function theorems and Jacobian theorems for multiple integrals.
Numerous exercises guide readers through the creation of their own proofs, and they also put newly learned methods into practice. In addition, a "Test Yourself" section at the end of each chapter
consists of short questions that reinforce the understanding of basic concepts and theorems. The answers to these questions and other selected exercises can be found at the end of the book along with
an appendix that outlines key terms and symbols from set theory.
Guiding readers from the study of the topology of the real line to the beginning theorems and concepts of graduate analysis, Advanced Calculus is an ideal text for courses in advanced calculus and
introductory analysis at the upper-undergraduate and beginning-graduate levels. It also serves as a valuable reference for engineers, scientists, and mathematicians.
1. Real Numbers and Limits of Sequences.
2. Continuous Functions.
3. The Riemann Integral.
4. The Derivative.
5. Infinite Series.
6. Fourier Series.
7. The Riemann-Stieltjes Integral.
8. Euclidean Space.
9. Continuous Functions on Euclidean Space.
10. The Derivative in Euclidean Space.
11. Riemann Integration in Euclidean Space.
Appendix A. Set Theory.
Problem Solutions.
Leonard F. Richardson, PhD, is Herbert Huey McElveen Professor and Assistant Chair of the Department of Mathematics at Louisiana State University, where he is also Director of Graduate Studies in
Mathematics. Dr. Richardson's research interests include harmonic analysis and representation theory.
• Provides a rigorous approach to proofs, fundamental theorems, and the foundations of calculus
• Highlights the connections and interplay between calculus and linear algebra, emphasizing the concepts of a vector space, a linear transformation (including a linear functional), a norm, and a
scalar product
• Offers a 'Test Yourself' section at the end of every chapter, which is a sample hour test with solutions to aid the study of readers
• Gradually guides the reader from the study of topology of the real line to the beginning theorems and concepts of graduate analysis, expressed from a modern viewpoint
• Presents the system of real numbers as a Cauchy-complete Archimedean ordered field, along with the traditional theorems of advanced calculus.
• Employs a multi-part outline approach and provides broad hints to guide students through the more substantial proof exercises. Solutions to most of the non-proof exercises are also provided.
"This is an excellent book, well worth considering for a textbook for an undergraduate analysis course." (MAA Reviews July, 2008)
Turbulence Modeling
CFD-101: The Basics of Computational Fluid Dynamics Modeling
The majority of flows in nature are turbulent. This raises the question, is it necessary to represent turbulence in computational models of flow processes? Unfortunately, there is no simple answer to
this question, and the modeler must exercise some engineering judgment. The following remarks cover some things to consider when faced with this question.
Definitions and Orders of Magnitude
The possibility that turbulence may occur is generally measured by the flow Reynolds number:

Re = ρUL/μ
where ρ is fluid density and μ is the dynamic viscosity of the fluid. The parameters L and U are a characteristic length and speed for the flow. Obviously, the choice of L and U are somewhat
arbitrary, and there may not be single values that characterize all the important features of an entire flow field. The important point to remember is that Re is meant to measure the relative
importance of fluid inertia to viscous forces. When viscous forces are negligible the Reynolds number is large.
A good choice for L and U is usually one that characterizes the region showing the strongest shear flow, that is, where viscous forces would be expected to have the most influence.
Roughly speaking, a Reynolds number well above 1000 is probably turbulent, while a Reynolds number below 100 is not. The actual value of a critical Reynolds number that separates laminar and
turbulent flow can vary widely depending on the nature of the surfaces bounding the flow and the magnitude of perturbations in the flow.
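As a quick numerical sketch of the definition above (the fluid properties and length scale below are illustrative assumptions, not values from the article):

```python
def reynolds_number(rho, U, L, mu):
    """Re = rho * U * L / mu: the ratio of inertial to viscous forces."""
    return rho * U * L / mu

# Assumed values: water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa*s)
# moving at U = 1 m/s through a pipe of diameter L = 0.05 m.
Re = reynolds_number(rho=1000.0, U=1.0, L=0.05, mu=1e-3)
print(Re)  # ~5e4, well above ~1000, so the flow is probably turbulent
```

Choosing L as the pipe diameter here follows the suggestion above to characterize the region of strongest shear.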
In a fully turbulent flow a range of scales exists for fluctuating velocities, often characterized as collections of different eddy structures. If L is a characteristic macroscopic length scale and l is the diameter of the smallest turbulent eddies, defined as the scale on which viscous effects are dominant, then the ratio of these scales can be shown to be of order L/l ≈ Re^(3/4). This relation follows from the assumption that, in steady state, the smallest eddies must dissipate turbulent energy by converting it into heat.
Turbulence Models
From the above relation for the range of scales it is easy to see that even for a modest Reynolds number, say Re = 10^4, the range spans three orders of magnitude, L/l = 10^3. In this case, the number of control volumes needed to resolve all the eddies in a three-dimensional computation would be greater than 10^9. Numbers of this size are well beyond current computational capabilities. For this reason, considerable effort has been devoted to the construction of approximate models for turbulence.
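The grid-count argument can be sketched directly from the scale relation L/l ≈ Re^(3/4) (a rough order-of-magnitude estimate, as in the text):

```python
def scale_ratio(Re):
    """L/l ~ Re**(3/4): macroscopic scale over smallest (dissipative) eddy scale."""
    return Re ** 0.75

def cells_needed(Re):
    """A 3-D grid resolving eddies down to scale l needs roughly (L/l)**3 cells."""
    return scale_ratio(Re) ** 3

print(scale_ratio(1.0e4))   # ~1e3: the scales span three orders of magnitude
print(cells_needed(1.0e4))  # ~1e9 control volumes, as quoted above
```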
We cannot describe turbulence modeling in any detail in this short article. Instead, we will simply make some basic observations about the types of models available. Be forewarned, however, that no
models exist for general use. Every model must be employed with discretion and its results cautiously treated.
The original turbulence modeler was Osborne Reynolds. Anyone interested in this subject should read his groundbreaking work (Phil. Trans. Royal Soc. London, Series A, Vol.186, p.123, 1895).
Reynolds’s insights and approach were both fundamental and practical.
The Pseudo-Fluid Approximation
In a fully turbulent flow it is sometimes possible to define an effective turbulent viscosity, μ_eff, that roughly approximates the turbulent mixing processes contributing to a diffusion of momentum (and other properties). Thinking of a turbulent flow as a pseudo-fluid having increased viscosity leads to the observation that the effective Reynolds number for a turbulent flow is generally less than 100:

Re_eff = ρUL/μ_eff < 100
This observation is particularly useful because it suggests a simple way to approximate some turbulent flows. In particular, when the details of the turbulence are not important, but the general
mixing behavior associated with the turbulence is, it is often possible to use an effective turbulent (eddy) viscosity in place of the molecular viscosity. The effective viscosity can often be
expressed as

μ_eff = αρUL
where α is a number between 0.02 and 0.04. This expression works well for the turbulence associated with plane and cylindrical jets entering a stagnant fluid. The effective Reynolds number associated
with this model is Re=1/α, a number between 25 and 50.
While this model is often adequate for predicting the gross features of a turbulent flow, it may not be suitable for predicting local details. For example, it would predict a parabolic flow (i.e.,
laminar) profile in a pipe instead of the measured logarithmic profile.
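A minimal sketch of this constant eddy-viscosity idea, using the form μ_eff = αρUL that is consistent with Re_eff = 1/α (the flow values in the example are assumptions):

```python
def effective_viscosity(rho, U, L, alpha=0.03):
    """Constant eddy-viscosity model: mu_eff = alpha * rho * U * L,
    with alpha roughly between 0.02 and 0.04."""
    return alpha * rho * U * L

def effective_reynolds(alpha):
    """Re_eff = rho * U * L / mu_eff reduces to 1/alpha for this model."""
    return 1.0 / alpha

print(effective_reynolds(0.02))  # 50.0
print(effective_reynolds(0.04))  # 25.0
```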
Local Viscosity Model
The next level of complexity beyond a constant eddy viscosity is to compute an effective viscosity that is a function of local conditions. This is the basis of Prandtl’s mixing-length hypothesis
where it is assumed that the viscosity is proportional to the local rate of shear. The proportionality constant has the dimensions of a length squared. The square root of this constant is referred to
as the "mixing length."
This model offers an improvement over a simple constant viscosity. For example, it predicts the logarithmic velocity profile in a pipe. However, it is not used much because it doesn’t account for
important transport effects.
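Prandtl's hypothesis can be written as a one-liner (a sketch; `l_m` is the mixing length and `dudy` the local velocity gradient, both assumed inputs):

```python
def mixing_length_viscosity(rho, l_m, dudy):
    """Prandtl mixing-length model: mu_t = rho * l_m**2 * |du/dy|.
    The eddy viscosity is proportional to the local rate of shear;
    the proportionality constant l_m**2 has dimensions of length squared."""
    return rho * l_m ** 2 * abs(dudy)

# Assumed values: air (rho ~ 1.2 kg/m^3), l_m = 1 cm, du/dy = 100 1/s
print(mixing_length_viscosity(1.2, 0.01, 100.0))  # ~0.012 Pa*s
```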
Turbulence Transport Models
For practical engineering purposes the most successful computational models have two or more transport equations. A minimum of two equations is desirable because it takes two quantities to
characterize the length and time scales of turbulent processes. The use of transport equations to describe these variables allows turbulence creation and destruction processes to have localized
rates. For instance, a region of strong shear at the corners of a building may generate strong eddies, while little turbulence is generated in the building’s wake region. The strong mixing observed
in the wakes of buildings (or automobiles and airplanes) is caused by the advection of upstream generated eddies into the wake. Without transport mechanisms, turbulence would have to instantly adjust
to local conditions, implying unrealistically large creation and destruction rates.
Nearly all transport models invoke one or more gradient assumptions in which a correlation between two fluctuating quantities is approximated by an expression proportional to the gradient of one of
the terms. This captures the diffusion-like character of turbulent mixing associated with many small eddy structures, but such approximations can lead to errors when there is significant transport by
large eddy structures.
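The article doesn't commit to a particular two-equation model. As one common illustration (an assumption on my part, not something the article specifies), the standard k–ε closure transports the turbulent kinetic energy k and its dissipation rate ε, which together set a local eddy viscosity:

```python
def k_epsilon_viscosity(rho, k, eps, C_mu=0.09):
    """Standard k-epsilon closure: mu_t = C_mu * rho * k**2 / eps.
    k (m^2/s^2) and eps (m^2/s^3) come from their own transport
    equations, so the eddy viscosity responds to advected turbulence."""
    return C_mu * rho * k ** 2 / eps

# Assumed local values of k and eps at some point in the flow
print(k_epsilon_viscosity(rho=1.2, k=0.5, eps=10.0))  # ~0.0027 Pa*s
```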
Large Eddy Simulation
Most models of turbulence are designed to approximate a smoothed out or time-averaged effect of turbulence. An exception is the Large Eddy Simulation model (or Subgrid Scale model). The idea behind
this model is that computations should be directly capable of modeling all the fluctuating details of a turbulent flow except for those too small to be resolved by the grid. The unresolved eddies are
then treated by approximating their effect using a local eddy viscosity. Generally, this eddy viscosity is made proportional to the local grid size and some measure of the local flow velocity, such
as the magnitude of the rate of strain.
Such an approach might be expected to give good results if the unresolved scales are small enough, for example, in the viscous sub-range. Unfortunately, this is still an uncomfortably small size.
When these models are used with a minimum scale size that is above the viscous sub-range, they are then referred to as Coherent Structure Capturing models.
The advantage of these more realistic models is that they provide information not only about the average effects of turbulence but also about the magnitude of fluctuations. But, this advantage is
also a disadvantage, because averages must actually be computed over many fluctuations, and some means must be provided to introduce meaningful fluctuations at the start of a computation and at
boundaries where flow enters the computational region.
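The classic subgrid closure of this type is the Smagorinsky model (the article doesn't name it, so this is an illustrative assumption): the eddy viscosity scales with the local grid size Δ and the magnitude of the resolved rate of strain |S|:

```python
def smagorinsky_viscosity(rho, delta, strain_mag, C_s=0.17):
    """Smagorinsky subgrid eddy viscosity: mu_t = rho * (C_s * delta)**2 * |S|.
    delta is the local grid spacing, strain_mag the magnitude of the
    resolved rate-of-strain tensor, and C_s ~ 0.1-0.2 a model constant."""
    return rho * (C_s * delta) ** 2 * strain_mag

# Assumed values: 1 cm cells in air with |S| = 50 1/s
print(smagorinsky_viscosity(1.2, 0.01, 50.0))  # ~1.7e-4 Pa*s
```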
Turbulence from an Engineering Perspective
We have seen that it is probably not reasonable to attempt to compute all the details of a turbulent flow. Furthermore, from the perspective of most applications, it’s not likely that we would be
interested in the local details of individual fluctuations. The question then is how should we deal with turbulence, when should we employ a turbulence model, and how complex should that model be?
Experimental observations suggest that many flows become independent of Reynolds number once a certain minimum value is exceeded. If this were not so, wind tunnels, wave tanks, and other experimental
tools would not be as useful as they are. One of the principal effects of a Reynolds number change is to relocate flow separation points. In laboratory experiments this fact sometimes requires the
use of trip wires or other devices to induce separation at desired locations. A similar treatment may be used in a numerical simulation.
Most often a simulation is done to determine the dominant flow patterns that develop in some specified situation. These patterns consist of the mean flow and the largest eddy structures containing
the majority of the kinetic energy of the flow. The details of how this energy is removed from the larger eddies and dissipated into heat by the smallest eddies may not be important. In such cases
the dissipation mechanisms inherent in numerical methods may alone be sufficient to produce reasonable results. In other cases it is possible to supply additional dissipation with a simple turbulence
model such as a constant eddy viscosity or a mixing length assumption.
Turbulence transport equations require more CPU resources and should only be used when there are strong, localized sources of turbulence and when that turbulence is likely to be advected into other
important regions of the flow.
When there is reason to seriously question the results of a computation, it is always desirable to seek experimental confirmation.
An excellent introduction to fluid turbulence can be found in the book Elementary Mechanics of Fluids by Hunter Rouse, Dover Publications, Inc., New York (1978).
Explain why each function is discontinuous at the given point.
f(x) = x/(x − 1) at x = 1
You cannot divide by zero, so the denominator x − 1 cannot be zero. If it were, then x − 1 = 0, so x = 1. That is, x = 1 makes the denominator x − 1 equal to zero. To avoid dividing by zero, x = 1 is excluded from the domain, which explains why there is a discontinuity here.
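A quick numerical sketch of the same point: f(x) = x/(x − 1) is undefined at x = 1, and its values grow without bound as x approaches 1 from either side:

```python
def f(x):
    return x / (x - 1)

for x in (0.9, 0.99, 1.01, 1.1):
    print(x, f(x))   # magnitudes grow as x approaches 1

# f(1) itself raises ZeroDivisionError: x = 1 is excluded from the
# domain, which is exactly why f is discontinuous there.
```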
negative binomial
June 26th 2008, 04:55 PM
negative binomial
The probability that a machine produces a defective item is 0.01. Each item is checked as it's produced. Assume these are independent trials and compute the probability that at least 100 items must
be checked to find 1 defective item.
answer: 0.3967
I let x= 100, r= 1,
99C0 * (0.01)(0.99)^99, which gives 0.003967
Wrong answer. Explain please.
June 26th 2008, 05:06 PM
I think what you found is the wrong probability.
What you are looking for is the probability that, after 99 items have gone by, you still haven't found a defective one. Then, anything that comes after it is irrelevant, because the first time
you find a defective one, you will have checked at least 100 items.
So, you want to find the probability that you get 99 non-defective items.
June 26th 2008, 05:12 PM
Well, if I dont multiply by the 0.01 I get the right answer.
But that's not what the question asks, or it doesn't seem so to me.
June 26th 2008, 05:20 PM
Why are you multiplying by .01?
June 26th 2008, 05:38 PM
It's the formula for a negative binomial.
June 26th 2008, 05:39 PM
Formula for a negative binomial (probability that the r-th success occurs on trial x):
P(X = x) = C(x-1, r-1) * p^r * (1-p)^(x-r)
June 26th 2008, 07:38 PM
The probability that at least $100$ must be checked is the probability that the first $99$ are not defective, which is $0.99^{99} \approx 0.3697$
June 26th 2008, 08:44 PM
mr fantastic
You have calculated the probability of exactly 100 items checked before the first defective, rather than at least 100 items ........ There's a big difference. | {"url":"http://mathhelpforum.com/advanced-statistics/42544-negative-binomial-print.html","timestamp":"2014-04-21T00:56:16Z","content_type":null,"content_length":"8378","record_id":"<urn:uuid:530f828d-c459-4599-8b3f-6337f7c94a69>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00094-ip-10-147-4-33.ec2.internal.warc.gz"} |
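Both computations in the thread are easy to verify numerically. A quick sketch (Python, mine, not from the thread); note that 0.99^99 ≈ 0.3697, so the book's quoted answer of 0.3967 looks like a digit transposition.

```python
p = 0.01  # probability that any one item is defective

# P(at least 100 items must be checked) = P(the first 99 items are all good)
p_at_least_100 = (1 - p) ** 99

# Negative binomial / geometric pmf with r = 1, x = 100:
# P(first defective is EXACTLY the 100th item) = C(99, 0) * p * (1 - p)^99
p_exactly_100 = p * (1 - p) ** 99

print(round(p_at_least_100, 4))  # 0.3697
print(round(p_exactly_100, 6))   # 0.003697
```

The two answers differ by exactly the factor p = 0.01, which is the extra multiplication the original poster made.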
Results 1 - 10 of 12
"... We study the problem of finding small trees. Classical network design problems are considered with the additional constraint that only a specified number k of nodes are required to be connected
in the solution. A prototypical example is the kMST problem in which we require a tree of minimum weight s ..."
Cited by 65 (2 self)
We study the problem of finding small trees. Classical network design problems are considered with the additional constraint that only a specified number k of nodes are required to be connected in
the solution. A prototypical example is the kMST problem in which we require a tree of minimum weight spanning at least k nodes in an edge-weighted graph. We show that the kMST problem is NP-hard
even for points in the Euclidean plane. We provide approximation algorithms with performance ratio 2√k for the general edge-weighted case and O(k^{1/4}) for the case of points in the plane.
Polynomial-time exact solutions are also presented for the class of treewidth-bounded graphs which includes trees, series-parallel graphs, and bounded bandwidth graphs, and for points on the boundary
of a convex region in the Euclidean plane. We also investigate the problem of finding short trees, and more generally, that of finding networks with minimum diameter. A simple technique is used to
, 1997
"... This is the preliminary version of a chapter that will appear in the Handbook on Computational Geometry, edited by J.-R. Sack and J. Urrutia. A comprehensive overview is given of algorithms and
data structures for proximity problems on point sets in R^D. In particular, the closest pair problem, th ..."
Cited by 65 (14 self)
This is the preliminary version of a chapter that will appear in the Handbook on Computational Geometry, edited by J.-R. Sack and J. Urrutia. A comprehensive overview is given of algorithms and data
structures for proximity problems on point sets in R^D. In particular, the closest pair problem, the exact and approximate post-office problem, and the problem of constructing spanners are discussed in detail. Contents: 1 Introduction; 2 The static closest pair problem: 2.1 Preliminary remarks; 2.2 Algorithms that are optimal in the algebraic computation tree model (2.2.1 An algorithm based on the Voronoi diagram; 2.2.2 A divide-and-conquer algorithm; 2.2.3 A plane sweep algorithm); 2.3 A deterministic algorithm that uses indirect addressing (2.3.1 The degraded grid) ...
, 1994
"... We introduce a new method for finding several types of optimal k-point sets, minimizing perimeter, diameter, circumradius, and related measures, by testing sets of the O(k) nearest neighbors to each point. We argue that this is better in a number of ways than previous algorithms, which were based o ..."
Cited by 56 (6 self)
We introduce a new method for finding several types of optimal k-point sets, minimizing perimeter, diameter, circumradius, and related measures, by testing sets of the O(k) nearest neighbors to each point. We argue that this is better in a number of ways than previous algorithms, which were based on high order Voronoi diagrams. Our technique allows us for the first time to efficiently maintain minimal sets as new points are inserted, to generalize our algorithms to higher dimensions, to find minimal convex k-vertex polygons and polytopes, and to improve many previous results. We achieve many of our results via a new algorithm for finding rectilinear nearest neighbors in the plane in time O(n log n + kn). We also demonstrate a related technique for finding minimum area k-point sets in the plane, based on testing sets of nearest vertical neighbors to each line segment determined by a pair of points. A generalization of this technique also allows us to find minimum volume and boundary measure sets in arbitrary dimensions.
- DISCRETE & COMPUTATIONAL GEOMETRY , 1992
"... Given a set P of n points in the plane and a number k, we want to find a polygon Q with vertices in P of minimum area that satisfies one of the following properties: (1) Q is a convex k-gon, (2) Q is an empty convex k-gon, or (3) Q is the convex hull of exactly k points of P. We give algorithms ..."
Cited by 23 (5 self)
Given a set P of n points in the plane and a number k, we want to find a polygon Q with vertices in P of minimum area that satisfies one of the following properties: (1) Q is a convex k-gon, (2) Q is an empty convex k-gon, or (3) Q is the convex hull of exactly k points of P. We give algorithms for solving each of these three problems in time O(kn^3). The space complexity is O(n) for k = 4 and O(kn^2) for k ≥ 5. The algorithms are based on a dynamic programming approach. We generalize this approach to polygons with minimum perimeter, polygons with maximum perimeter or area, polygons containing the maximum or minimum number of points, polygons with minimum weight (for some weights added to vertices), etc., in similar time bounds.
- IEEE TRANSACTIONS ON COMPUTERS , 1984
"... We survey the state of the art of computational geometry, a discipline that deals with the complexity of geometric problems within the framework of the analysis of algorithms. This newly emerged area of activities has found numerous applications in various other disciplines, such as computer-aided de ..."
Cited by 19 (3 self)
We survey the state of the art of computational geometry, a discipline that deals with the complexity of geometric problems within the framework of the analysis of algorithms. This newly emerged area of activities has found numerous applications in various other disciplines, such as computer-aided design, computer graphics, operations research, pattern recognition, robotics, and statistics. Five major problem areas (convex hulls, intersections, searching, proximity, and combinatorial optimizations) are discussed. Seven algorithmic techniques (incremental construction, plane-sweep, locus, divide-and-conquer, geometric transformation, prune-and-search, and dynamization) are each illustrated with an example. A collection of problem transformations to establish lower bounds for geometric problems in the algebraic computation/decision model is also included.
, 1991
"... Given a set P of n points in the plane, we wish to find a set Q ⊆ P of k points for which the convex hull conv(Q) has the minimum area. ..."
Cited by 9 (1 self)
Given a set P of n points in the plane, we wish to find a set Q ⊆ P of k points for which the convex hull conv(Q) has the minimum area.
- IN PROC. 9TH ANNU. EUROPEAN SYMPOS. ALGORITHMS , 2001
"... Motivated by questions in location planning, we show for a set of colored point sites in the plane how to compute the smallest— by perimeter or area—axis-parallel rectangle and the narrowest
strip enclosing at least one site of each color. ..."
Cited by 5 (2 self)
Motivated by questions in location planning, we show for a set of colored point sites in the plane how to compute the smallest— by perimeter or area—axis-parallel rectangle and the narrowest strip
enclosing at least one site of each color.
- In Proceedings of the 18th Canadian Conference on Computational Geometry (CCCG , 2006
"... We consider the problem of removing c points from a set S of n points so that the resulting point set has the smallest possible convex hull. Our main result is an O � n � � � � � 4c c 2c (3c) +
log n time algorithm that solves this problem when “smallest ” is taken to mean least area or least perim ..."
Cited by 3 (3 self)
We consider the problem of removing c points from a set S of n points so that the resulting point set has the smallest possible convex hull. Our main result is an O � n � � � � � 4c c 2c (3c) + log n
time algorithm that solves this problem when “smallest ” is taken to mean least area or least perimeter. 1
- Abstracts 17th European Workshop Comput. Geom. CG 2001, Freie Universität , 2001
"... Manuel Abellanas, Ferran Hurtado, Christian Icking, Rolf Klein, Elmar Langetepe, Lihong Ma, Belen Palop, Vera Sacristan. Suppose there are k types of facilities, e. g. schools, post offices, supermarkets, modeled by n colored points in the plane, each type ..."
Cited by 3 (0 self)
Manuel Abellanas, Ferran Hurtado, Christian Icking, Rolf Klein, Elmar Langetepe, Lihong Ma, Belen Palop, Vera Sacristan. Suppose there are k types of facilities, e. g. schools, post offices, supermarkets, modeled by n colored points in the plane, each type by its own color. One basic goal in choosing a residence location is in having at least one representative of each facility type in the neighborhood. In this paper we provide algorithms that may help to achieve this goal for various specifications of the term "neighborhood". Several problems on multicolored point sets have been previously considered, such as the bichromatic closest pair, see e. g. Preparata and Shamos [14, Section 5.7], Agarwal et al. [1], and Graf and Hinrichs [8], the group Steiner tree, see Mitchell [11, Section 7.1], or the chromatic nearest neighbor search, see Mount et al. [12]. Let us call a set color-spanning if it contains at least one point of each color. A natural app...
, 1994
"... We present efficient parallel algorithms for some geometric clustering and partitioning problems. Our algorithms run in the CREW PRAM model of parallel computation. Given a point set P of n
points in two dimensions, the clustering problems are to find a k-point subset such that some measure for ..."
Cited by 3 (0 self)
We present efficient parallel algorithms for some geometric clustering and partitioning problems. Our algorithms run in the CREW PRAM model of parallel computation. Given a point set P of n points in two dimensions, the clustering problems are to find a k-point subset such that some measure for this subset is minimized. We consider the problems of finding a k-point subset with minimum L1 perimeter and minimum L1 diameter. For the L1 perimeter case, our algorithm runs in O(log^2 n) time and O(n log^2 n + n k^2 log^2 k) work. For the L1 diameter case, our algorithm runs in O(log^2 n + log^2 k log log k log k) time and O(n log^2 n) work. We consider partitioning problems of the following nature. Given a planar point set S (|S| = n), a measure acting on S and a pair of values lambda_1 and lambda_2, does there exist a bipartition S = S_1 ∪ S_2 such that the measure of S_i is at most lambda_i for i = 1, 2? We consider several measures like diameter under the L1 and L-infinity metrics; area, perimeter of the smallest...
Angle Sum Formula
How to use formula to express exact values
The angle sum identities take two different formulas
• sin(A+B) = sinAcosB + cosAsinB
• cos(A+B) = cosAcosB − sinAsinB
These formulas allow you to express the exact value of trigonometric expressions that you could not otherwise evaluate. Consider sin(105°). Unlike sin(30°), which can be expressed as ½, sin(105°) cannot simply be written down as a rational number. However, the angle sum formula allows you to represent the exact value of this function | {"url":"http://www.mathwarehouse.com/trigonometry/identities/angle-sum-formula.php","timestamp":"2014-04-17T09:33:48Z","content_type":null,"content_length":"19385","record_id":"<urn:uuid:bf15696e-fd64-46b7-9303-844738003e37>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
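As a worked example (my own, not spelled out on the original page), write 105° = 60° + 45° and apply the sine sum identity:

```latex
\begin{aligned}
\sin(105^\circ) &= \sin(60^\circ + 45^\circ) \\
  &= \sin 60^\circ \cos 45^\circ + \cos 60^\circ \sin 45^\circ \\
  &= \frac{\sqrt{3}}{2}\cdot\frac{\sqrt{2}}{2} + \frac{1}{2}\cdot\frac{\sqrt{2}}{2} \\
  &= \frac{\sqrt{6} + \sqrt{2}}{4}
\end{aligned}
```

So sin(105°) has the exact value (√6 + √2)/4, even though 105° is not itself a special angle.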
plane intersection - Java-Gaming.org
I am trying to convert .map file brushes (quake, hl) into models. The data of a brush, which is just a shape in the editor, is given as a bunch of planes, each being defined by 3 points (each with
x,y,z). It is the intersection of any 3 planes that gives you the actual vertex points of the shape (for those that do intersect).
I am trying to use this page
to achieve it, but I cannot get it to work.
This is the formula from that page:
I have been told the first part (the bit to the negative 1) is calculated by:
dotProduct [normal1, crossProduct (normal2, normal3 ) ]; does anyone know if this is correct?
Second, these normals for each plane, I have been calculating these:
For 3 points, p1, p2, p3
cross( (p2 - p1), (p3-p1) );
After this I normalize them.
Finally, in that formula, is the x1, x2, x3 simply one of my 3 points for each plane?
Would be really grateful if anyone who knows this math could clarify those points for me. Alternatively if anyone knows of any code that finds the intersection of 3 planes when it exists, that would
be awesome | {"url":"http://www.java-gaming.org/index.php?topic=31430.msg292275","timestamp":"2014-04-19T03:08:58Z","content_type":null,"content_length":"115198","record_id":"<urn:uuid:4095d37a-104f-4a01-b3ae-5ab447e9c680>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00171-ip-10-147-4-33.ec2.internal.warc.gz"} |
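To answer the two questions in the post (my own sketch in pure Python, assuming each plane is stored as n · x = d): yes, the leading scalar is 1 / (n1 · (n2 × n3)), the scalar triple product, and if it is zero the three planes do not meet in a single point. The per-plane offsets are d_i = n_i · p_i, where p_i can be any of the three points that define plane i.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    """Dot product of two 3-vectors."""
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def plane_from_points(p1, p2, p3):
    """Plane through three points, returned as (n, d) with n . x = d."""
    n = cross(sub(p2, p1), sub(p3, p1))
    return n, dot(n, p1)

def intersect_three_planes(pl1, pl2, pl3):
    """Unique intersection point of three planes, or None if degenerate."""
    (n1, d1), (n2, d2), (n3, d3) = pl1, pl2, pl3
    denom = dot(n1, cross(n2, n3))  # scalar triple product
    if abs(denom) < 1e-9:
        return None  # planes are parallel or meet in a line, not a point
    c23, c31, c12 = cross(n2, n3), cross(n3, n1), cross(n1, n2)
    return tuple((d1*c23[i] + d2*c31[i] + d3*c12[i]) / denom for i in range(3))

# Planes z = 2, x = 1 and y = 3, each given by three points; the vertex
# they define is (1, 3, 2).
pz = plane_from_points((0, 0, 2), (1, 0, 2), (0, 1, 2))
px = plane_from_points((1, 0, 0), (1, 1, 0), (1, 0, 1))
py = plane_from_points((0, 3, 0), (0, 3, 1), (1, 3, 0))
print(intersect_three_planes(pz, px, py))  # (1.0, 3.0, 2.0)
```

For a .map brush you would run this over every triple of faces and keep only the points that lie on or inside all of the brush's planes.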
Functional interpretation
Results 1 - 10 of 13
- The Journal of Symbolic Logic , 1998
"... We present a possible computational content of the negative translation of classical analysis with the Axiom of Choice. Our interpretation seems computationally more direct than the one based on Gödel's Dialectica interpretation [10, 18]. Interestingly, this interpretation uses a refinement of the rea ..."
Cited by 34 (1 self)
We present a possible computational content of the negative translation of classical analysis with the Axiom of Choice. Our interpretation seems computationally more direct than the one based on Gödel's Dialectica interpretation [10, 18]. Interestingly, this interpretation uses a refinement of the realizability semantics of the absurdity proposition, which is not interpreted as the empty type here. We also show how to compute witnesses from proofs in classical analysis, and how to interpret the axiom of Dependent Choice and Spector's Double Negation Shift.
- J. SYMBOLIC LOGIC , 1997
"... In [15],[16] Kreisel introduced the no-counterexample interpretation (n.c.i.) of Peano arithmetic. In particular he proved, using a complicated ε-substitution method (due to W. Ackermann), that for every theorem A (A prenex) of first-order Peano arithmetic PA one can find ordinal recursive functi ..."
Cited by 18 (10 self)
In [15],[16] Kreisel introduced the no-counterexample interpretation (n.c.i.) of Peano arithmetic. In particular he proved, using a complicated ε-substitution method (due to W. Ackermann), that for every theorem A (A prenex) of first-order Peano arithmetic PA one can find ordinal recursive functionals Φ_A of order type < ε_0 which realize the Herbrand normal form A^H of A. Subsequently more
- Journal of Symbolic Logic
"... Abstract. Extending Gödel’s Dialectica interpretation, we provide a functional interpretation of classical theories of positive arithmetic inductive definitions, reducing them to theories of
finite-type functionals defined using transfinite recursion on well-founded trees. 1. ..."
Cited by 7 (2 self)
Abstract. Extending Gödel’s Dialectica interpretation, we provide a functional interpretation of classical theories of positive arithmetic inductive definitions, reducing them to theories of
finite-type functionals defined using transfinite recursion on well-founded trees. 1.
"... Martin-Löf's type theory can be described as an intuitionistic theory of iterated inductive definitions developed in a framework of dependent types. It was originally intended to be a full-scale
system for the formalization of constructive mathematics, but has also proved to be a powerful framewo ..."
Cited by 6 (0 self)
Martin-Löf's type theory can be described as an intuitionistic theory of iterated inductive definitions developed in a framework of dependent types. It was originally intended to be a full-scale
system for the formalization of constructive mathematics, but has also proved to be a powerful framework for programming. The theory integrates an expressive specification language (its type system)
and a functional programming language (where all programs terminate). There now exist several proof-assistants based on type theory, and many non-trivial examples from programming, computer science,
logic, and mathematics have been implemented using these. In this series of lectures we shall describe type theory as a theory of inductive definitions. We emphasize its open nature: much like in a
standard functional language such as ML or Haskell the user can add new types whenever there is a need for them. We discuss the syntax and semantics of the theory. Moreover, we present some examples
- Logic, Methodology, and the Philosophy of Science IX, Elsevier , 1994
"... This article will survey the state of the art nowadays, in particular recent advance in proof theory beyond admissible proof theory, giving some prospects of success of obtaining an ordinal
analysis of \Pi ..."
Cited by 5 (2 self)
This article will survey the state of the art nowadays, in particular recent advance in proof theory beyond admissible proof theory, giving some prospects of success of obtaining an ordinal analysis
of \Pi
, 1992
"... In this paper we shall investigate fragments of Kripke-Platek set theory with Infinity which arise from the full theory by restricting Foundation to Π_n Foundation, where n ≥ 2. The strength of such fragments will be characterized in terms of the smallest ordinal α such that L_α is a model o ..."
Cited by 4 (4 self)
In this paper we shall investigate fragments of Kripke-Platek set theory with Infinity which arise from the full theory by restricting Foundation to Π_n Foundation, where n ≥ 2. The strength of such fragments will be characterized in terms of the smallest ordinal α such that L_α is a model of every Π_2 sentence which is provable in the theory. 1 Introduction Kripke-Platek set theory plus Infinity (hereinafter called KPω) is a truly remarkable subsystem of ZF. Though considerably weaker than ZF, a great deal of set theory requires only the axioms of this subsystem (cf. [Ba]). KPω consists of the axioms Extensionality, Pair, Union, (Set) Foundation, Infinity, along with the schemas of Δ_0-Collection, Δ_0-Separation, and Foundation for Definable Classes. So KPω arises from ZF by completely omitting Power Set and restricting Separation and Collection to Δ_0-formulas. These alterations are suggested by the informal notion of "predicative". KPω is an
"... Abstract. We prove that the (non-intuitionistic) law of the double negation shift has a bounded functional interpretation with bar recursive functionals of finite type. As an application, we
show that full numerical comprehension is compatible with the uniformities introduced by the characteristic p ..."
Cited by 2 (1 self)
Abstract. We prove that the (non-intuitionistic) law of the double negation shift has a bounded functional interpretation with bar recursive functionals of finite type. As an application, we show
that full numerical comprehension is compatible with the uniformities introduced by the characteristic principles of the bounded functional interpretation for the classical case. §1. Introduction and
background. In 1962 [14], Clifford Spector gave a remarkable characterization of the provably recursive functionals of full secondorder arithmetic (a.k.a. analysis). The central result of his paper
is an extension, from arithmetic to analysis, of the (then quite recent) dialectica interpretation of Gödel of 1958 [7]. Spector’s extension relies on a form of well-founded recursion
- in Computational Logic and Proof Theory (G. Gottlob et al eds.), Lecture Notes in Computer Science 713 , 1997
"... this article has appeared in Computational Logic and Proof Theory (Proc. 3 ..."
"... . The paper investigates the strength of the Anti-Foundation Axiom, AFA, on the basis of Kripke-Platek set theory without Foundation. It is shown that the addition of AFA considerably increases
the proof theoretic strength. MSC: 03F15, 03F35. Keywords: Anti-foundation axiom, Kripke-Platek set theory ..."
Cited by 1 (1 self)
. The paper investigates the strength of the Anti-Foundation Axiom, AFA, on the basis of Kripke-Platek set theory without Foundation. It is shown that the addition of AFA considerably increases the
proof theoretic strength. MSC: 03F15, 03F35. Keywords: Anti-foundation axiom, Kripke-Platek set theory, subsystems of second order arithmetic. 1. Introduction Intrinsically circular phenomena have come to
the attention of researchers in differing fields such as mathematical logic, computer science, artificial intelligence, linguistics, cognitive science, and philosophy. Logicians first explored set
theories whose universe contains what are called non-wellfounded sets, or hypersets (cf. [6], [2]). But the area was considered rather exotic until these theories were put to use in developing
rigorous accounts of circular notions in computer science (cf. [4]). Instead of the Foundation Axiom these set theories adopt the so-called Anti-Foundation Axiom, AFA, which gives rise to a rich
universe of ... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1606241","timestamp":"2014-04-20T06:49:13Z","content_type":null,"content_length":"34211","record_id":"<urn:uuid:e361eca2-e103-4658-899e-04ff17c8140c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Phong model or phong specular reflection model, Computer Graphics
Phong Model or Phong Specular Reflection Model
It is an empirical model that is not based on physics, although physical observation. Phong observed here for extremely shiny surfaces the specular highlight was minute and the intensity fell off
quickly, whereas for duller surfaces this was superior and fell off more slowly. He decided to allow the reflected intensity be a function of (cos θ)^n with n >= 200 for a shiny surface and n minute
for a dull surface. For an ideal reflector n equals infinity, and for a part of cardboard n equals 0 or 1. In the diagram demonstrated below we can observe how the function (cos θ)n behaves for
various values of n. This empirical model for computing the specular reflection range was developed via Phong and therefore termed as PHONG MODEL/ PHONG SPECULAR REFLECTION MODEL. That is the
intensity of specular reflection is proportional to cos^n a (a lies in between 0° & 90°) consequently cos a varies from 1 to 0. Here 'n' is specular reflection parameter dependent upon the type of
Remember that the Phong illumination equation is simply the Lambert illumination equation along with an additional summand to account for ambient reflection and specular reflection.
Posted Date: 4/5/2013 2:48:25 AM | Location : United States
Your posts are moderated
Put the system of a geometric data table for a 3d rectangle. Solution : Vertex Table Edge Table Polygon Surface Table
OBJECTIVE Since graphics plays a very important role in modern computer application, it is important to know more information about its hardware and software operations. Despite
Ray Tracing Algorithm - Recursive Frequently, the basic ray tracing algorithm is termed as a "recursive" acquiring an outcome wherein a given process repeats itself an arbitr
Question 1 Explain any ten types of artistic filters Question 2 What is Freehand Tool? Explain the procedure of drawing lines and curves with freehand tool Question 3 Ho
Given arbitrary 8 values at the vertices of a cube, please draw the curved iso-surfaces with shading. Also, please draw the saddle point. This can be done relatively easy with phys
Icon Based or Event Driven Tools In such authoring systems, multimedia components and interaction cues or events are organized like objects in a structural process or framework
Important Points about the Transformation for isometric projection Note: We can also verify such Isometric transformation matrix through checking all the foreshortening fact
JPEG Graphics: Another graphic file format usually utilized on the Web to minimize graphics file sizes is the Joint Photographic Experts Group that is JPEG compression scheme. Not
Animation and Games - CAD and CAM In our childhood, we all have seen that the flip books of cricketers that came free with some soft drink and where some pictures of similar p
Two-Dimensional Geometric Transformations When a real life object is modelled using shape primitives, there are several possible applications. You may be required to do furth | {"url":"http://www.expertsmind.com/questions/phong-model-or-phong-specular-reflection-model-30150661.aspx","timestamp":"2014-04-20T03:19:48Z","content_type":null,"content_length":"30494","record_id":"<urn:uuid:956007c3-2680-436e-8d74-c19aee914889>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
Making Stuff is Scary
August 15, 2011
By Milk Trader
My daughter's best friend lives just down the street. Her mother runs a cupcake shop that's just a little further down the street. Being eleven going on sixteen, my daughter fancies herself a "worker" at the shop. She's not paid in actual money, given child labor laws and all, but sometimes she brings home some cupcakes. The family enjoys them, fighting over certain ones but eating them
all in the end. When it's scary story night at the dinner table, I'm always ready with a fan-cheeky paluki tale if not a binblebot story. Our daughter tells the scariest stories though. The stories
of what happens to ingredients that come in the back door of the cupcake shop.
As she relates it, all manner of horrifying things happen to the unsuspecting goods. Eggs get broken, their shells carelessly discarded. Butter gets melted, sugar gets pulverized to dust and flour
gets churned while wet stuff gets incorporated into it. My youngest son can almost not bear these stories. His eyes drop to table level when the story starts.
After dinner and our night-time routine, when the lights are dimmed and the doors locked, I sometimes go into my own laboratory. What happens here though is simply too scary to tell to the children.
Our bed is not large enough for three extra children, all of whom would likely come during the middle of the night from nightmares if they knew this story. In my lab, I do things to data that must not
be spoken of.
My intentions are good. I'm not trying to create the bizarre creatures that sometimes form from seemingly nowhere. I'm trying to manufacture nice things. Sweet things that people can enjoy. But
sometimes the ingredients get mixed in just the wrong way. It's why the lab is double-bolted and hidden in a secret location. Protected with lasers and in-the-wall machine guns. Alright, no lasers or
guns, but it is, shall we say, secure.
The latest malformed creature to escape is the product of my Bayesian Prolog machine. I either put in the wrong ingredients (not likely) or my logic machine is missing a critical sprocket or gear.
I'm not sure yet. The plan after the initial, requisite clean-up effort is complete, is to create a smaller version of this machine where I know what the answer should be. If I get a working model at
the simple level, I can expand it. Here is the Prolog code that I have so far. The mysterious ! symbol in the rules section is a cut, which commits Prolog to the current clause, much like an early return in an imperative language. Careful, like I said it's either missing something or something is in the wrong place.

prob(spx(bear), 0.3039216).
prob(spx(bull), 0.6960784).
prob(spx(up_two), 0.02173633).

conprob(spx(bear) ^ spx(up_two), 0.0125129).
conprob(spx(bull) ^ spx(up_two), 0.009158927).

isprob(A, P) :- prob(A, P), !.
isprob(A^B, P) :- conprob(A^B, P), !.
isprob(A^B, P) :- conprob(B^A, PBA), isprob(A, PA), isprob(B, PB), P is PBA * PA / PB.
I used my new R data mining script to get the percentage values you see in the fact section at the top. I also used R to get the conditional fact values. The function downloads all the SPX data (the ticker is passed in as the parameter sym) from Yahoo with the following line:
x <- getSymbols(sym, from="1900-01-01", auto.assign=FALSE)
I set auto.assign=FALSE because I need the object to be manipulated in the function. I return an object after doing simple look-forward and look-backward manipulations. I also employ an R package's running-mean function to keep a running total of the 50-day and 200-day moving averages. I use these averages to define bull and bear markets. Let's define the S variable as containing the complete set of data from the data mining function. To get a slice of it (i.e., bear and bull markets) requires a simple indexing operation. Like this:
> bull <- S[S$r.50 > S$r.200]
The r.50 and r.200 vectors are where the 50- and 200-day average values are stored. I'm not sure why I called them r.50 and r.200. Come to think of it, it's not very intuitive, is it? I'll get to that
later. In any case, to get the percentage of time the SPX is a bull market requires a grade-school calculation:
> nrow(bull)/nrow(S)
[1] 0.683243
Now you know where I get the probability fact that goes into the beginning of the Prolog script. The contingency fact (I think this is where my problem is) is calculated below. The bull.up object is
all instances where the return was greater than 2% and the market was in a bull regime.
bull.up <- S[S$RET > 0.02 & S$r.50 > S$r.200 ]
Then to get the value:
> nrow(bull.up)/nrow(S)
[1] 0.009158927
Well, after all of this is compiled in a Prolog session, the following query generates the following result:
?- isprob(spx(up_two) ^ spx(bull), P).
P = 0.000286004363470997
But a direct calculation in R gives the following result (which I trust more):
> ans <- bull[bull$RET > 0.02]
> nrow(ans)/nrow(bull)
[1] 0.01340508
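For the record, both numbers can be reproduced with grade-school arithmetic. A sketch (assuming the conprob facts hold joint probabilities, which is what the R snippets above actually compute, while the Bayes clause treats them as conditionals):

```python
# Facts mined in R and fed to the Prolog script
p_up_two = 0.02173633          # P(daily SPX return > 2%)
p_bull_fact = 0.6960784        # P(bull) as stored in the prob/2 fact
p_bull_empirical = 0.683243    # nrow(bull)/nrow(S) from the R session above
joint_bull_up = 0.009158927    # nrow(bull.up)/nrow(S) = P(bull AND up_two)

# What the third isprob/2 clause computes for the query: it treats
# conprob(bull ^ up_two) as if it were a conditional and applies Bayes' rule.
prolog_answer = joint_bull_up * p_up_two / p_bull_fact

# What the direct R calculation computes: the plain conditional
# P(up_two | bull) = P(bull AND up_two) / P(bull).
direct_answer = joint_bull_up / p_bull_empirical

print(prolog_answer, direct_answer)
```

If that reading is right, dividing the joint by the marginal — rather than applying the Bayes clause — reproduces the direct R answer; the smaller test model planned above should confirm or refute this.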
So it's back to the algorithm drawing board I suppose. Tomorrow night is joke night at the dinner table. I think I've got some good material.
To leave a comment for the author, please follow the link and comment on his blog: Milk Trader.
MathGroup Archive: February 2004 [00518]
Re: Chebyshev's Identity
• To: mathgroup at smc.vnet.net
• Subject: [mg46515] Re: Chebyshev's Identity
• From: bobhanlon at aol.com (Bob Hanlon)
• Date: Fri, 20 Feb 2004 22:58:58 -0500 (EST)
• References: <c14spn$rna$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
By iy I assume that you meant I*y
{{(Cos[x/n]/E^((I*y)/n))^n, (Sin[x/n]/E^((I*y)/n))^n},
{((-E^((I*y)/n))*Sin[x/n])^n, (E^((I*y)/n)*Cos[x/n])^n}}
Limit[M, n->Infinity]
{{E^((-I)*y), 0}, {0, E^(I*y)}}
Note that each term in the original matrix was individually raised to power n.
By "matrix raised to power n" you may have meant the true matrix power, MatrixPower[M, n]; in that case
Limit[M, n->Infinity]//FullSimplify
{{Cosh[Sqrt[-x^2 - y^2]] - (I*y*Sinh[Sqrt[-x^2 - y^2]])/
Sqrt[-x^2 - y^2], (x*Sinh[Sqrt[-x^2 - y^2]])/
Sqrt[-x^2 - y^2]},
{-((x*Sinh[Sqrt[-x^2 - y^2]])/Sqrt[-x^2 - y^2]),
Cosh[Sqrt[-x^2 - y^2]] + (I*y*Sinh[Sqrt[-x^2 - y^2]])/
Sqrt[-x^2 - y^2]}}
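The FullSimplify limit can be sanity-checked numerically. A sketch in Python (the entries of M are inferred from the element-wise powers above; x, y, and n are arbitrary test values):

```python
import cmath

def mat_mul(A, B):
    """2x2 complex matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_pow(A, n):
    """A**n by repeated squaring."""
    R = [[1, 0], [0, 1]]
    while n:
        if n & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        n >>= 1
    return R

def chebyshev_limit(x, y):
    """The FullSimplify result: entries of exp([[-I*y, x], [-x, I*y]])."""
    s = cmath.sqrt(-x*x - y*y)
    c, sh = cmath.cosh(s), cmath.sinh(s) / s
    return [[c - 1j*y*sh, x*sh],
            [-x*sh, c + 1j*y*sh]]

x, y, n = 0.7, 0.3, 10**6
e = cmath.exp(-1j*y/n)
M = [[cmath.cos(x/n)*e,  cmath.sin(x/n)*e],
     [-cmath.sin(x/n)/e, cmath.cos(x/n)/e]]
Mn = mat_pow(M, n)
lim = chebyshev_limit(x, y)
err = max(abs(Mn[i][j] - lim[i][j]) for i in range(2) for j in range(2))
print(err)
```

The closed form is just the matrix exponential of [[-I y, x], [-x, I y]], since M is I plus (1/n) times that matrix up to higher-order terms; this is what Chebyshev's identity produces here.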
Bob Hanlon
In article <c14spn$rna$1 at smc.vnet.net>, "Ravinder Kumar B."
<ravi at crest.ernet.in> wrote:
<< I have a (2x2) matrix raised to power n.
M = ({{Cos[x/n]*Exp[-iy/n],
All I know at present is that this expression can be further
simplified analytically using Chebyshev's identity to a much simpler
expression in the limit n -> infinity.
I am unable to find any information regarding Chebyshev's identity and its use.
Could someone please tell me more about this identity and its usage in
solving the above expression. Mathematica fails to do it analytically. >>
Wolfram Demonstrations Project
Impact of Sample Size on Approximating the Triangular Distribution
You can select the minimum, mode, and maximum parameter values for the triangular distribution. By definition, the minimum < mode < maximum. The sample probability distribution is compared to the
theoretical distribution as you increase the sample size. In general, as the sample size increases, the sample distribution more closely matches the theoretical distribution. The red dot shows
the mean value for the theoretical distribution.
The triangular distribution is used in discrete-event and Monte Carlo simulation as a key probability distribution for modeling randomness. This Demonstration compares the sample triangular
probability distribution with the theoretical distribution. Probability and statistical theory shows us that as the number of samples increases for the given parameter values, the sample probability distribution will more closely resemble the theoretical distribution. You can verify this by specifying the minimum, mode, and maximum parameter values that describe a sample triangular
probability distribution. The specified number of samples is randomly generated and compared to the theoretical distribution.
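The convergence the Demonstration illustrates can be sketched outside Mathematica as well, here with Python's standard library (random.triangular takes low, high, mode; the parameter values are arbitrary), tracking how the sample mean approaches the theoretical mean (min + mode + max)/3 — the red dot in the Demonstration:

```python
import random

random.seed(1)
low, mode, high = 2.0, 5.0, 11.0            # arbitrary parameters, low < mode < high
theoretical_mean = (low + mode + high) / 3  # mean of the triangular distribution

for n in (100, 10_000, 1_000_000):
    sample = [random.triangular(low, high, mode) for _ in range(n)]
    sample_mean = sum(sample) / n
    print(n, round(sample_mean, 4), round(abs(sample_mean - theoretical_mean), 4))
```

The same comparison for the full distribution, rather than just its mean, is what the Demonstration's histogram shows.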
Contributed by:
Paul Savory
(University of Nebraska-Lincoln)
Conjugation in GL(n) (p-adic setting)
In $GL(n, \mathbb{Q}_p)$, what are the orbits under conjugation of $GL(n, \mathbb{Z}_p)$?
The problem can be reduced to that of classifying $GL(n,\mathbf{Z}_p)$ conjugacy classes in $M(n,\mathbf{Z}_p)$. The situation for general $n$ is complicated, but for $n=2$ the problem is settled by
the following.
Let $F\in M(2,\mathbf{Z}_p)$ be any matrix, let $f(x)$ be its characteristic polynomial, and let
$n(F)=\sup\{i\in\mathbf{Z}_{\ge 0} : F \bmod p^i \text{ is multiplication by a scalar}\}$.
Then the $GL(2,\mathbf{Z}_p)$ conjugacy class of $F$ is uniquely determined by $f(x)$ and $n(F)$.
If $n(F)$ is infinite, then $F$ is a scalar matrix and therefore central.
If $n(F)$ is finite then there exists a unique integer $\lambda\in\mathbf{Z}$, with $0\le\lambda\le p^{n(F)}-1$, such that $F$ is conjugate to $\begin{pmatrix} \lambda&0\\ 0&\lambda \end{pmatrix}+p^{n(F)}\begin{pmatrix} 0&-a_0\\ 1&-a_1 \end{pmatrix}$.
Here $a_0$ and $a_1$ are the constant and linear term of the polynomial $f_0(x):=p^{-2n(F)}f(p^{n(F)}x+\lambda)$, which has coefficients in $\mathbf{Z}_p$.
We have that $p^{n(F)}$ is the index of the ring $\mathbf{Z}_p[F]$ inside the ring $R_F:=\mathbf{Q}_p[F]\cap M(2, \mathbf{Z}_p)$.
All rings are viewed as subrings of $M(2,\mathbf{Q}_p)$. We will sometime think of $F$ and of elements of $R_F$ as endomorphisms of the standard lattice $\mathbf{Z}_p^2$ inside $\mathbf{Q}_p^2$.
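To make the invariants concrete, here is a small integer-arithmetic sketch (p, λ, n(F), a₀, a₁ are arbitrary choices; F is built directly in the canonical form of the statement, and the invariants are then recovered from it):

```python
def n_of_F(F, p, max_power=20):
    """Largest i such that F acts as a scalar mod p^i (2x2 integer matrices)."""
    (a, b), (c, d) = F
    for i in range(max_power):
        q = p ** (i + 1)
        if b % q or c % q or (a - d) % q:
            return i
    return max_power

# Arbitrary test data: build F in the canonical form of the statement.
p, lam, n, a1, a0 = 5, 3, 2, 4, 7
s = p ** n
F = [[lam, -s * a0],
     [s,   lam - s * a1]]  # lam*I + p^n * companion(x^2 + a1*x + a0)

# Recover the invariants: n(F), and the coefficients of
# f0(x) = p^(-2n) * f(p^n * x + lam), where f is the char. polynomial of F.
nF = n_of_F(F, p)
tr = F[0][0] + F[1][1]
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
a1_rec, r1 = divmod(2 * lam - tr, s)                     # linear coefficient of f0
a0_rec, r0 = divmod(lam * lam - tr * lam + det, s * s)   # constant coefficient of f0
print(nF, a1_rec, a0_rec, r1, r0)
```

The zero remainders illustrate the claim that $f_0$ has coefficients in $\mathbf{Z}_p$.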
Proof: if $n(F)$ is infinite then there is not much to prove, therefore we assume $n(F)$ finite, and $F$ not central. The ring $R_F$ as defined above contains $\mathbf{Z}_p[F]$ with finite index,
since they both are finite free $\mathbf{Z}_p$-modules of rank two. This is clear for $\mathbf{Z}_p[F]$, since $f(x)$ is the minimal polynomial of $F$, since $F$ is not central. For $R_F$ it follows
from the fact that $\mathbf{Q}_p[F]\cap M(2, \mathbf{Z}_p)$ is open, compact, and non-empty in $\mathbf{Q}_p[F]$, which has rank two over $\mathbf{Q}_p$, since $F$ is not central.
The ring $R_F$ has a $\mathbf{Z}_p$-basis of the form $(1, F')$ where $F'=(a+bF)/p^{h}$, for some $a,b\in\mathbf{Z}_p$ not both divisible by $p$, and where $p^h$, with $h\ge 0$, is the index of $\mathbf{Z}_p[F]$ in $R_F$. The $p$-adic integer $b$ is a unit, for otherwise $p^{h-1}F'-b'F=a/p$, with $b'=b/p\in\mathbf{Z}_p$, would belong to $R_F$, which is not possible since $a/p$ is not a
$p$-adic integer. This shows that the natural action of $F$ on $\mathbf{Z}_p^2/(p^h)$ is multiplication by $-ab^{-1}$ mod $p^h$. Therefore $h\leq n(F)$.
On the other hand, if $\lambda$ is any integer such that $F-\lambda$ is zero mod $p^{n(F)}$, then $F'':=(F-\lambda)/p^{n(F)}$ is an element of $R_F$, since $F-\lambda$ commutes with $F$ and it is
divisible by $p^{n(F)}$ in $M(2,\mathbf{Z}_p)$. Therefore $n(F)\leq h$. Thus $n(F)=h$ and $(1, F'')$ is a $\mathbf{Z}_p$-basis of $R_F$ (since it spans a lattice of the correct index).
Now, by the maximality of $n(F)$ we see that $F''$ does not act via scalar multiplication on $\mathbf{Z}_p^2/p$. This implies that there is a $\mathbf{Z}_p$-basis $(e_1, e_2)$ of $\mathbf{Z}_p^2$
such that $F''(\mathbf{Z}_p\cdot e_1)\not\equiv \mathbf{Z}_p e_1$ mod $p$. It follows that $(e_1, F''(e_1))$ is also a $\mathbf{Z}_p$-basis of $\mathbf{Z}_p^2$.
With respect to this basis the action of $F''$ is given by a matrix of the form $\begin{pmatrix} 0&-a_0\\ 1&-a_1 \end{pmatrix}$, where $a_0$ and $a_1$ are the constant and the linear term of the
characteristic polynomial of $F''$, which is $f_0(x):=p^{-2n(F)}f(p^{n(F)}x+\lambda)$ and has coefficients in $\mathbf{Z}_p$. By picking $\lambda$ in the range $0,\ldots,p^{n(F)}-1$, we see that the
action of $F$ with respect to the basis $(e_1, F''(e_1))$ is that given by the statement.
Notice that this shows that $\mathbf{Z}_p^2$ is a free $R_F$-module of rank one, and classifying the action of $F$ on $\mathbf{Z}_p^2$ is roughly equivalent to finding $R_F$. I was interested
exactly in this in the context of Tate modules of elliptic curves over finite fields ($F=$Frobenius). Probably there is a more conceptual/simpler proof. I would be interested to hear what you get in
higher dimension. It won't be that easy, I expect. What makes this case simple is that orders of $\mathbf{Q}_p[F]$ containing $F$ are classified by the index with which $\mathbf{Z}_p[F]$ sits in
Very nice answer!
Igor Rivin Mar 6 '12 at 21:32
The old trick of putting backticks ` ` around broken latex worked.
j.c. Mar 6 '12 at 21:53
For $n=3$, the similarity problem is solved in the paper Similarity classes for $3\times 3$ matrices over a principal local ring by Avni, Onn, Vaserstein and myself (see imsc.res.in/~amri/summaries.html#aopv). An argument to show that the general problem includes the matrix pair problem and is therefore wild is found in S. V. Nagornyi, Complex representations of the general linear group of degree three modulo a power of a prime. Zap. Naučn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI), 75:143–150, 197–198, 1978.
Amritanshu Prasad Mar 8 '12 at 3:48
In your paper you mention that Appelgate-Onishi solve the conjugacy problem in $SL_n(\mathbf Z_p)$, at least algorithmically. I imagine the same should therefore be true for $GL_n(\mathbf Z_p)$
(although this doesn't seem compatible with what Vytas says below). If so, somehow conjugacy is much worse in $M_n(\mathbf Z_p)$ than in $GL_n(\mathbf Z_p)$...
fherzig Mar 8 '12 at 14:06
Thanks! I was wrong: if I understand correctly, even the matrix pair problem is solvable algorithmically in the sense that if you are given a pair $(A,B)$ of $n \times n$ matrices you can find a
canonical form which uniquely characterises the conjugacy class. (See V. V. Sergeichuk, Canonical matrices for linear matrix problems, 2000 and also S. Friedland, Simultaneous Similarity of
Matrices, 1983.) So it's possible that there's an algorithm that decides whether any two given elements of $M_n(\mathbf Z_p)$ are conjugate, but one can't hope for a description of all conjugacy classes.
fherzig Mar 17 '12 at 21:17
The answer to this question would also determine the conjugacy classes in $GL(n, \mathbb Z_p)$. I think this is known to be a "wild" question for $n>2$, i.e. there is no hope to find an answer. I am not sure about $n=2$. Have you tried to work this case out?
Hi Vytas, do you have a reference for the wildness?
fherzig Mar 7 '12 at 2:03
Linear Equations in 4 variables..
September 25th 2007, 05:14 PM #1
Hey guys, I'm having a lot of trouble with this problem:
5x + 4y = 29
y + z - w = -2
5x + z = 23
y - z + w = 4
Yeah, I don't even know where to start on this one, so any kind of help would be greatly appreciated.
There are so many ways to attack a problem like this. Tell us what method you must use to solve it.
oh. well in that case. you are to use the equations to try and eliminate some variables so you can solve for others, similar to what you would do if it were just 2 simultaneous equations (you can solve such systems, right?).
for example, note that we can solve for y by adding the second and fourth equations
yes. in the past I've dealt with solving these types of equations in 2 variables. However, although that is really easy, for some reason I just can't find the relationship between solving a system in only 2 variables and solving a system like this one. I understand that I can solve for y from the 2nd and 4th equations, and the answer would be y = 1, but where would I go from there?
well, one way would be to replace y with 1 in all the other equations, so now you only have 3 variables to worry about.
another way would be to continue like nothing happened, and try to add/subtract other sets of equations to eliminate other variables. sometimes you won't be left with just one variable as you were with y, but you can add/subtract the new equations you form to eliminate other variables until you are left with one.
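The add/subtract strategy described in this thread, applied systematically, is just Gauss-Jordan elimination. A sketch in Python with exact rational arithmetic:

```python
from fractions import Fraction

# Augmented matrix for the system, variables ordered (x, y, z, w):
#   5x + 4y          = 29
#        y + z -  w  = -2
#   5x      + z      = 23
#        y - z +  w  =  4
A = [[Fraction(v) for v in row] for row in [
    [5, 4,  0,  0, 29],
    [0, 1,  1, -1, -2],
    [5, 0,  1,  0, 23],
    [0, 1, -1,  1,  4],
]]

# Gauss-Jordan elimination: the same add/subtract moves done by hand,
# just applied one column at a time.
m = len(A)
for col in range(m):
    pivot = next(r for r in range(col, m) if A[r][col] != 0)
    A[col], A[pivot] = A[pivot], A[col]
    A[col] = [v / A[col][col] for v in A[col]]
    for r in range(m):
        if r != col and A[r][col] != 0:
            A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]

solution = [row[-1] for row in A]  # (x, y, z, w)
print(solution)
```

It confirms y = 1 from adding the second and fourth equations, and then x = 5, z = -2, w = 1.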
st: RE: Standard errors of certain quantities after estimating a system of equations
From Murat Genc <murat.genc@otago.ac.nz>
To "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject st: RE: Standard errors of certain quantities after estimating a system of equations
Date Thu, 7 Mar 2013 09:48:29 +0000
Dear Statalist,
I've estimated a system of 24 equations and I am calculating some quantities using the estimated coefficients. The formula for the quantities I calculate involves some scalars (which are actually average values of some variables that enter the estimated equations) and the estimated coefficients. My goal is to obtain the standard errors of these quantities. So, I use the predictnl command. This works fine, but it takes a long time. (I have 24*24 quantities, and it takes about 20 hours.) The set of commands I use is
forvalue i=1/4 {
    forvalue j=1/4 {
        if `i'==`j' {
            nlcom e`i'_`j': -1+(1/w`i'bar)*((pdfp`i'bar*coeflnp`j'eq`i'*(w`i'hatbar-[w`i']pdfp`i'*pdfp`i'bar)/cdfp`i'bar)+cdfp`i'bar*([w`i']shlnp`j'`i'-[w`i']lfp`i'*w`j'bar)-[w`i']pdfp`i'*pdfp`i'bar*coeflnp`j'eq`i'*pr`i'bar), post
        }
        else {
            nlcom e`i'_`j': (1/w`i'bar)*((pdfp`i'bar*coeflnp`j'eq`i'*(w`i'hatbar-[w`i']pdfp`i'*pdfp`i'bar)/cdfp`i'bar)+cdfp`i'bar*([w`i']shlnp`j'`i'-[w`i']lfp`i'*w`j'bar)-[w`i']pdfp`i'*pdfp`i'bar*coeflnp`j'eq`i'*pr`i'bar), post
        }
    }
}
My question is whether there is a better way of doing this so that it won't take such a long time.
I saw the question about "Using Delta Method with Estimated Marginal Effects from a Tobit Model" yesterday. The solution suggested looked very neat, and I tried to do something similar but it didn't work. The solution suggested by Maarten there used the margins command in combination with nlcom. Based on that I tried the following

margins, dydx(*) predict(equation(w1)) post
nlcom e1_2: -1+(1/w1bar)*((pdfp1bar*coeflnp2eq1*(w1hatbar-[w1]pdfp1*pdfp1bar)/cdfp1bar)+cdfp1bar*([w1]shlnp21-[w1]lfp1*w2bar)-[w1]pdfp1*pdfp1bar*coeflnp2eq1*pr1bar)

to do the calculation for one case. But when I try it I get an error message that says "option nlcom not allowed."
I'll appreciate any suggestion that will make the calculations faster.
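For intuition about the cost of each call: the delta method that nlcom and predictnl implement is a gradient plus a quadratic form, Var(g) ≈ ∇g' V ∇g. A generic sketch in Python (a toy ratio of coefficients with a made-up covariance matrix, not the model above):

```python
import math

def delta_method_se(g, theta, V, h=1e-6):
    """Delta-method standard error of g(theta): sqrt(grad' V grad),
    with the gradient taken by central finite differences."""
    k = len(theta)
    grad = []
    for i in range(k):
        up, dn = list(theta), list(theta)
        up[i] += h
        dn[i] -= h
        grad.append((g(up) - g(dn)) / (2 * h))
    var = sum(grad[i] * V[i][j] * grad[j] for i in range(k) for j in range(k))
    return math.sqrt(var)

# Toy example: g = b1/b2 with a hypothetical 2x2 coefficient covariance.
theta = [2.0, 4.0]
V = [[0.04, 0.01],
     [0.01, 0.09]]
se = delta_method_se(lambda b: b[0] / b[1], theta, V)
print(round(se, 6))
```

Each call redoes a gradient like this over the full coefficient vector, which is one reason that computing several expressions in a single call, where feasible, tends to be faster than 24*24 separate calls.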
Mplus Discussion >> Chi-Square Diff Testing Using the Satorra-Bentler Scaled Chi-Square
Leonard Burns posted on Friday, May 17, 2002 - 9:12 am
I have a copy of the instructions from your web site for chi-square difference testing using the Satorra-Bentler Scaled Chi-Square. I would like to perform such tests with nested models in the
context of CFA. I am currently using EQS version 6.0. With this version, the robust estimation option provides the SB Scaled Chi-Square along with the robust CFI and the robust RMSEA measures of fit.
This is my question. Is the MLM procedure in M-Plus 2 (maximum likelihood parameter estimates with robust standard errors and a mean-adjusted chi-square test statistic) the same estimation procedure
as the robust procedure in EQS 6.0?
If this is so, then I can follow the instructions in the paper on your WEB site to calculate the
correct values for the SB Scaled Chi Square for comparing nested models.
Thanks for your feedback.
Len Burns
Linda K. Muthen posted on Friday, May 17, 2002 - 10:02 am
MLM is the Satorra-Bentler chi-square test statistic, so it stands to reason that the directions would work for the EQS Satorra-Bentler chi-square as well as MLM.
Wim Beyers posted on Monday, September 30, 2002 - 7:15 am
Hi there,
I'm just asking a question here, because I did not receive any helpfull feedback on SEMNET. And here I find a specific forum on the topic, so...
I calculated the Satorra-Bentler Scaled Chi-Square Difference Test by hand several times, but I sometimes come out with results that seem strange to me...
For instance (n = 600):
Comparison Model (79 df):
- nonscaled chisquare = 249.18
- scaled chisquare = 220.42
Nested Model (82 df):
- nonscaled chisquare = 337.18
- scaled chisquare = 308.24
So, a traditional chi-square diff test favours the Comparison Model, with chi-square-diff = 88.00 (df = 3). OK
A SBS scaled diff test also favours the comparison Model, but following the calculations on http://www.statmodel.com/chidiff.html, the test-value is 675.14. It's so huge I really hesitate to report
it. Am I doing something wrong?
Thanks for all help,
Wim Beyers
bmuthen posted on Monday, September 30, 2002 - 9:42 am
It looks like you have done the calculations correctly according to our web site. The test value of 675 is certainly much larger than 88, but the p values are both very small and therefore similar so
this could all be ok.
Two other thoughts:
You might have run into a local optimum in one of your 4 runs.
You can run this in Mplus to get the scaling correction factors directly to see if they agree.
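The two-step correction from the chidiff page is easy to script; applied to the numbers above it reproduces the surprisingly large value (a sketch):

```python
def sb_scaled_diff(T0, d0, c0, T1, d1, c1):
    """Satorra-Bentler scaled chi-square difference test.
    T0, d0, c0: ML (unscaled) chi-square, df, and scaling correction factor
    of the nested model; T1, d1, c1: the same for the comparison model."""
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)
    return (T0 - T1) / cd, d0 - d1

# Wim's models: each correction factor is (unscaled ML chi2) / (scaled chi2).
c0 = 337.18 / 308.24   # nested model, 82 df
c1 = 249.18 / 220.42   # comparison model, 79 df
TRd, df = sb_scaled_diff(337.18, 82, c0, 249.18, 79, c1)
print(round(TRd, 2), df)
```

The scaled difference blows up because the two products d*c are nearly equal, making the denominator cd tiny — the failure of asymptotics noted above.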
Kaja LeWinn posted on Thursday, June 23, 2005 - 3:05 pm
We are trying to use the Satorra-Bentler scaled chi-squared test using the instructions on this site. We have found that when we run our comparison model in MLM and ML we get different degrees of
freedom (the same model, using two different estimation techniques gives us two different dfs). This does not happen in the nested model. We are confused by this. One characteristic of our model that
might be of significance is that we are looking for measurement invariance using the grouping command.
We were hoping this could be explained to us, as well as how we should approach the equation under point 3 of the online instructions (i.e. which degrees of freedom to we choose, those from the ML or
MLM model). Thanks for your help,
Linda K. Muthen posted on Friday, June 24, 2005 - 1:48 am
You need to send your outputs, data, and license number to support@statmodel.com. What you are describing has not been seen by us. There may be something else going on.
Leif Edvard Aaroe posted on Friday, January 06, 2006 - 2:59 pm
Dear colleagues,
I need to test out differences between nested models, and wonder if I can use the Satorra-Bentler scaled chi square in this particular case:
I have a model with a set of predictors that are all regarded as metric (although a couple of them are dichotomies). Some of them could be analysed as latent variables, but I have chosen to start out
using simple sumscores. It is therefore simply a path analysis that I am doing. I have two outcome variables, the most dependent one is a dichotomy. The other one, which is also regarded as a
mediator, is an ordered categorical variable. Since the sample is based on clusters, I am using the cluster option in order to adjust for the design effect.
I understand that the S-B formula is based on ML and MLM estimators. The programme, however, does not permit ML and MLM estimators with this analysis. It only provides WLSMV estimation.
Can my commands be changed in such a way that it allows for ML and MLM estimation? Or is there an alternative procedure that can be used for testing the difference between models?
Most grateful for any suggestions.
Linda K. Muthen posted on Friday, January 06, 2006 - 3:43 pm
MLM is the Satorra-Bentler chi-square. It is available only when all outcomes are continuous.
Leif Edvard Aaroe posted on Friday, January 06, 2006 - 9:15 pm
Thanks for quick response.
How do I test differences between nested path analysis models when I have categorical outcome variables and clustered data? Is this described in the Mplus manual or anywhere else?
If such testing is not possible: What procedure would you recommend for identifying a "good" model?
Linda K. Muthen posted on Saturday, January 07, 2006 - 6:53 am
In Chapter 15 under the ESTIMATOR option, there is a table that shows the estimators available for TYPE=COMPLEX and TYPE=TWOLEVEL. I'm not sure which you want to use for your clustered data. With the
WLS estimator, difference testing is done in the usual way. With WLSM and MLR, you need to use a scaling correction factor which is given in the output and how to use it is shown on the website. With
WLSMV, use the DIFFTEST option. See a description in the Mplus User's Guide. When you do not obtain chi-square, you can use -2 times the loglikelihood difference.
Sophie van der SLuis posted on Thursday, February 02, 2006 - 5:52 am
I'm performing a CFA with type=complex as independency assumption is violated in my data set [gathered within families].
I used the Satorra-Bentler scales chi-square corrections for MLR as described on the Mplus website.
My restricted model has an unscaled chi-square of 19.696 with df=13, and scaling correction factor of 1.123.
My less restricted model has an unscaled chi-square of 10.704 with df=12, and scaling correction factor of 1.213.
Calculating the diff test scaling correction: cd = (13*1.123 - 12*1.213)/(13 - 12) = 0.043.
The corrected chi-square diff test would then be: TRd = (19.696 - 10.704)/0.043 = 209.1 with 1 df.
I do not think this is correct.
Can someone help me out?
bmuthen posted on Friday, February 03, 2006 - 9:47 am
It looks like you have done this correctly. The asymptotics of this correction does not always work out in small samples as has been noted by the authors, although note that the p value will be zero
for both uncorrected and corrected chi-square difference testing. In Mplus Version 4 you will also have access to a Wald test which avoids these problems.
Sophie van der SLuis posted on Monday, February 06, 2006 - 2:23 am
Thank you for the swift response.
I'll see if I can lay my hands on Mplus 4.
I presume however that I can also report the CFI, RMSEA, standardized residuals etc. to substantiate the improvement in model fit?
Kinds regards
Anna Kryziek posted on Monday, February 06, 2006 - 7:50 am
Once you have computed the Satorra-Bentler scaled chi-square difference test, how can you determine whether the two models are significantly different or not?
Linda K. Muthen posted on Monday, February 06, 2006 - 8:44 am
The other fits measures should be okay.
Linda K. Muthen posted on Monday, February 06, 2006 - 8:45 am
You compare it to the chi-square table value for the number of degrees of freedom in the difference test.
Anna Kryziek posted on Monday, February 06, 2006 - 10:08 am
Thank you!
Chris Aberson posted on Monday, February 20, 2006 - 2:16 pm
I've done a series of these calculations and I get some CD values that are negative (d0*c0 is less than d1*c1).
What should I do in this case? Use the absolute value of the CD?
Linda K. Muthen posted on Monday, February 20, 2006 - 5:43 pm
In this case, the difference testing is not working and should not be interpreted.
Chris Aberson posted on Monday, February 20, 2006 - 6:22 pm
What is my option here?
Just a straight test based on the ML values?
Also, to be clear -- if the test is not working -- is that attributable to some data characteristics or user error? I'm confident I'm doing the test correctly (as is comes out positive for about 50%
of my comparisons). Thanks!
bmuthen posted on Monday, February 20, 2006 - 6:30 pm
The asymptotics of the test fails to kick in - this has been observed among the creators of the test (Satorra-Bentler) in several applications. No user error, and no fault of the data (apart from
perhaps not having a large enough n). An alternative is to use a Wald test which is part of the upcoming Mplus Version 4 (see new announcement on the home page). This test is robust to the same type
of violations as MLR/MLM.
Chris Aberson posted on Tuesday, February 21, 2006 - 9:09 am
Would the Wald result here be similar to what EQS calls the Lagrange Multiplier test?
For example if I were to take my model and constrain a covariance to one - would the Lagrange value associated with freeing that parameter give me a test of the difference between a comparison model
and a model with that constraint (given that is the only constraint I'm testing?
Thanks again - great comments.
bmuthen posted on Tuesday, February 21, 2006 - 3:16 pm
No, Wald testing pertains to restrictions on a given H0 model, whereas LM tests (= Mod Ind) pertains to relaxing restrictions on a given H0 model.
Yes. The Wald test, however, would make it unnecessary for you to run that second run with your covariance restricted.
jennybr posted on Monday, February 27, 2006 - 6:56 pm
I want to compute a chi-square difference test, however, my two df's are the same. Does this mean that my models are not nested?
Linda K. Muthen posted on Monday, February 27, 2006 - 7:26 pm
What estimator are you using?
anna kryzicek posted on Wednesday, March 22, 2006 - 10:01 am
There was some discussion above about the conditions under which the Satorra-Bentler chi-square test fails. Could you direct me to papers that discuss this further?
Linda K. Muthen posted on Wednesday, March 22, 2006 - 12:32 pm
I think Peter Bentler has written about this. I don't know of the exact references. If you can't find it doing a literature search, you can contact Peter Bentler.
henry nyabuto posted on Monday, July 17, 2006 - 12:27 pm
I have run a multi-group (4 groups) CFA using LISREL. The output indicates that the factor loadings are equivalent across the four groups - only one set of loadings in the output. However, a
chi-square difference test (Satorra-Bentler chi-square) is significant - constrained model minus unconstrained model. Does this sometimes happen? If so, what is the explanation and how does one proceed?
Linda K. Muthen posted on Monday, July 17, 2006 - 1:20 pm
This would indicate that constraining the factor loadings to be equal across groups significantly worsens the fit of the model. There must be some factor loadings that are not invariant across
groups. You can read about testing for measurement invariance in Chapter 13 of the Mplus User's Guide which is on the website. It is at the end of the multiple group discussion.
Christine McWayne posted on Thursday, August 10, 2006 - 1:20 pm
I am searching for the formula for calculating degrees of freedom when using WLSMV, but cannot find the correct info in the latest User's Guide or in the Technical Appendices. I am using MPlus
version 3.0 for this CFA. Can I interpret the degrees of freedom as accurate in the output?
Linda K. Muthen posted on Thursday, August 10, 2006 - 2:57 pm
It is formula 110 in Technical Appendix 4 on the website. The only interpretable value for WLSMV is the p-value for chi-square. The degrees of freedom are not calculated in the regular way.
Difference testing of nested models can be carried out using the DIFFTEST option.
Jason Prenoveau posted on Thursday, July 24, 2008 - 9:39 am
I know that when using the Satorra-Bentler chi-square (MLM), you must calculate a corrected chi-square difference test. It appears from the discussion above that this SAME procedure should be used
for the Yuan-Bentler T2 test statistic (MLR). Is this true, or when using MLR (Yuan-Bentler T2 test statistic) is it possible to just use a regular difference test?
Thank you for your help!
Linda K. Muthen posted on Thursday, July 24, 2008 - 10:21 am
MLM, MLR, and WLSM all need to use a scaling correction factor for difference testing.
M C Schilpzand posted on Wednesday, October 22, 2008 - 2:11 pm
Dear colleagues,
I would like to use the Chi-square difference test but I am unsure if my two models are nested. I would like to compare a CFA with conflict as 1 scale (items 1,2,3,4,5,6,7,8) and a CFA with conflict
as 2 scales (task conflict items 3,6,7 and relationship conflict items 1,2,5). The last model has 1 df less. Could I consider these models nested? If so, could I use the Satorra-Bentler Scaled
Difference Test?
Thanks so much!
P.S. great forum, I have learned a lot from it!
Linda K. Muthen posted on Thursday, October 23, 2008 - 9:30 am
If you don't have the same set of observed dependent variables, the models are not nested.
M C Schilpzand posted on Thursday, October 23, 2008 - 1:30 pm
Thanks for your quick reply.
I think I understand. So if I would make a second order factor (conflict) of the two first order factors (task conflict and relationship conflict)and compare this to a CFA of one factor (conflict)
the models would be nested and thus I can use the S-B Scaled Difference Test?
Linda K. Muthen posted on Thursday, October 23, 2008 - 5:49 pm
A second-order factor model with two indicators is not identified. To be nested, each model would need to include the same set of dependent variables. I don't think if identified, your suggestion has
Janine Neuhaus posted on Wednesday, January 14, 2009 - 2:46 am
I ran a multilevel CFA with continuous variables. I would like to compare two models: (1) 2 factors within and 1 factor between vs. (2) 2 factors within and 2 factors between. As much as I understand,
Model 2 is not nested within model 1, so I can't use the S-B Scaled Difference Test. Regarding my chi-square value and the fit indices I cannot decide which model fits better, because they are nearly
the same. My question:
Is there another way to test if they are different? If not, can I conclude both models are equivalent although I couldn't really prove it?
Thanks very much!
Bengt O. Muthen posted on Wednesday, January 14, 2009 - 8:05 am
BIC could be helpful to balance parsimony against improving the loglikelihood. If the fit information is about the same, I would decide based on the interpretation and usefulness of the model; I
wouldn't be so concerned about testing. If the model with 2 between factors doesn't add interpretational value, parsimony speaks for the 1-factor model on between.
Janine Neuhaus posted on Thursday, January 15, 2009 - 2:21 am
Thank you very much for your advice - very helpful!
Sunny Liu posted on Monday, March 09, 2009 - 12:28 am
I am a little confused with the Satorra-Bentler Scaled Chi-Square.
For example,
Baseline model:
Chi-square Test of Model Fit:
Value: 2
Degree of Freedom: 2
Scaling Correction Factor for MLR: 2
Model 1:
Chi-square Test of Model Fit:
Value: 1
Degree of Freedom: 1
Scaling Correction Factor for MLR: 1
# Compute the difference test scaling correction where d0 is the degrees of freedom in the nested model and d1 is the degrees of freedom in the comparison model.
cd = (d0 * c0 - d1*c1)/(d0 - d1)
= (2*2 - 1*1)/(2-1) = 3
Note that the ML chi-square is equal to the MLM or MLR chi-square times the scaling correction factor.
# Compute the Satorra-Bentler scaled chi-square difference test (TRd) as follows:
TRd = (T0 - T1)/cd
= (2*2-1*1)/3 = 1
The X-square difference test is X-square equal to 1 with 1 df, then it is not significant.
Or should it be
TRd = (T0 - T1)/cd
= (2-1)/3 = 1/3
The X-square difference test is X-square equal to 1/3 with 1 df, then it is not significant.
Which way is correct?
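A small script makes the arithmetic explicit. Per the note above, the ML chi-square equals the MLM or MLR chi-square times its scaling correction factor, so if T0 and T1 in TRd = (T0 - T1)/cd are read as the rescaled (ML) chi-squares, the first of the two computations follows. This is an illustrative sketch using the round numbers from the question, not Mplus output:

```python
# Illustrative round numbers from the question above (not real Mplus output)
T0_mlr, d0, c0 = 2.0, 2, 2.0   # nested model: MLR chi-square, df, scaling factor
T1_mlr, d1, c1 = 1.0, 1, 1.0   # comparison model

# Difference-test scaling correction
cd = (d0 * c0 - d1 * c1) / (d0 - d1)

# Rescale each MLR chi-square to the ML metric, then take the difference
T0 = T0_mlr * c0               # ML chi-square of the nested model
T1 = T1_mlr * c1               # ML chi-square of the comparison model
TRd = (T0 - T1) / cd           # scaled difference statistic
df_diff = d0 - d1              # tested on d0 - d1 degrees of freedom
```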
Andrea Vocino posted on Monday, November 09, 2009 - 5:28 pm
Is the S-B chi^2 in Mplus going to be changed according to the latest estimation provided by Satorra and Bentler (2009)? see e.g.,
Bengt O. Muthen posted on Monday, November 09, 2009 - 5:44 pm
We have on our list to include that twist in our next version.
Christoph Weber posted on Friday, January 29, 2010 - 6:57 am
Dear Drs. Muthen,
I'm doing a chi-square difference test for MLR. I get a negative cd value (=-.016). I think this is due to the complex model (M1: df=1085, M2: df=1086; M1: scaling correction=1.070, M2: 1.069). It follows
that the scaled chi-square difference is also negative (-240,639). Is this result interpretable?
How should I interpret it?
Do I have to use the modulus of cd?
Or does it mean, that the model with df=1086 fits better??
best regards
Christoph Weber
Linda K. Muthen posted on Friday, January 29, 2010 - 8:45 am
The result is not interpretable. This is a flaw with the method.
Michael Spaeth posted on Friday, February 05, 2010 - 7:20 am
I have the same problems with MLR-testing as described above (negative chi-square). In this situation, is it o.k. to use the Wald-test in combination with MLR and COMPLEX/CLUSTER in order to test
some simple 1df constraints?
Linda K. Muthen posted on Friday, February 05, 2010 - 9:09 am
kathrin weidacker posted on Sunday, September 19, 2010 - 9:36 am
Dear Drs. Muthen,
I am trying the formula for positive S-B difference statistic values
in invariance testing across groups (3). Since the method requires
estimating an M10 model, with the M0 output as starting values, I do
not know which starting values to use in the case of several groups.
Can you help me further, please?
Bengt O. Muthen posted on Sunday, September 19, 2010 - 10:04 am
You want to study examples 3 and 4 and their Mplus scripts at
You would use the M0 output for each group to form the multi-group M10 start values.
kathrin weidacker posted on Sunday, September 19, 2010 - 6:07 pm
Thank you for the fast response! I will check those notes.
Samantha Anders posted on Wednesday, October 20, 2010 - 9:51 am
Hi there -
I am trying to run a fully unconstrained multiple group CFA model to compare to my constrained model, but I am not able to figure out the syntax for this. For the constrained model, I am using the
syntax below. What do I need to add to make it unconstrained? Thank you!
Title: CFA; low.hi; current; King
DATA: FILE IS low.hi.current.csv;
VARIABLE: NAMES ARE b1-b5 c1-c2 c3-c7 d1-d5 g;
CATEGORICAL ARE b1-b5 c1-c2 c3-c7 d1-d5;
GROUPING IS g (1 = low 2 = high);
MODEL: f1 BY b1-b5;
f2 BY c1-c2;
f3 BY c3-c7;
f4 BY d1-d5;
Linda K. Muthen posted on Wednesday, October 20, 2010 - 10:03 am
See Slides 169 and 170 of the Topic 2 course handout on the website.
Samantha Anders posted on Wednesday, October 20, 2010 - 12:54 pm
Thank you for your prompt response! I realize this is probably a really easy question, but the syntax I posted above - is this for a fully unconstrained model and then syntax on slides 169 and 170
for a fully constrained model?
Thank you so much!
Linda K. Muthen posted on Wednesday, October 20, 2010 - 3:07 pm
The syntax above is for a model with factor loadings and intercepts constrained to be equal across groups. This is the Mplus default. See Chapter 14 of the user's guide for a full description of
multiple group analysis in Mplus.
On Slides 169 and 170, the first syntax is for a fully constrained model. The second is for an unconstrained model. And the third is for a partially constrained model.
Samantha Anders posted on Saturday, October 23, 2010 - 2:45 pm
Thanks again. We got a bit closer, but now are running into this problem -
When the fully unconstrained model is run, we're getting error messages like this:
THE CONDITION NUMBER IS -0.244D-17.
We're not able to get fit statistics because the parameter estimates couldn't be computed. When I check the output, I can see that, in this model, parameter 81 is d5. When I delete the line
of code freeing d5, the model will run. I'm not sure what the condition number is yet. Can you tell us what the condition number indicates?
Linda K. Muthen posted on Saturday, October 23, 2010 - 3:29 pm
The condition number is related to identification. Please send the full output and your license number to support@statmodel.com.
Jak posted on Tuesday, November 02, 2010 - 4:02 am
Dear Linda or Bengt,
Is the method described in Webnote 12 ("Computing the Strictly Positive
Satorra-Bentler Chi-Square Test in Mplus") also applicable to the Yuan-Bentler T2 statistic obtained when using MLR estimation?
Thanks in advance,
Tihomir Asparouhov posted on Tuesday, November 02, 2010 - 8:45 am
Ian Clara posted on Tuesday, December 07, 2010 - 6:39 am
Good morning. I am trying to conduct a chi-square difference test using the WLSMV estimation and type=complex. I have used the difftest option and for the two models that I want to test it says that
they are not nested (although there is a single path that is different -- set to zero in one model and freely estimated in the other). I wanted to try to conduct the chi-square difference test by
hand, but I can't determine how to get the required output in Mplus v5. How can I obtain the scaling correction factors, or the log likelihood?
Warm regards,
Linda K. Muthen posted on Tuesday, December 07, 2010 - 8:00 am
Difference testing using WLSMV cannot be done by hand. If you use WLSM, you will obtain a scaling correction factor.
Fred Mueller posted on Wednesday, February 22, 2012 - 6:08 am
Dear Linda and Bengt,
I would like to compare two nested models and I am using MLR as an estimator. Instead of using the (Satorra-Bentler Scaled) Chi-Square difference test, I would like to use the small difference in
fit test by MacCallum, Browne, and Cai (2006, Psych Methods).
For this test, I also have to indicate the Chi-Square of both models. How do I have to proceed? Can I just multiply the (regular) Chi-Square with the scaling correction factor to get the correct
Chi-Square or is it more complicated?
Thank you very much in advance!
Bengt O. Muthen posted on Wednesday, February 22, 2012 - 1:51 pm
I don't know if the MacCallum et al approach needs the usual, uncorrected, chi-square or can use the MLR chi-square. You may want to approach the authors with that question.
Fred Mueller posted on Wednesday, February 22, 2012 - 5:22 pm
Thank you very much for your quick reply!
Malte Jansen posted on Monday, March 19, 2012 - 6:23 am
Dear Mplus Team,
i am comparing 2 measurement models for 18 items using MLR. It would be great if you could help me with some answers:
The first model is a one-factor model where all items are explained by the same factor and the second model is a three-factor model where each factor explains 6 items.
(1)Is it right to say that these models are nested as the 3-factor-model with the correlations between the factors set to 1 would equal the 1-factor-model?
(2) When I compute the S-B scaled chi-square difference test, the result is negative. Therefore I tried computing the strictly positive S-B scaled chi-square difference test. Thus I tried to estimate
the stricter M0 model first, which would be the 3-factor model with between-factor correlations set to one. However, the estimation did not converge. What am I doing wrong? Here's the input:
factor1 by f10-f16;
factor2 by f20-f26;
factor3 by f30-f36;
factor1 WITH factor2 @1;
factor2 WITH factor3 @1;
factor3 WITH factor1 @1;
Best regards and thank you in advance.
Bengt O. Muthen posted on Tuesday, March 20, 2012 - 1:57 pm
The M0 model is just the one-factor model, right?
Malte Jansen posted on Tuesday, March 27, 2012 - 5:33 am
Yes, but from the examples in the Webnote describing the strictly positive S-B Test i thought in order to compare the models i would need the same notation in the syntax (i.e. 3 correlations set to
one instead of just "factor1 by f10-f36") so that i can save the start values for the M10 Model and then free the correlations between the factors.
Tihomir Asparouhov posted on Tuesday, March 27, 2012 - 6:57 pm
If you are using the MLR estimator with categorical data, you should use the unscaled likelihood ratio test. The S-B is designed to be used for the case when you are treating the variables as continuous.
Strictly speaking there is a bit of a problem in using LRT for this purpose (overfactoring) see
Hayashi, K., Bentler, P. M., & Yuan, K.-H. (2007). On the likelihood ratio test for the number of factors in exploratory factor analysis.
You might want to consider using BIC as well.
Malte Jansen posted on Wednesday, March 28, 2012 - 8:17 am
Dear Tihomir,
thanks for your detailed reply. I am using MLR because the complex sampling option (type=complex) requires it. Aside from that, I am not treating the data (5-point scales) as categorical (yet).
Best regards,
Linda K. Muthen posted on Wednesday, March 28, 2012 - 10:26 am
We need to see the relevant files and your license number at support@statmodel.com to help you further.
Johan Korhonen posted on Monday, December 03, 2012 - 5:33 am
I have been working with ESEM models with MLR estimator. To compare nested models I have been using S-B x^2 and it has worked fine until I got a negative estimate when comparing two models. I found
your web note on how to compute the strictly positive S-B x^2 but when I tried to run the M0 model with the svalues command to get the starting values for the M10 model I got the following warning
text in Mplus:
*** WARNING in MODEL command
The SVALUES option in the OUTPUT command is not available with the use of
EFA factors (ESEM). Request for SVALUES will be ignored.
Do you have any advice for me how to proceed?
Best regards
Tihomir Asparouhov posted on Monday, December 03, 2012 - 3:09 pm
You can just use the final results in the M0 output and put these values (manually) as starting values for the M10 run. The SVALUES option is just a convenience feature that does this for you but you can
do it manually as well.
The strictly positive S-B chi-square is not very easy to do with ESEM. You have to work with the unrotated model. We may eventually update the web note to include a step by step computation for the
strictly positive S-B for ESEM. Consider using Model Test as a simpler alternative.
Johan Korhonen posted on Monday, December 03, 2012 - 10:30 pm
Thank you for your quick response. I think I will use the Model Test command.
Have a nice day
Paula Vagos posted on Friday, December 07, 2012 - 9:20 am
I have been trying to compute the S-B chi-square difference test, but my cd is negative, and so I need to create the M10 model. But I don't understand how to do it.
Can anyone help me?
Linda K. Muthen posted on Sunday, December 09, 2012 - 5:39 pm
Send the outputs from the m0 and m1 analyses along with your attempt at the m10 model. Include your license number.
Sabrina posted on Saturday, April 27, 2013 - 3:04 pm
Hello. I computed the chi-square difference test for 2 nested models and it was negative, so I followed the instructions in WebNote 12, but I received the following error message and can't figure out
what to do next.
THE CONDITION NUMBER IS -0.494D-04.
Linda K. Muthen posted on Sunday, April 28, 2013 - 10:26 am
Please send the output and your license number to support@statmodel.com.
Jacqueline Homel posted on Monday, June 24, 2013 - 3:43 pm
I have a longitudinal cross-lagged path model with four outcomes over three assessment points. Three of the outcomes are continuous and the fourth is a count, so I am using MLR. I also am using the
knownclass method to compare invariance across males and females. My analysis command looks like this:
Type = mixture ;
Algorithm = integration ;
Integration =montecarlo ;
I want to test whether two parameters in the model are equal to each other (although they are invariant across sex), so I constrained them to be equal and compared this to a model where they were
free. However, I get a negative scaled chi-square difference value. I read webnote #12 and tried to estimate model 10, but when I pasted the start values from model 0 in, the new model does not
converge. I'm sure I'm doing something wrong but am not sure what.
Linda K. Muthen posted on Tuesday, June 25, 2013 - 10:21 am
Use MODEL TEST. Label the parameters in the MODEL command.
0 = p1 - p2;
jtw posted on Friday, September 20, 2013 - 5:52 am
Hi there,
I am comparing the fit of nested models (i.e., bifactor, correlated traits, higher-order, uni-dimensional) using difftest. I have clustered data and obtain different conclusions when type=COMPLEX is
used as compared to when I do not adjust for clustering. Specifically, the bifactor model is preferred when I do not account for clustering, whereas the correlated traits model seems to be preferred
when I account for clustering.
It may be helpful to know that I obtain all chi-square difference test statistics when not adjusting for clustering. However, I do not obtain the chi-square difference test statistic for the
correlated traits model when adjusting for clustering (because the chi-square value in this case is actually lower than the value for the bifactor model; i.e., the models are not technically nested).
Also, it is worth noting that there are no real substantive differences with respect to any of the factor loadings in either case; so, the only issue at hand is which model is preferred.
Any recommendations of which results should be reported in this case? Thank you in advance for your time.
Linda K. Muthen posted on Friday, September 20, 2013 - 11:58 am
If you have clustered data, you need to do the analysis taking that into account.
jtw posted on Tuesday, October 08, 2013 - 9:07 am
Hi there,
I understand nested models require two things: 1) more restrictions (higher degrees of freedom) AND 2) worse fit (which can be examined via Tech 5 output).
When a model is more restricted yet has slightly better fit, MPlus does not execute the DIFFTEST and returns the error indicating that the models are not nested. In this specific situation, is the
conclusion to be drawn: 1) the more restricted model (e.g., correlated traits) is preferred over the less restricted model (e.g., bifactor model); or 2) no firm conclusions should be drawn because
the DIFFTEST is not executed. I'm thinking the answer is #1. Am I right? However, if it is #2, what can be done?
Thank you in advance for your time.
Linda K. Muthen posted on Tuesday, October 08, 2013 - 9:40 am
You cannot compare the chi-square values of the estimators that end in V, WLSMV and MLMV. You would need to compare their fitting functions which you will find in TECH5 at the bottom of the first
column. The lower the fitting function the better.
The conditions that you mention are necessary but not sufficient conditions for nesting. Please send the outputs and your license number to support@statmodel.com for further information.
How To Calculate Kb From Pkb
Related documents, manuals and ebooks about How To Calculate Kb From Pkb
calculate pH from [H+]; calculate [H+] from pH; entering values into your calculator; ionic product for water; ... pKa + pKb = 14; turning Ka, Kb, Kw into pKa, pKb, pKw; turning pKa, pKb, pKw into Ka, Kb, Kw; equation for the self-ionisation of water; relationship ...
These equations can be used to calculate 1. Ka given Kb or Kb given Ka 2. pKa given pKb or pKb given pKa iv. a solution is basic if 1.
(Kb > 1, pKb < 0). Conjugate acids (cations) of strong bases are ineffective acids. * Compiled from Appendix 5 Chem 1A, B, C Lab Manual and Zumdahl 6th Ed. The pKa values for organic acids can be
found in Appendix II of Bruice 5th Ed.
... If pKb is given instead of Kb, calculate Kb from pKb (the same way you calculate [H3O+] from pH: Kb = 10^-pKb, because pKb = -log Kb) (if necessary). If Ka of the conjugate acid is given (or available) instead, calculate Kb from KaKb = Kw (for conjugates).
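To make these conversions concrete, here is a short numerical sketch (the ammonia Kb value is the one quoted further down this page; pKw = 14 at 25 °C is assumed):

```python
import math

KW = 1.0e-14                 # ionic product of water at 25 C

def pK(K):
    """pK = -log10(K); works the same way for Ka, Kb, Kw."""
    return -math.log10(K)

def K_from_pK(pk):
    """Inverse conversion: K = 10^(-pK)."""
    return 10 ** (-pk)

Kb = 1.76e-5                 # ammonia (value quoted elsewhere on this page)
pKb = pK(Kb)                 # about 4.75
Ka = KW / Kb                 # Ka of the conjugate acid, from Ka*Kb = Kw
pKa = pK(Ka)                 # about 9.25; note pKa + pKb = pKw = 14
```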
... Kb, pH, pKa, pKb, Kw, pKw. d) sketch pH changes during various types of acid-base titrations. e) describe and explain pH changes of an indicator (use methyl orange and phenolphthalein as
examples) ... Calculate the pH of the following acid solutions a) ...
... Be able to calculate Kb, pKb for the conjugate base of an acid using Kw. 10) Be familiar with how colorimetric acid-base indicators work. 11) Understand the principles of acid-base titrations.
Calculate the pH of a 0.020 M Ba(OH)2(aq) solution. Answer: ... (12) and (3) show that as the Ka increases (and the pKa decreases), the Kb decreases (and the pKb increases). These equations give
quantitative support to the statement “the stronger the acid, ...
Again, we can take the -log of the Kb value and calculate the pKb. NH3 + H2O ⇌ NH4+ + OH-. Kb = 1.76 x 10-5 = [NH4+][OH-]/[NH3] ... Set up an ICE table, fill in the values, calculate Kb or calculate
the missing equilibrium concentration using the Kb. HA + H2O ⇌ H3O+ + A-; A- + H2O ...
Calculate the Kb and pKb for the ethylamine, and find the pK. for its conjugate acid, CH3CH2NH3+1 12. ... What is the Kb and pKb for hydrazine and the pK. of its conjugate acid? 14. Butanoic acid
{butyric acid), ...
Kb = 10^-pKb. Kw = 10^-14 = autoionization constant for water at 25 ˚C. Also, Kw = [H3O+]*[OH-] = Ka*Kb = 10^-14. pH = -log [H3O+]; pOH = -log [OH-] ... Analysis Section: There are five regions in WA +
SB titrations in which to calculate the pH:
pKb = -log(Kb) pKb = 13.9993 ≈ ... Calculate the pH when no NaOH has been added, 20.0 mL NaOH has been added, 50.0 mL NaOH has been added, 75.0 mL NaOH has been added, at equivalence point, and 20.0
mL past equivalence point.
What is the value of pKb for this ... ?? 5. Calculate the pH of 0.050 M Ba(CN)2 solution. Ba(CN)2 is a soluble ionic compound. a) 2.80. b) 2.96. c) 11.04 ans = d. d) 11.20. e) 12.40?? 6. The [OH-] =
1.3 x 10-6 M for a 0.025 M solution of a weak base. Calculate the value of Kb for this weak base ...
Like ammonia, it is a Bronsted base. A 0.10 M solution has a pH of 11.86. Calculate the Kb and pKb for the ethylamine, and find the pKa for its conjugate acid ... Like ammonia, it is a Bronsted base.
A 0.15 M solution has a pH of 10.70. What is the Kb and pKb for hydrazine and the pKa ...
solution. Calculate the pH of this solution at the equivalence point. ...
Given that Ka for acid A is 2.9 X 10-5 and that for acid B is 4.2 X 10-9, calculate Kb values for (a) A- and (b) B-. [ ] 7. ... molar; (a) what is the pH of this solution and (b) what is the pkb of
the conjugate base of HAS? v.1. Title: Chemistry 122 Author: Nancy Ball Last modified by: jmelius
EQUILIBRIUM REVIEW #2 (Ka/Kb) Ka or Kb. ... pOH = pKb + log [A]/[B]. The purpose of a buffer is to resist a change in pH. ... How to calculate pH at various points in a weak acid/strong base
titration (or weak base/ strong acid titration): ...
To calculate the actual concentration of the hydronium ion and the acetate anion in an acetic acid water solution one must carry out an equilibrium calculation. If one adds sufficient sodium
hydroxide, NaOH, to a solution of acetic acid in water
Kb = [NH2NH3+][OH-]/[NH2NH2]; 1.7 x 10-6 = [OH-]^2/0.20 ... When the concentrations of a weak base and its conjugate acid are equal, the pOH equals the pKb. Therefore, the pOH of hydrazine = pKb =
5.77, and pH = 14.00 - pOH = 14.00 - 5.77 = 8.23. [H3O+] = 10^-8.23 ≈ 5.9 x 10^-9 M.
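The hydrazine numbers above can be checked in a few lines. This is a sketch using the usual x << C approximation, covering both the pure 0.20 M solution and the half-neutralization case where pOH = pKb:

```python
import math

Kb, C = 1.7e-6, 0.20                  # hydrazine Kb and concentration from the example

# Pure solution: Kb = [OH-]^2 / C when the ionized fraction x << C
OH = math.sqrt(Kb * C)
pH_solution = 14.00 - (-math.log10(OH))

# Half-neutralization: [base] = [conjugate acid], so pOH = pKb
pKb = -math.log10(Kb)                 # 5.77
pH_half = 14.00 - pKb                 # 8.23, as stated above
```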
*Values < 0 for H2O and DMSO, and values > 14 for water and > 35 for DMSO were extrapolated using various methods.
... Calculate the hydroxide ion concentration, [OH ...
... the pH is found to be 10.45. Calculate the pKb of the base and the pKa of its conjugate acid. 18. The pH of 0.0945 M NH3(aq) is 11.12 ... What is the percent of NH2OH protonated? 20. The pH of
0.50 M aniline(aq) is 9.17. What is the value of Kb for aniline? 21. What is the pH of 0.010 M ...
... [H+], calculate pH. Calculate the pH of 0.00125M HNO3. 2. strong base solution – determine [OH-], calculate pOH, ... calc Kb, determine [OH-] using ICE box, calc pOH, calc pH. ... pH = pKa or pOH
= pKb. 10. pH of a buffer with unequal concentrations of donor [HA] and acceptor ...
What are Kb and pKb of methylamine. Kb = 4.4 x 10^-4. pKb = 3.36. 3.A student planned an experiment that would use 0.10 M propionic acid, HC3H5O2. Calculate the values of [H+] and pH for this
solution. For propionic acid, Ka = 1.4 x 10^-5 [H+] = 1.2 x 10^-3 M. pH = 2.92. 4.A solution of ...
Kw = Ka x Kb. pKw = pKa + pKb. pKw = 14. pKa = - log Ka . pKb = - log Kb : pKw = - log Kw . Ka = inv log (- pKa) Kb = inv log (- pKb) Kw = inv log (- pKw) ... calculate the pH : of a weak acid .
calculate the pH : of a weak base . equation for the : self ionisation of . water solubility product
When [BH+1] = [B], the ratio [BH+1]/[B] equals one, the [OH-1] equals Kb, and the pOH of the solution equals the pKb of the weak base. Remember that pH + pOH = 14. ... Calculate Ka for the weak acid
and the Kb for the weak base from this data. 8.
... abbr. HOAc, Ka = 1.8 x 10-5), calculate [H+] and compare % ionization for a. 1.0 M b. 0.1M c. 0.01 M HOAc solution. R HC2H3O2 ( H+ ... CA Ka pKa Rank of acid CB Kb pKb Rank base 1 CH3COOH 1.8 x
10-5 4.74 7 CH3COO- 5.6 x 10-10 9.26 6 2 C6H5CO2H ...
pKa, and pKb and pH scale: Autoionization of water, Kw, and. pOH. ... Calculate pH from Ka or Kb values and solution concentration (Section 16.6). Resources. Chemistry: The Molecular Science 1st
Edition, John W. Moore, Conrad L. Stanitski and Peter C. Jurs.
Kb Ka/pKa values ... calculate the [OH ...
Find Ka and pKa for a weak acid (or Kb and pKb for a weak base) ... Calculate the quantities of a conjugate acid and its conjugate base required to prepare a buffer solution of known pH and
concentration. Classroom options:
18.1.6. Identify the relative strengths of acids and bases using values of Ka, Kb, pKa and pKb. ... Calculate Kb for NH3. Kb = Kw = 1.0 x 10-14 = 1.8 x 10-5. Ka. 5.6 x 10-10. Significance of the
above relationship.
The equilibrium pressure in the tube is 1.54 atm. Calculate Kp. 7. What is the pH of 2.1 × 10-5 M NaOH(aq) at 25 °C? (K w = 1.0 × 10-14) 8. The equilibrium constant, Kc, for the following reaction is
1.0 × 10-5 at 1500 K. N2(g) + O2(g) ↔ 2 NO(g)
pKa = -logKa pKb = -logKb [A-] ... Calculate the pH of a buffer solution made from 0.20 M HC2H3O2 and 0.050 M C2H3O2- that has an acid dissociation ... Kb for nicotine is 1.05 x 10-6 (Answer: 8.021)
A buffer is prepared containing 0.788 M lactic acid, HC3H5O3 and 1.27 M calcium lactate, Ca ...
pH = ½(pKw – pKb – log Cs). EX. Calculate pH for a 0.2 M NH4Cl solution. ... [OH-] = Kb·(Cb/Cs); pOH = pKb + log(Cs/Cb). EX. Calculate pH for a solution prepared by adding 10 ml of
0.1 M acetic acid to 20 ml of 0.1 M sodium acetate, Ka = 1.75 x 10-5?
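Plugging numbers into the first formula above, pH = ½(pKw − pKb − log C), for the 0.2 M NH4Cl example (the ammonia Kb = 1.8 x 10-5 is my own assumption; the excerpt does not state it):

```python
import math

pKw = 14.0
Kb = 1.8e-5                              # assumed Kb for ammonia (not given in the excerpt)
pKb = -math.log10(Kb)                    # about 4.74
C = 0.2                                  # NH4Cl concentration
pH = 0.5 * (pKw - pKb - math.log10(C))   # salt of a weak base gives an acidic solution
```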
Ka pKa pKb Kb . 1.3x10-4 _____ _____ _____ ... pH [H+] [OH-] pOH 8 _____ _____ ____ ____ _____ 1x10-2 ____ 4) Calculate the pH of aqueous 0.15M Ba(OH) 2 and 0.15M NH 3 solutions. Fold Quiz with your
work inside, then write your name on the outside.
If Ammonia has a pKb of 4.75, what is its Kb? 2. ... Calculate the Kb of a solution of 0.250 M of a weak base with a pH of 9.12. © Adrian Dingle’s Chemistry Pages 2004, 2005, 2006, 2007, 2008, 2009.
All rights reserved.
Calculate the percent ionization of propionic acid ... A 5.0 x 10-3 M solution has a pH of 9.95. Calculate the value of kb for this substance. What is the pKb for this base? 2 . Title: Calculating pH
of weak acid and base Author: teacher Last modified by: teacher Created Date: 2/3/2010 5:40:00 ...
ICE (Initial-Change-Equilibrium) Tables can be used to calculate concentrations. H A ... Kb, pKa and pKb. ... 18.1 Calculations Kw Ka Kb Author: Kathleen Butler Created Date: 11/15/2011 4:04:53 AM
Calculate the reaction quotient, Q, for the reaction as written: If Kc for this reaction at the temperature of the experiment is 3.7 ... Ka Conj Base [OH-] Kb pKb CH3NH2 11.95 SHOW ALL WORK. Include
proper units. Title: Chem 142 Author: GCCCD Last modified by: GCCCD Created Date: 2/14/2007 9:55 ...
Quantitative treatment of for acids and bases; using Ka and Kb, pKa and pKb and pH. ... How do you calculate the equilibrium concentrations based on either Ka or Kb? 4. How do you calculate the pH of
a buffer? Calculation involving pH and pOH based on Kw.
Classify the following compounds and calculate the pH values & percent ionization (hydrolysis) of 0.10M solution. Fill answers in the blanks and show works on separate paper. ... ( HNO2+OH- Kb=
2.2E-11 pKb=10.66 8.17 1.48 E-6 4 Fe(NO. 3) 3.
Relationships between: pH, pOH, pKw, Kw, [H+], [OH-] Ka, Kb, pKa, pKb . 17.3 Using these relationships in calculations . 11. 12. 13. 14. 17.4 . 15. 17.5 Acid strength and molecular structure. 16. 17.
... Add KOH to this buffer and calculate the new pH.
Kb = 4.4 x 10^-4, pKb = 3.36. 7. Sodium benzoate, NaC6H5COO, is the salt of the weak acid benzoic acid, C6H5COOH. A 0.10-molar solution of sodium benzoate has a pH of 8.60 at equilibrium. (level 3)
Calculate the [OH-] in the sodium benzoate solution described above.
From the pH at the equivalence point calculate Kb for the acetate ion. Kb for the acetate ion From Kb determine pKa for acetic acid. Calculate and ... determine pKb for ammonia. Calculate and report
the percent error in this value. Calculated ...
... the equilibrium is dependent on the base equilibrium constant, Kb. A base equilibrium expression can only be written ... Ka Kb = Kw. PKa + PKb = 14. ... you can calculate the Kb for its conjugate
base (A-). Example / Calculate the PH of 0.2 mol.L-1 of ethylene .( Kb = 4.2 x10-10) B + H2O BH+ ...
- Calculate Ka or Kb for conjugate pairs, work with pKa and pKb - Estimate the pH (acidic, basic, ... (Kw) - Calculate the equilibrium constant for a weak acid (Ka) or a weak base (Kb) from
experimental information (such as pH, pOH, [H3O +], [OH-]) Chapter 18 - Recognize a buffer system -
Calculate pH, pOH, pK a and pK b. 5. Apply Le Chatelier’s Principle to determine if reactant/product ... To calculate the value of pKa or of pKb given Ka or Kb for a weak acid or base. 2. To learn
the concept of buffer capacity 3.
What is the pKb of the unknown base? 14. The Kb of a weak base B is 1.00 x 10-9. Calculate the pH of a 0.100 M solution of the base. 15. What is the pH of a 0.20 M solution of NH4Cl? [Kb(NH3) = 1.8 ×
10–5] 16. Calculate the pH of a 0.021 M NaCN solution. [Ka(HCN) = 4.9 × 10–10] Author ...
Using the equivalence volume of NaOH and precise molarity of NaOH, calculate the precise molarity of the HCl solution you titrated. Precise molarity of HCl 3. The equivalence point for this titration
occurs at a pH of 7.
At equilibrium, a 0.039 M solution of propylamine has an OH- concentration of 3.74 x 10-3 M. Calculate the pH of this solution and Kb for propylamine. b) Write out the conjugate acid for compound
propylamine. c) Calculate the Ka of the conjugate acid.
Aniline pKb=9.4. Calculate pH of 0.1 M potassium hydrogen oxalate. pKa1=1.25, ... Ammonia is a weak base, so the most convenient approach is to calculate pOH using Kb, and then to convert it to pH.
From the Brønsted-Lowry theory we know that Ka×Kb=Kw, ...
Project Euler 77: What is the first value which can be written as the sum of primes in over five thousand different ways?
Written by Kristian on 31 December 2011
Topics: Project Euler
In problem 77 of Project Euler we are asked the following question
It is possible to write ten as the sum of primes in exactly five different ways:
7 + 3
5 + 5
5 + 3 + 2
3 + 3 + 2 + 2
2 + 2 + 2 + 2 + 2
What is the first value which can be written as the sum of primes in over five thousand different ways?
This question is actually very similar to the problem we just solved in Problem 76, except that it asks us to find a number with a certain property, instead of asking what property a certain number has.
I took the easy solution and modified the solution for Problem 76 to search for the first number with a solution count larger than 5000. In order to do that we need to make two changes.
1. Instead of just asking for the target of 100, we will start at 2 and then run the algorithm until we find the correct solution
2. Instead of using all integers between 1 and 99, we need an array of primes. This is similar to what we did in problem 31 with currency denominations
These changes can be made in the C# and results in the following code
int target = 2;
int[] primes = ESieve(2, 1000);
while (true) {
    int[] ways = new int[target + 1];
    ways[0] = 1;
    for (int i = 0; i < primes.Length; i++) {
        for (int j = primes[i]; j <= target; j++) {
            ways[j] += ways[j - primes[i]];
        }
    }
    if (ways[target] > 5000) break;
    target++;
}
Which gives us the following result
The first number written in over 5000 ways is 71
Solution took 0,6296 ms
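For readers who don't use C#, here is a minimal Python translation of the same counting idea (my own sketch, not code from the post; the `ESieve` helper is replaced by a plain Sieve of Eratosthenes):

```python
def prime_sieve(limit):
    """Sieve of Eratosthenes: all primes up to and including limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [n for n, p in enumerate(is_prime) if p]

def count_prime_sums(n, primes):
    """Ways to write n as an unordered sum of primes (coin-counting DP)."""
    ways = [0] * (n + 1)
    ways[0] = 1
    for p in primes:
        for j in range(p, n + 1):
            ways[j] += ways[j - p]
    return ways[n]

primes = prime_sieve(100)
target = 1
while True:
    target += 1
    if count_prime_sums(target, primes) > 5000:
        break
print(target)  # 71
```

For n = 10 this counts exactly the five decompositions listed at the top of the post, which is a handy sanity check.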
My biggest surprise was actually how small the answer is. I don't know exactly what I expected, but I found it to be much smaller than I would have guessed. The execution time is fine
for this problem I think.
Wrapping up Problem 77
Once again I have made a really short description for the problem, since I think everything is covered in previous solutions, and once we see the idea the changes to the code are almost trivial. I
know that sounds snobbish, but really the main thing here is getting the idea.
You can find the full source code here.
The blog image is provided by Bob Rosenbaum who shared it under the creative commons license.
4 Comments For This Post I'd Love to Hear Yours!
1. Just out of curiosity- why didn’t you re-use the values you already found? Instead of allocating a new ‘ways’ array (And wrapping the process inside a while loop), couldn’t you keep it? And then
iterate it in order to find a value greater than 5000?
2. The problem with that is that I don’t have a fixed upper limit, so that’s why I do what I do. But I think you are right that I could reuse a good part of the last array.
3. Hi,
I don’t really know about the optimizations of the C# compiler for accessing arrays.
But you could even optimize a little bit by replacing:
for (int i = 0; i < primes.Length; i++) {
    for (int j = primes[i]; j <= target; j++) {
        ways[j] += ways[j - primes[i]];
    }
}
with:
for (int i = 0; i < primes.Length; i++) {
    int j = primes[i];
    if (j > target) {
        break; // leave the loop; also, primes[k] > target for all k > i
    }
    int p = j;
    for (; j <= target; j++) {
        ways[j] += ways[j - p];
    }
}
Just a small optimization to avoid checks in the inner for-loop.
Do you know some good book on “Dynamic Programming”?
Currently I need some time to understand your solutions.
4. Hi DiMa
Good idea to break out at that point. However, the overhead of not doing so is in this case rather small. But still it is an optimization of the code speed.
I don’t have any really good references for dynamic programming. It has sort of snuck up on me at some point, and now it seems to be “just” another tool in the box.
High-precision arithmetic for calculating N-gram probabilities
David Hall dlwh at cs.berkeley.edu
Sun Oct 16 20:18:22 BST 2011
On Sun, Oct 16, 2011 at 5:26 AM, Dan Maftei <ninestraycats at gmail.com> wrote:
> Thank you Wren and David. Here is what I gleaned so far:
> 1 Standard practice is to use Doubles for probabilities. Is the main reason
> for this because the small precision loss isn't worth reclaiming by using
> more complicated types? Further, do we often sum probabilities in NLP? If
> so, Ratios are a bad idea, due to many gcd calls. But if our only
> manipulation is multiplication, Ratios will not suffer a performance hit.
That's right. Summing is very common because that's how you get
marginal probabilities. I don't have much experience with rationals.
> 2 I'm unclear what log (_ + 1) and exp (_ - 1) refer to, or what is
> inaccurate about the standard log and exp functions. Same goes for what a
> and b represent in logSum (== a + log(1+ exp(b-a)). If I want to multiply
> probabilities, I use exp $ foldl' (+) 0 (map log probs), which is equivalent
> to foldl' (*) 1 probs. Is there something inaccurate or incorrect about
> that? NB: the probability of a large text is intractable, since foldl' (+) 0
> (map log probs) gives an exponent in the thousands. However, the perplexity
> of said text is computable, it just defers the exp function till after we
> multiply the log sums by -1/N: exp $ negate (1 / n) * logSums.
log(_ + 1) just means--it looks pretty Scala-y to me ;)--the log of
some double + 1. 1 + (small double) is basically 1, and so the naive
implementation of log(_ + 1) would be 0, or nearly so. a and b are
just doubles representing (possibly unnormalized) log probabilities.
But in reality, log(1 + small double) is close to the small double.
Thus, you should use log1p as a better approximation.
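[Editorial illustration, not part of the original thread — the two numerical points in this exchange, sketched in Python rather than the thread's Haskell. `logsum` implements the a + log(1 + exp(b - a)) formula being discussed:]

```python
import math

def logsum(a, b):
    """log(exp(a) + exp(b)), where a and b are log-probabilities."""
    if b > a:
        a, b = b, a                    # keep a >= b so exp never overflows
    return a + math.log1p(math.exp(b - a))

# Why log1p matters: for tiny x, the 1 in log(1 + x) swallows x entirely.
x = 1e-17
print(math.log(1 + x))                 # 0.0 -- all precision lost
print(math.log1p(x))                   # 1e-17 -- correct

# Why the log domain matters: these probabilities underflow as doubles,
# but their log-domain sum is perfectly well-behaved.
print(logsum(-1000.0, -1000.0))        # -1000 + log 2, about -999.3069
```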
Don't call that logSums, call them sumOfLogs, or something. logSums
refers to the log of the sum of the probabilities, which is something
like log $ foldl (+) 0.0 (map (\p -> exp ( p - (foldl1 max logProbs)))
As for your examples, exp $ foldl' (+) 0 (map log probs) is fine,
except, as you pointed out, exp'ing will typically go poorly. foldl'
(*) 1 probs is of course only identical in the world of infinite
precision reals.
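[Editorial illustration, not part of the original thread — Dan's perplexity recipe in miniature, with made-up numbers. Summing logs first and deferring the exp keeps everything finite even when the raw text probability underflows:]

```python
import math

logprobs = [math.log(0.1)] * 1000      # a 1000-token "text", p = 0.1 per token
total = sum(logprobs)                  # about -2302.6

print(math.exp(total))                 # 0.0 -- the text probability underflows
perplexity = math.exp(-total / len(logprobs))
print(perplexity)                      # 10.0 -- but the perplexity is fine
```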
> 3 I am still unsure how to neatly randomize output, given the inaccuracy of
> Doubles (I have a working function but it requires a guard which is only
> necessary due to Double imprecision). David gave "2 * abs(high - low)/(high
> + low) <= epsilon", but I can't figure what the high and low refer to, or
> where to get that epsilon from. Isn't it machine-dependent? If not, I could
> use the value Wiki for double precision types, 1.11e-16.
Err, maybe I should have said "tolerance" instead of epsilon. high and
low are just poorly named variables. x and y may have been better.
That code basically says "ensure that the absolute difference between
x and y is small relative to their average"
epsilon/tolerance is your choice, 1E-6 is usually fine. 1E-4 is ok if
you're feeling sloppy.
-- David
> Dan
>> Message: 1
>> Date: Sat, 15 Oct 2011 20:36:45 -0400
>> From: wren ng thornton <wren at freegeek.org>
>> Subject: Re: High-precision arithmetic for calculating N-gram
>> probabilities
>> To: nlp at projects.haskell.org
>> Message-ID: <4E9A271D.6090505 at freegeek.org>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>> On 10/13/11 12:34 PM, David Hall wrote:
>> > Also, you'll probably want to
>> > use log(double) and operate using + as times and logSum (== a + log(1
>> > + exp(b-a)) as sum. This will avoid underflow.
>> Actually, for (a + log(1 + exp(b - a))) you want to first ensure that a
>> >= b. Otherwise you'd want to do (b + log(1 + exp(a - b)))
>> Also, there are more accurate implementations of log(1+_) and (exp _-1)
>> than the naive versions which become extremely inaccurate for small
>> arguments.
>> If you want to deal with probabilities in the log-domain, the logfloat
>> package[1] will handle all these details for you. In addition, it
>> defines things using newtypes so that you get type safety instead of
>> mixing up which things are in the log-domain and which are in the
>> normal-domain; and it provides a Num instance so that you can say what
>> you mean instead of what the implementation is doing. In fact, the
>> package was specifically designed for NLP tasks needing to do this sort
>> of thing. While the primary goal is to maximize precision, the secondary
>> goal is to maximize performance within those constraints; and the
>> implementation has been tested thoroughly on these criteria.
>> The only caveat is that the performance tuning was done a while ago,
>> back when GHC had horrible code generation for doubles, and so it uses
>> --fvia-C which has since been deprecated. I've yet to install GHC7 in
>> order to re-tune for the new backends.
>> [1] http://hackage.haskell.org/package/logfloat
>> --
>> Live well,
>> ~wren
>> ------------------------------
>> Message: 2
>> Date: Sat, 15 Oct 2011 20:40:42 -0400
>> From: wren ng thornton <wren at freegeek.org>
>> Subject: Re: High-precision arithmetic for calculating N-gram
>> probabilities
>> To: nlp at projects.haskell.org
>> Message-ID: <4E9A280A.10609 at freegeek.org>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>> On 10/13/11 2:40 PM, Dan Maftei wrote:
>> > Yeah, log sums are actually a necessity when calculating perplexity. :)
>> >
>> > If I ever get around to profiling Rationals vs. Doubles I'll let people
>> > know
>> > what I found. But upon reflection, I see that they shouldn't make a
>> > significant difference in performance.
>> The big performance problem with Rationals is that they're stored in
>> normal form. That means the implementation has to run gcd every time you
>> manipulate them. If you're really interested in performance, then you'd
>> want to try working with unnormalized ratios rather than using the
>> Rational type. Of course, then you have to be careful about not letting
>> the numbers get too large, otherwise the cost of working with full
>> Integers will be the new performance sinkhole.
>> --
>> Live well,
>> ~wren
>> ------------------------------
>> _______________________________________________
>> NLP mailing list
>> NLP at projects.haskell.org
>> http://projects.haskell.org/cgi-bin/mailman/listinfo/nlp
>> End of NLP Digest, Vol 12, Issue 3
>> **********************************
> _______________________________________________
> NLP mailing list
> NLP at projects.haskell.org
> http://projects.haskell.org/cgi-bin/mailman/listinfo/nlp
More information about the NLP mailing list
Which Lie groups have adjoint representations that are bounded away from zero?
Studying stability of certain non-autonomous dynamical systems on Lie groups I have come across the following question: Exactly which finite-dimensional, real Lie groups have adjoint representations
that are bounded away from zero?
Edit: by "bounded away from zero" I mean that the image of the adjoint representation avoids an open neighborhood of zero in End(g), where g is the Lie algebra. Equivalently, the closure of the image
does not contain zero, or, the norm (pick your favorite one) of every element of the adjoint representation is bounded from below by one and the same positive number. By Hadamard's inequality, a
determinant bound will do as well. [end edit]
This should include compact Lie groups since for those there exists an inner product on the Lie algebra with respect to which all inner automorphisms are orthogonal, i.e. the elements of the adjoint
representation have norm 1. Correct?
Also, for abelian Lie groups the adjoint representation is trivial, hence again bounded away from zero.
I believe that semisimple Lie groups should also be included but can not think of a valid argument.
Is there actually a counter example? I tried the general linear group GL(2) but the elements of the adjoint representation that I tried always have (some) unit eigenvalues. Is this an accident? I
would have thought that GL(n) itself occurs as an adjoint representation somehow which would then not be bounded away from zero. But evidently I am not quite understanding the different dimensions
here (the adjoint representation of GL(2) is a subgroup of GL(4)).
My apologies if this is trivial but I could not find anything that looked relevant in several books on Lie groups.
rt.representation-theory lie-groups
3 It would help to say more precisely what "bounded away from zero" means in this context. Also, note that the adjoint representation of a reductive group such as a general linear group sends the
center to the identity element. In particular, GL(n) itself won't occur as (the image of) an adjoint representation. – Jim Humphreys Aug 19 '13 at 14:08
Thanks, I'll edit the question. Appreciate the comment about GL(n). – Jochen Trumpf Aug 20 '13 at 2:12
2 Answers
The adjoint rep is always bounded away from $0$. Let $\mathfrak{g}_0$ be a simple quotient of $\mathfrak{g}$. (I consider the $1$-dimensional Lie algebra to be simple, so there is always
a simple quotient.) Let $\mathfrak{h}$ be the kernel of $\mathfrak{g} \to \mathfrak{g}_0$ and let $H = \exp(\mathfrak{h})$.
The adjoint action preserves $\mathfrak{h}$, so the adjoint representation is block upper triangular. The upper left block is the adjoint action of $G/H$ on its Lie algebra, which is $\mathfrak{g}_0$. Since $\mathfrak{g}_0$ is simple, it is unimodular, meaning that the adjoint rep has determinant $1$. This idea is taken from anton's answer.
In summary, the adjoint rep of $G$ can be put in block upper triangular form with the upper left block a matrix of determinant $1$ (and not a $0 \times 0$ matrix).
Thanks, I will accept this answer once I fully understand it. – Jochen Trumpf Aug 20 '13 at 4:01
I am not sure what you mean by "bounded away from zero", but if you mean that the closure of the image of the adjoint representation does not contain zero, that is correct. A proof might proceed by showing that the image lies in the adjoint group of the Lie algebra and the latter lies in the group of elements of determinant one.
Is this not only the case for the Lie groups with unimodular Lie algebras? $1 = \det \operatorname{Ad}(\exp X) = \det \exp \operatorname{ad}(X) = e^{\operatorname{tr} \operatorname{ad}
(X)}$, hence $\operatorname{ad}(X)$ must be traceless for all $X$. Unless I'm missing something, this is not the case for every Lie algebra. – José Figueroa-O'Farrill Aug 19 '13 at
Which orbit of the adjoint representation are we taking? Are we asking that some orbit doesn't have zero in its closure? – Ben McKay Aug 19 '13 at 19:24
@Jose: Oh yes right, I was thinking of reductive groups. Sorry. – anton Aug 19 '13 at 20:07
Edited the question to spell out what I mean by "bounded away from zero". Thanks already for the reductive idea. Still evaluating David's answer above. – Jochen Trumpf Aug 20 '13 at
Does this not require the Lie group to be connected (and have a unimodular Lie algebra)? The proof idea given by @Jose seems to require that a group element can be written as a finite
product of exponentials. – Jochen Trumpf Aug 21 '13 at 1:15
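[Editorial note, not from the thread: the unimodularity caveat is easy to see numerically in the smallest non-unimodular example, the Lie algebra of the ax+b group, with basis X, Y and bracket [X, Y] = Y. A sketch in Python:]

```python
import math

# In the (X, Y) basis, ad(tX) is the matrix [[0, 0], [0, t]], so
# Ad(exp(tX)) = exp(ad(tX)) = diag(1, e^t).
def ad_exp_tX(t):
    return [[1.0, 0.0], [0.0, math.exp(t)]]

for t in (-50.0, 0.0, 5.0):
    m = ad_exp_tX(t)
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    frob = math.sqrt(sum(v * v for row in m for v in row))
    print(t, det, frob)

# det -> 0 as t -> -infinity (tr ad(X) = 1, so the algebra is not
# unimodular), yet the Frobenius norm never drops below 1: the image of
# Ad stays bounded away from zero, as the block-triangular argument in
# the accepted answer predicts.
```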
Not the answer you're looking for? Browse other questions tagged rt.representation-theory lie-groups or ask your own question.
Posts about Knot theory on Low Dimensional Topology
Marc Culler and I released SnapPy 2.1 today. The main new feature is the ManifoldHP variant of Manifold which does all floating-point calculations in quad-double precision, which has four times as
many significant digits as the ordinary double precision numbers used by Manifold. More precisely, numbers used in ManifoldHP have 212 bits for the mantissa/significand (roughly 63 decimal digits)
versus 53 bits with Manifold.
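To put the 53-versus-212-bit comparison in perspective, here is what ordinary double precision can and cannot resolve — a quick check in Python, whose floats are the same IEEE doubles that Manifold uses:

```python
import sys

print(sys.float_info.mant_dig)     # 53 -- bits in a double's significand
print(1.0 + 2.0**-52 == 1.0)       # False: 2^-52 is the last resolvable step
print(1.0 + 2.0**-53 == 1.0)       # True: anything smaller simply vanishes
# Quad-double pushes that cliff from roughly 1e-16 down to roughly 1e-63.
```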
Distinguishing the left-hand trefoil from the right-hand trefoil by colouring
This morning, I’ve been looking through a very entertaining paper in which Roger Fenn distinguishes the left-hand trefoil from the right-hand trefoil in a way that could be explained to elementary
school children.
R. Fenn, Tackling the trefoils. (more…)
What’s Next? A conference in question form
Mark your calendars now: in June 2014, Cornell University will host “What’s Next? The mathematical legacy of Bill Thurston”. It looks like it will be a very exciting event, see the (lightly edited)
announcement from the organizers below the fold.
What is the Shannon Capacity of a coloured knot?
I see topological objects as natural receptacles for information. Any knot invariant is information- perhaps a knot with crossing number $n$ is a fancy way of writing the number $n$, or a knot with
Alexander polynomial $\Delta(X)$ is a fancy way of carrying the information $\Delta(X)$. A few days ago, I was reading Tom Leinster’s nice description of Shannon capacity of a graph, and I was
wondering whether we could also define Shannon capacity for a knot. Avishy Carmi and I think that we can (and the knots I care about are coloured), and although the idea is rather raw I’d like to
record it here, mainly for my own benefit.
For millenea, the Inca used knots in the form of quipu to communicate information. Let’s think how we might attempt to do the same. (more…)
A noteworthy knot simplification algorithm
This post concerns an intriguing undergraduate research project in computer engineering:
Lewin, D., Gan O., Bruckstein A.M.,
TRIVIAL OR KNOT: A SOFTWARE TOOL AND ALGORITHMS FOR KNOT SIMPLIFICATION,
CIS Report No 9605, Technion, IIT, Haifa, 1996.
A curious aspect of the history of low dimensional topology is that it involves several people who started their mathematical life solving problems relating to knots and links, and then went on to
become famous for something entirely different. The 2005 Nobel Prize winner in Economics, Robert Aumann, whose game theory course I had the honour to attend as an undergrad, might be the most famous
example. In his 1956 PhD thesis, he proved asphericity of alternating knots, and that the Seifert surface is an essential surface which separates alternating knot complements into two components the
closures of both of which are handlebodies.
Daniel Lewin is another remarkable individual who started out in knot theory. His topological work is less famous than Aumann’s, and he was murdered at the age of 31 which gives his various
achievements less time to have been celebrated; but he was a remarkable individual, and his low dimensional topology work deserves to be much better known. (more…)
SnapPy 2.0 released
Marc Culler and I pleased to announce version 2.0 of SnapPy, a program for studying the topology and geometry of 3-manifolds. Many of the new features are graphical in nature, so we made a new
tutorial video to show them off. Highlights include
Tangle Machines- Positioning claim
Avishy Carmi and I are in the process of finalizing a preprint on what we call “tangle machines”, which are knot-like objects which store and process information. Topologically, these roughly
correspond to embedded rack-coloured networks of 2-spheres connected by line segments. Tangle machines aren’t classical knots, or 2-knots, or knotted handlebodies, or virtual knots, or even w-knots.
They’re a new object of study which I would like to market.
Below is my marketing strategy.
My positioning claim is:
• Tangle machines blaze a trail to information topology.
My three supporting points are:
1. Tangle machines pre-exist in a the sense of Plato. If you look at a knot from the perspective of information theory, you are inevitably led to their definition.
2. Tangle machines are interesting mathematical objects with rich algebraic structure which present a plethora of new and interesting questions with information theoretic content.
3. Tangle machines provide a language in which one might model “real-world” classical and quantum interacting processes in a new and useful way.
Next post, I’ll introduce tangle machines. Right now, I’d like to preface the discussion with a content-free pseudo-philosophical rant, which argues that different approaches to knot theory give rise
to different `most natural’ objects of study.
Lots and lots of Heegaard splittings
The main problem that I’ve been thinking about since graduate school (so around a decade now) is the following: How does the topology of a three-dimensional manifold determine its isotopy classes of
Heegaard splittings? Up until about a year ago, I would have predicted that most three-manifolds probably don’t have many distinct Heegaard splittings, maybe even just a single minimal genus Heegaard
splitting and then all of its stabilizations. Sure, plenty of examples have been constructed of three-manifolds with multiple distinct (unstabilized) splittings, but these all seemed a bit contrived,
like they should be the exceptions rather than the rule. I even wrote a blog post a couple years back stating what I called the generalized Scharlamenn-Tomova conjecture, which would imply that a
“generic” three-manifold has only one unstabilized splitting. However, since writing this post, my view has changed. Partially, this was the result of discovering a class of examples that disprove
this conjecture. (I’m hoping to post a preprint about this on the arXiv in the near future.) But it turns out there is an even simpler class of examples in which there appear to be lots and lots of
distinct Heegaard splitting. I can’t quite prove that they’re distinct, so in this post I’m going to replace my generalized Scharlemann-Tomova conjecture with a conjecture in quite the opposite
direction, which I will describe below.
New connection between geometric and quantum realms
A paper by Thomas Fiedler has just appeared on arXiv, describing a new link between geometric and quantum topology of knots. http://arxiv.org/abs/1304.0970
This is big news!! (more…)
Calculating perimeter of regions in binary image using matlab
Hi all:
I am trying to calculate perimeter of regions in binary image using matlab. I am using the following code:
perimeter_stats = regionprops(binary_image,'Perimeter');
allPerimeter= [perimeter_stats.Perimeter];
The problem is in the results. The image contains large regions but the calculated perimeter is small (e.g. 3.2092). Can anyone help me in getting the right values of perimeter?
0 Comments
1 Answer
Accepted answer
Well you did something wrong. I doubt that the mean of allPerimeter is 3.2092 pixels. Why don't you list what allPerimeter is so I can see it. I would think that the minimum a perimeter could be is 1
pixel, but it's typically in the hundreds or thousands, not 3 unless your binary image is just a single dot.
5 Comments
Did you notice the 1.0e003 in there? You multiply that by the numbers. This is not related to regionprops - that is how all of MATLAB works. So the actual perimeters are [3209.2, 802.9, 992.9, 189.5, ...]. These are quite reasonable numbers. There is no problem at all.
Many thanks for your clarification
So did that answer your question? Are you ready to mark this as "Answered", or is it still not solved for you?
MathFiction: Dark Matter: The Private Life of Sir Isaac Newton (Philip Kerr)
Contributed by Vijay Fafat
A multiple-murder mystery which outlandishly casts Newton in the role of Sherlock Holmes during his tenure as Warden of the British Royal Mint (Watson is played by Christopher Ellis, nephew of mathematician William Wallace).
Newton is hot on the trail of coin counterfeiters, till the chase turns murderous, with each victim’s body displaying a coded message. The messages turn out to be substitution ciphers with
modifications to make them difficult to decipher (the sequence +1, -1, +1, -1… makes an appearance). Newton solves them, and all is well (except for the murder victims and the criminals).
Some excerpts:
• Newton: “All ciphers, if they are properly formed and systematic, are subject to mathematics, and what mathematics has made obscure, mathematics will also render visible”
• Newton: “I can assure you, Ellis, that (Brachistochrone problem) was no mere exercise, as you describe it. When no man in Europe could provide a solution, I solved it.”
• Newton: “Upon his (Occam’s) razorlike maxim we shall cut this case into exactly two halves. Fetch me some cider. My head has a sudden need for apples”
• “He is a scholar of real worth, for I have seen him extracting square roots without pen and paper, to seven places”, Ellis said. “I have seen a horse clap its hoof upon the ground seven times,”
remarked Newton, “but I do not think it was a mathematician”.
• Newton bowed deeply. “Doctor Wallis, I was not able to find anything general in quadratures, until I had understood your own work on infinitesimals”
This interface imposes a total ordering on the objects of each class that implements it. This ordering is referred to as the class's natural ordering, and the class's compareTo method is referred to as its natural comparison method.
Lists (and arrays) of objects that implement this interface can be sorted automatically by Collections.sort (and Arrays.sort). Objects that implement this interface can be used as keys in a sorted
map or as elements in a sorted set, without the need to specify a comparator.
The natural ordering for a class C is said to be consistent with equals if and only if e1.compareTo(e2) == 0 has the same boolean value as e1.equals(e2) for every e1 and e2 of class C. Note that null
is not an instance of any class, and e.compareTo(null) should throw a NullPointerException even though e.equals(null) returns false.
It is strongly recommended (though not required) that natural orderings be consistent with equals. This is so because sorted sets (and sorted maps) without explicit comparators behave "strangely"
when they are used with elements (or keys) whose natural ordering is inconsistent with equals. In particular, such a sorted set (or sorted map) violates the general contract for set (or map), which
is defined in terms of the equals method.
For example, if one adds two keys a and b such that (!a.equals(b) && a.compareTo(b) == 0) to a sorted set that does not use an explicit comparator, the second add operation returns false (and the
size of the sorted set does not increase) because a and b are equivalent from the sorted set's perspective.
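The same hazard is easy to reproduce in any language. Here is a sketch in Python (not from the javadoc) with a hypothetical Price class whose ordering compares only the numeric value while equality also compares the currency — exactly the !a.equals(b) && a.compareTo(b) == 0 situation above:

```python
import functools

@functools.total_ordering
class Price:
    def __init__(self, value, currency):
        self.value, self.currency = value, currency
    def __eq__(self, other):                 # "equals": value AND currency
        return (self.value, self.currency) == (other.value, other.currency)
    def __lt__(self, other):                 # "compareTo": value only
        return self.value < other.value

a, b = Price(4, "USD"), Price(4, "EUR")
print(a == b)              # False: they are not equal
print(a < b or b < a)      # False: the ordering calls them equivalent
# A tree-based sorted set keyed by this ordering would silently keep only
# one of a and b -- the violation of the Set contract described above.
```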
Virtually all Java core classes that implement Comparable have natural orderings that are consistent with equals. One exception is java.math.BigDecimal, whose natural ordering equates BigDecimal
objects with equal values and different precisions (such as 4.0 and 4.00).
For the mathematically inclined, the relation that defines the natural ordering on a given class C is:
{(x, y) such that x.compareTo(y) <= 0}.
The quotient for this total order is:
{(x, y) such that x.compareTo(y) == 0}.
It follows immediately from the contract for compareTo that the quotient is an equivalence relation, and that the natural ordering is a total order. When we say that a class's natural ordering is consistent with equals, we mean that the quotient for the natural ordering is the equivalence relation defined by the class's equals(Object) method:
{(x, y) such that x.equals(y)}.
This interface is a member of the Java Collections Framework.
Welcome, my friends
… Y’see, now, y’see, I’m looking at this, thinking, squares fit together better than circles, so, say, if you wanted a box of donuts, a full box, you could probably fit more square donuts
in than circle donuts if the circumference of the circle touched each of the corners of the square donut.
So you might end up with more donuts.
But then I also think… Does the square or round donut have a greater donut volume? Is the number of donuts better than the entire donut mass as a whole?
A round donut with radius R[1] occupies the same space as a square donut with side 2R[1]. If the center hole of the round donut has a radius R[2] and the hole of the square donut has a side 2R[2], then the area of the round donut is πR[1]^2 - πR[2]^2. The area of the square donut would then be 4R[1]^2 - 4R[2]^2. This doesn’t say much yet, but in general and throwing numbers, a full box of square donuts has more donut per donut than a full box of round donuts.
The interesting thing is knowing exactly how much more donut per donut we have. Assuming first a small center hole (R[2] = R[1]/4) and replacing in the proper expressions, we have 27,3% more donut in the square one (Round: 15πR[1]^2/16 ≃ 2,95R[1]^2, square: 15R[1]^2/4 = 3,75R[1]^2). Now, assuming a large center hole (R[2] = 3R[1]/4) we again have 27,3% more donut in the square one (Round: 7πR[1]^2/16 ≃ 1,37R[1]^2, square: 7R[1]^2/4 = 1,75R[1]^2). This is no accident: the hole term cancels in the ratio, which is always (4R[1]^2 - 4R[2]^2)/(πR[1]^2 - πR[2]^2) = 4/π ≃ 1,273. So a square donut has exactly 4/π - 1 ≈ 27,3% more donut than the round one filling the same space, whatever the hole size.
tl;dr: Square donuts have about 27% more donut per donut in the same space as a round one.
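The arithmetic checks out in a few lines — my own verification sketch in Python, not part of the original post:

```python
import math

def round_donut(R1, R2):   return math.pi * (R1**2 - R2**2)
def square_donut(R1, R2):  return 4 * (R1**2 - R2**2)

for R2 in (0.25, 0.75):                        # small and large holes, R1 = 1
    print(R2, square_donut(1, R2) / round_donut(1, R2))
print(4 / math.pi)                             # 1.2732...: hole size cancels
```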
Thank you donut side of Tumblr.
This is something that will make me smile even in my worst moods.
Fav pics of Queenie.
my monarch, ladies and gents
i think the queen is my spirit animal
GOD SAVE THE QUEEN
So I saw Emma and Andrew twice today. Once during the photocall where Andrew highfived me and Emma waved at me.
And then in the evening at the premiere Andrew hugged me and Emma was like the sweetest person ever and she took a photo with me. I couldn’t be happier!
when ur entire class didnt do the homework
“My sister’s boyfriend, Fox, on his last day of high school. The sun was setting, and he and his friends were all playing around. I caught him in a moment of reflection.” By Petra Collins
DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS.
DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS. DRAGONS!!
a+ for the last one
Math Help
sin(x) is defined for all real numbers, so are sin(3x) and -4sin(3x)
Last edited by papex; May 28th 2010 at 08:14 PM.
Hello winsome!

Do you understand what is meant by the word domain of a function? It is the set of valid inputs to that function. For example, the domain of $\sqrt x$ is the set of non-negative numbers.

Now, as papex has said, $\sin x$, and therefore $\sin 3x$ and $-4\sin 3x$, are defined for all values of $x$. In other words, the domain of these functions is the set of all real numbers.

There are various ways of writing this set. One is to use the symbol $\mathbb{R}$. Another is to use interval notation: $(-\infty, +\infty)$.

Grandad
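A quick empirical check of the answers above (Python): $-4\sin 3x$ accepts any real input, since there is no division, even root, or logarithm to restrict the domain.

```python
import math

f = lambda x: -4 * math.sin(3 * x)
for x in (-1e9, -3.2, 0.0, 2.5, 1e9):
    print(x, f(x))          # defined everywhere; outputs stay in [-4, 4]
```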
volume of cylinder
1. December 10th 2008, 06:58 AM #1
volume of cylinder
Last edited by andyboy179; December 10th 2008 at 07:16 AM. Reason: wrote something wrong
2. December 10th 2008, 07:21 AM #2
This is pretty much "plug and chug" into the formulas.
The volume of a cylinder is $\pi r^2h$ and the [surface] area is $2\pi r h + 2\pi r^2$. You are given what $r$ and $h$ are.
Can you try to take it from here?
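To make the "plug and chug" concrete, here is a quick numerical check in Python. The radius and height are made-up values, since the original question was edited out of the thread:

```python
import math

# Hypothetical dimensions -- the thread's actual numbers were removed
r, h = 3.0, 5.0

volume = math.pi * r**2 * h                          # pi r^2 h
surface = 2 * math.pi * r * h + 2 * math.pi * r**2   # side plus the two end caps

print(round(volume, 2))   # 141.37
print(round(surface, 2))  # 150.8
```

Plugging whatever r and h the problem gives into the two formulas works the same way.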
Two facts of limit
August 9th 2009, 08:15 PM #1
Super Member
Mar 2006
A simple limit fact
I'm reading some probability and statistics, and ran into two limit facts when I got to the part about the Poisson probability function proof.
It says that $\lim _ {z \rightarrow 0 } (1-z)^{- \frac {1}{z} } = e$
It is almost embarrassing for me to ask, as I do remember encountering this problem when I took calc, and you would expect someone who finished Real Analysis to be able to solve them.
So far, for the first one, I used the l'Hôpital's rule with natural log, but then I have $\lim _ {z \rightarrow 0 } ( \frac {1}{z^2} ) ( \frac {-1}{1-z}) = \infty (-1)$, something was wrong.
But I forgot how to do it, any help?
Thank you.
$\lim_{z \to 0} \ln (1-z)^{-1/z}$
$=\lim_{z \to 0} \frac{-1}{z} \ln (1-z)$
$= \lim_{z \to 0} \frac{\ln (1-z)}{-z}$
$= \lim_{z \to 0} \frac{\frac{1}{1-z}(-1)}{-1}$
$= \lim_{z \to 0} \frac {1}{1-z} = 1$
so $\lim_{z \to 0} (1-z)^{-1/z} = e^{1} = e$
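The algebra can also be sanity-checked numerically; in Python (used here just for illustration), $(1-z)^{-1/z}$ creeps toward $e$ as $z$ shrinks:

```python
import math

for z in (1e-2, 1e-4, 1e-6):
    val = (1 - z) ** (-1 / z)
    print(z, val)  # approaches e = 2.71828... from above as z -> 0

print(round(math.e, 6))  # 2.718282
```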
August 9th 2009, 08:37 PM #2
October 19th 2008, 06:12 AM #1
Oct 2008
Let 1 <= r <= n. A subset of {1, 2, ..., n} of cardinality r is chosen at random. How do I calculate the probability that 1 is an element of the chosen subset?
The number of subsets of cardinality r is
${{n} \choose {r}}$
and the number of those that do contain 1 is ${{n-1} \choose {r-1}}$ (choose the other r - 1 elements from the remaining n - 1), as Plato told you.
The probability of choosing a set that contains 1 is therefore ${{n-1} \choose {r-1}}/{{n} \choose {r}} = r/n$.
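Since the question asks for the probability that 1 lands in the chosen subset, and that probability simplifies to r/n, here is a quick check in Python (n = 10 and r = 3 are arbitrary example values):

```python
from math import comb

n, r = 10, 3  # arbitrary example with 1 <= r <= n

# subsets of size r that contain 1, divided by all subsets of size r
p = comb(n - 1, r - 1) / comb(n, r)
print(p)      # 0.3
print(r / n)  # 0.3 -- the simplified closed form
```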
Last edited by vincisonfire; October 19th 2008 at 09:27 AM.
October 19th 2008, 08:58 AM #2
October 19th 2008, 09:16 AM #3
Bulk modulus
Bulk modulus values for some example substances
Water 2.2×10^9 Pa (value increases at higher pressures)
Air 1.42×10^5 Pa (adiabatic bulk modulus)
Air 1.01×10^5 Pa (constant temperature bulk modulus)
Steel 1.6×10^11 Pa
Glass 3.5×10^10 to 5.5×10^10 Pa
Solid helium 5×10^7 Pa (approximate)
The bulk modulus (K) of a substance essentially measures the substance's resistance to uniform compression. It is defined as the pressure increase needed to effect a given relative decrease in volume.
As an example, suppose an iron cannon ball with bulk modulus 160 GPa (gigapascal) is to be reduced in volume by 0.5%. This requires a pressure increase of 0.005×160 GPa = 0.8 GPa. If the cannon ball
is subjected to a pressure increase of only 100 MPa, it will decrease in volume by a factor of 100 MPa/160 GPa = 0.000625, or 0.0625%.
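The arithmetic in the cannon-ball example is easy to script; a small Python check:

```python
K = 160e9    # bulk modulus of iron, Pa
dp = 100e6   # applied pressure increase, Pa

rel_decrease = dp / K  # relative volume decrease dV/V = dp/K
print(rel_decrease)           # 0.000625
print(f"{rel_decrease:.4%}")  # 0.0625%
```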
The bulk modulus K can be formally defined by the equation:
$K=-V\frac{\partial p}{\partial V}$
where p is pressure, V is volume, and ∂p/∂V denotes the partial derivative of pressure with respect to volume. The inverse of the bulk modulus gives a substance's compressibility.
Other moduli describe the material's response (strain) to other kinds of stress: the shear modulus describes the response to shear, and Young's modulus describes the response to linear strain. For a
fluid, only the bulk modulus is meaningful. For an anisotropic solid such as wood or paper, these three moduli do not contain enough information to describe its behaviour, and one must use the full
generalized Hooke's law.
Strictly speaking, the bulk modulus is a thermodynamic quantity, and it is necessary to specify how the temperature varies in order to specify a bulk modulus: constant-temperature (K[T]), constant-
enthalpy (adiabatic K[S]), and other variations are possible. In practice, such distinctions are usually only relevant for gases.
For a gas, the adiabatic bulk modulus K[S] is approximately given by
$K_S=\kappa\, p$
where κ is the adiabatic index (sometimes called γ) and p is the pressure.
In a fluid, the bulk modulus K and the density ρ determine the speed of sound c (pressure waves), according to the formula
$c=\sqrt{\frac{K}{\rho}}$
Solids can also sustain transverse waves; for these, one additional elastic modulus, for example the shear modulus, is needed to determine wave speeds.
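As a concrete check of the fluid sound-speed relation $c=\sqrt{K/\rho}$, take water from the table above (Python, purely for illustration):

```python
import math

K = 2.2e9      # bulk modulus of water, Pa (from the table above)
rho = 1000.0   # density of water, kg/m^3

c = math.sqrt(K / rho)
print(round(c))  # 1483 -- close to the measured ~1480 m/s
```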
• Bulk Elastic Properties on hyperphysics at Georgia State University
1. ^ Bulk modulus calculation of glasses
Elastic moduli for homogeneous isotropic materials
Bulk modulus (K) | Young's modulus (E) | Lamé's first parameter (λ) | Shear modulus (μ) | Poisson's ratio (ν) | P-wave modulus (M)
Conversion formulas
Homogeneous isotropic linear elastic materials have their elastic properties uniquely determined by any two moduli among these, thus given any two, any other of the elastic moduli can be calculated
according to these formulas.
Below, each modulus is expressed in terms of each pair from which it can be computed; the pairs are $(\lambda,\mu)$, $(E,\mu)$, $(K,\lambda)$, $(K,\mu)$, $(\lambda,\nu)$, $(\mu,\nu)$, $(E,\nu)$, $(K,\nu)$ and $(K,E)$:

$K = \lambda + \frac{2\mu}{3} = \frac{E\mu}{3(3\mu-E)} = \frac{\lambda(1+\nu)}{3\nu} = \frac{2\mu(1+\nu)}{3(1-2\nu)} = \frac{E}{3(1-2\nu)}$

$E = \frac{\mu(3\lambda+2\mu)}{\lambda+\mu} = \frac{9K(K-\lambda)}{3K-\lambda} = \frac{9K\mu}{3K+\mu} = \frac{\lambda(1+\nu)(1-2\nu)}{\nu} = 2\mu(1+\nu) = 3K(1-2\nu)$

$\lambda = \frac{\mu(E-2\mu)}{3\mu-E} = K - \frac{2\mu}{3} = \frac{2\mu\nu}{1-2\nu} = \frac{E\nu}{(1+\nu)(1-2\nu)} = \frac{3K\nu}{1+\nu} = \frac{3K(3K-E)}{9K-E}$

$\mu = \frac{3(K-\lambda)}{2} = \frac{\lambda(1-2\nu)}{2\nu} = \frac{E}{2(1+\nu)} = \frac{3K(1-2\nu)}{2(1+\nu)} = \frac{3KE}{9K-E}$

$\nu = \frac{\lambda}{2(\lambda+\mu)} = \frac{E}{2\mu} - 1 = \frac{\lambda}{3K-\lambda} = \frac{3K-2\mu}{2(3K+\mu)} = \frac{3K-E}{6K}$

$M = \lambda + 2\mu = \frac{\mu(4\mu-E)}{3\mu-E} = 3K - 2\lambda = K + \frac{4\mu}{3} = \frac{\lambda(1-\nu)}{\nu} = \frac{2\mu(1-\nu)}{1-2\nu} = \frac{E(1-\nu)}{(1+\nu)(1-2\nu)} = \frac{3K(1-\nu)}{1+\nu} = \frac{3K(3K+E)}{9K-E}$
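These conversions are straightforward to script. A small Python helper (illustrative only; the steel-like inputs E = 200 GPa, ν = 0.3 are just example numbers) derives the other moduli from $(E,\,\nu)$:

```python
def moduli_from_E_nu(E, nu):
    """Derive the remaining isotropic elastic moduli from Young's modulus E
    and Poisson's ratio nu via the standard conversion formulas."""
    mu = E / (2 * (1 + nu))                    # shear modulus
    K = E / (3 * (1 - 2 * nu))                 # bulk modulus
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
    M = K + 4 * mu / 3                         # P-wave modulus
    return K, lam, mu, M

K, lam, mu, M = moduli_from_E_nu(200e9, 0.3)
print(round(K / 1e9, 1))  # 166.7 (GPa)
```

The results are mutually consistent: the computed M also equals λ + 2μ, as the table requires.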
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Bulk_modulus". A list of authors is available in Wikipedia.
Marketing Mix Lab: Generating Artificial Sales Data
Our statistics lecturers would often end each session with a demonstration of the power of the statistical model under discussion. This would usually mean generating some artificial data and showing
how good the tool was at recovering the parameters or correctly classifying the observations. It was highly artificial but had a very useful feature: you knew the true mechanism behind the data so
you could see how good your model was at getting at the truth.
We work with marketing data, building models to understand the effect of marketing activity on sales. Of course here, as in any real world situation, we don’t know which mechanism generated the data
(that’s what we are trying to find out). But we can get an idea of how good our tools are by testing them out on artificial data in the way we described above. If they don’t work here in these highly
idealised situations then we ought to be concerned.
In this series I’m going to take some very simple simulated data sets and look at how well some of the best known marketing mix modelling techniques do at getting back to the true values. I will
start by looking at LSDV (Least Squares Dummy Variable) models and then move on to mixed effects and Bayesian modelling.
There’s one other thing worth mentioning before we get started. With our simulated data sets we are able to turn the usual situation on its head and vary the data set rather than the modelling
approach. This means we can ask questions like: under what conditions do our models work best?
Building an artificial data set
Our world will be very simple. Weekly sales will follow an overall linear trend to which we will add an annual seasonal cycle which we imagine to be a function of temperature (simulated using a sine
wave). On top of that we need some marketing activity which we will add as TV adstock. Finally we will add some noise by simulating from a normal distribution. The final data generating equation
looks like this:
$sales_t = \alpha + \theta_1 week_t + \theta_2 temp_t + \theta_3 adstock_t + \epsilon_t$
where $\epsilon \sim N(0, \sigma^2)$
and adstock is defined recursively as
$adstock_t= 1-e^{-\frac{GRPs_t}{\phi}} + \lambda adstock_{t-1}$
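The recursion is easy to sketch in code. Here it is in Python purely as an illustration (the post's own implementation is in R, and the GRP and parameter values below are made up); dim plays the role of $\phi$ and dec the role of $\lambda$:

```python
import math

def adstock(grps, dim, dec):
    """adstock_t = 1 - exp(-GRPs_t / dim) + dec * adstock_{t-1}"""
    out, prev = [], 0.0
    for g in grps:
        prev = 1 - math.exp(-g / dim) + dec * prev
        out.append(prev)
    return out

# One burst of TV activity, then nothing: the effect decays geometrically
print([round(a, 3) for a in adstock([100, 0, 0, 0], dim=100, dec=0.5)])
# [0.632, 0.316, 0.158, 0.079]
```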
I have generated this data set in R (we will use R throughout – if you are unfamiliar with this language please see the R homepage).
It would also be nice if we could vary the parameters to generate different sets of data so I have created the whole thing as an R function with the parameters as arguments.
# *--------------------------------------------------------------------
# | FUNCTION: create_test_sets
# | Creates simple artifical marketing mix data for testing code and
# | techniques
# *--------------------------------------------------------------------
# | Version |Date |Programmer |Details of Change
# | 01 |29/11/2011|Simon Raper |first version.
# *--------------------------------------------------------------------
# | INPUTS: base_p Number of base sales
# | trend_p Increase in sales for every unit increase
# | in time
# | season_p The seasonality effect will be
# | season_p*temp where -10<temp<10
# | ad_p The coefficient for the adstock
# | dim The dim parameter in adstock (see below)
# | dec The dec parameter in adstock (see below)
# | adstock_form If 1 then the form is:
# | ad_p*(1-exp(-GRPs/dim)+dec*adstock_t-1)
# | If 2 then the form is:
# | ad_p*(1-exp(-(GRPs+dec*GRPs_t-1)/dim)
# | Default is 1.
# | error_std Standard deviation of the noise
# *--------------------------------------------------------------------
# | OUTPUTS: dataframe Consists of sales, temp, tv_grps, week,
# | adstock
# |
# *--------------------------------------------------------------------
# | USAGE: create_test_sets(base_p,
# | trend_p,
# | season_p,
# | ad_p,
# | dim,
# | dec,
# | adstock_form,
# | error_std)
# |
# *--------------------------------------------------------------------
# | DEPENDS: None
# |
# *--------------------------------------------------------------------
# | NOTES: Usually the test will consists of trying to predict sales
# | using temp, tv_grps, week and recover the parameters.
# |
# *--------------------------------------------------------------------
#Adstock functions
adstock_calc_1<-function(media_var, dec, dim){
    n<-length(media_var)
    adstock<-rep(0, n)
    adstock[1]<-1-exp(-media_var[1]/dim)
    for(i in 2:n){
        adstock[i]<-1-exp(-media_var[i]/dim)+dec*adstock[i-1]
    }
    adstock
}

adstock_calc_2<-function(media_var, dec, dim){
    n<-length(media_var)
    adstock<-rep(0, n)
    adstock[1]<-1-exp(-media_var[1]/dim)
    for(i in 2:n){
        adstock[i]<-1-exp(-(media_var[i]+dec*media_var[i-1])/dim)
    }
    adstock
}

#Function for creating test sets
create_test_sets<-function(base_p, trend_p, season_p, ad_p, dim, dec, adstock_form, error_std){

    #National level model
    #Five years of weekly data
    week<-1:(5*52)

    #Base sales of base_p units
    base<-rep(base_p, 5*52)

    #Trend of trend_p extra units per week
    trend<-trend_p*week

    #Winter is season_p*10 units below, summer is season_p*10 units above
    temp<-10*sin(week*2*pi/52)
    seasonality<-season_p*temp

    #7 TV campaigns. Carry over is dec, theta is dim, beta is ad_p
    #(the GRP schedule below is illustrative - the original values were lost)
    tv_grps<-rep(0, 5*52)
    for (start in seq(10, 250, by=40)){
        tv_grps[start:(start+3)]<-c(300, 250, 150, 50)
    }

    if (adstock_form==2){adstock<-adstock_calc_2(tv_grps, dec, dim)}
    else {adstock<-adstock_calc_1(tv_grps, dec, dim)}

    #Error has a std of error_std
    error<-rnorm(5*52, mean=0, sd=error_std)

    #Full series
    sales<-base+trend+seasonality+ad_p*adstock+error
    #plot(sales, type='l', ylim=c(0,1200))

    output<-data.frame(sales, temp, tv_grps, week, adstock)
    output
}
Here is a line graph showing a simulated sales series generated with the following parameters:
#Plot the simulated sales
ggplot(data=test, aes(x=week, y=sales))+geom_line(size=1)+ opts(title ="Simulated Sales Data")
I’ve found these simulated data sets useful not only for experiments but also for debugging code (since we know exactly what to expect from them) and as toy examples to give to trainee analysts as
templates for future models.
With marketing mix models we often work with hierarchical data (e.g. sales in stores in regions). In the next post I will provide some code to build regional data sets. Following that we will get to
work on the modelling.
5 thoughts on “Marketing Mix Lab: Generating Artificial Sales Data”
1. Fantastic stuff, cant wait to see more! I have no experience in market mix modeling but do in CRM/database marketing and looking to expand my skill set from modeling that sort of environment to
this. Happy I stumbled upon your blog!
Posted by | February 8, 2012, 4:11 pm
□ Thanks Jeff. That’s very nice to hear. I’ve posted a couple more items on this subject (there’s one on ridge regression and one on visualising multi-collinearity) and I hope to add some more
soon. Good luck with expanding your skills into market mix modelling.
Posted by | February 10, 2012, 8:18 am
2. Hey Simon, I am wondering if you can recommend any good texts to learn market mix modeling? Also, can you recommend a technique to use – do you typically use Arima with regressors or gls for
this? Any recommendations for learning is appreciated. Thanks!
Posted by | July 19, 2012, 11:59 pm
□ Thanks!
Posted by | July 19, 2012, 11:59 pm
3. HI, thanks for the nice article. Have you created any R package to implement the MMM?
Posted by | November 18, 2013, 1:09 pm
free body diagram help
There are three forces acting at H. First there is a horizontal force acting toward point G. Second, there is a vertical force due to the weight of K. Third there is a force toward point N. The
magnitude of the downward force is, of course, mg. The magnitude of the force toward N is the same as the weight of the mass at L. Decompose the force toward N into its horizontal and vertical components.
Large Dynamical Systems
Dr. Neil Balmforth
Large Dynamical Systems:
The atmosphere or the ocean is an extremely large dynamical system with many degrees of freedom. This size makes it impractical to model these systems on a first principles basis without some severe
approximations. Moreover, state-of-the-art models are highly complicated dynamical systems themselves, which makes it challenging to assess the robustness of the models to variations in all the input
parameters and physics, and to test methods of reconstruction and prediction. Instead, one can work with smaller, more accessible systems to examine the viability of a physical process or the utility
of a technical method. One option is to gradually build up large complex systems by coupling together a number of much simpler subsystems. These coupled ensembles act as metaphors for the complex
systems that we encounter in nature. For example, ensembles of coupled oscillators can be used to explore the phenomenon of synchronization that is observed in biological populations such as flashing
fireflies and cooperative cells. And with lattices of coupled maps and simple chaotic systems, one can test the feasibility of unravelling and reconstructing the underlying governing dynamics from an
observed signal.
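To make the coupled-oscillator idea concrete, here is a minimal mean-field Kuramoto-style sketch in Python. It is purely illustrative, not code from this research page, and the coupling strength, frequency spread and step size are made-up values:

```python
import math, random

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of dtheta_i/dt = omega_i + K*r*sin(psi - theta_i),
    where r*exp(i*psi) is the population's mean phase (the order parameter)."""
    n = len(theta)
    cx = sum(math.cos(t) for t in theta) / n
    sx = sum(math.sin(t) for t in theta) / n
    r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
    return [t + dt * (w + K * r * math.sin(psi - t)) for t, w in zip(theta, omega)]

random.seed(0)
n = 100
theta = [random.uniform(-math.pi, math.pi) for _ in range(n)]  # random initial phases
omega = [random.gauss(0, 0.1) for _ in range(n)]               # similar natural frequencies

for _ in range(2000):  # integrate to t = 100 with dt = 0.05
    theta = kuramoto_step(theta, omega, K=1.0, dt=0.05)

cx = sum(math.cos(t) for t in theta) / n
sx = sum(math.sin(t) for t in theta) / n
print(round(math.hypot(cx, sx), 2))  # order parameter: near 1 once the population synchronizes
```

With the coupling set well above the synchronization threshold, the order parameter climbs from near 0 (incoherent random phases) toward 1.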
My collaborators in these efforts include Antonello Provenzale, Roberto Sassi and Ed Spiegel
Static, steady and chaotic fronts in a network of Lorenz system
(colormaps of the x-coordinate of all the subsystems on the (n,t)-plane for three networks with different coupling strengths)
Moore-Spiegel attractor
(phase portrait and time series of the Moore-Spiegel strange attractor)
Patterns of synchrony in a population of coupled oscillators
(population density on the phase-time plane for 100 oscillators coupled through a mean field)
For a Few Monads More
We've seen how monads can be used to take values with contexts and apply them to functions and how using >>= or do notation allows us to focus on the values themselves while the context gets handled
for us.
We've met the Maybe monad and seen how it adds a context of possible failure to values. We've learned about the list monad and saw how it lets us easily introduce non-determinism into our programs.
We've also learned how to work in the IO monad, even before we knew what a monad was!
In this chapter, we're going to learn about a few other monads. We'll see how they can make our programs clearer by letting us treat all sorts of values as monadic ones. Exploring a few monads more
will also solidify our intuition for monads.
The monads that we'll be exploring are all part of the mtl package. A Haskell package is a collection of modules. The mtl package comes with the Haskell Platform, so you probably already have it. To
check if you do, type ghc-pkg list in the command-line. This will show which Haskell packages you have installed and one of them should be mtl, followed by a version number.
Writer? I hardly know her!
We've loaded our gun with the Maybe monad, the list monad and the IO monad. Now let's put the Writer monad in the chamber and see what happens when we fire it!
Whereas Maybe is for values with an added context of failure and the list is for non-deterministic values, the Writer monad is for values that have another value attached that acts as a sort of log
value. Writer allows us to do computations while making sure that all the log values are combined into one log value that then gets attached to the result.
For instance, we might want to equip our values with strings that explain what's going on, probably for debugging purposes. Consider a function that takes a number of bandits in a gang and tells us
if that's a big gang or not. That's a very simple function:
isBigGang :: Int -> Bool
isBigGang x = x > 9
Now, what if instead of just giving us a True or False value, we want it to also return a log string that says what it did? Well, we just make that string and return it along side our Bool:
isBigGang :: Int -> (Bool, String)
isBigGang x = (x > 9, "Compared gang size to 9.")
So now instead of just returning a Bool, we return a tuple where the first component of the tuple is the actual value and the second component is the string that accompanies that value. There's some
added context to our value now. Let's give this a go:
ghci> isBigGang 3
(False,"Compared gang size to 9.")
ghci> isBigGang 30
(True,"Compared gang size to 9.")
So far so good. isBigGang takes a normal value and returns a value with a context. As we've just seen, feeding it a normal value is not a problem. Now what if we already have a value that has a log
string attached to it, such as (3, "Smallish gang."), and we want to feed it to isBigGang? It seems like once again, we're faced with this question: if we have a function that takes a normal value
and returns a value with a context, how do we take a value with a context and feed it to the function?
When we were exploring the Maybe monad, we made a function applyMaybe, which took a Maybe a value and a function of type a -> Maybe b and fed that Maybe a value into the function, even though the
function takes a normal a instead of a Maybe a. It did this by minding the context that comes with Maybe a values, which is that they are values with possible failure. But inside the a -> Maybe b
function, we were able to treat that value as just a normal value, because applyMaybe (which later became >>=) took care of checking if it was a Nothing or a Just value.
In the same vein, let's make a function that takes a value with an attached log, that is, an (a,String) value and a function of type a -> (b,String) and feeds that value into the function. We'll call
it applyLog. But because an (a,String) value doesn't carry with it a context of possible failure, but rather a context of an additional log value, applyLog is going to make sure that the log of the
original value isn't lost, but is joined together with the log of the value that results from the function. Here's the implementation of applyLog:
applyLog :: (a,String) -> (a -> (b,String)) -> (b,String)
applyLog (x,log) f = let (y,newLog) = f x in (y,log ++ newLog)
When we have a value with a context and we want to feed it to a function, we usually try to separate the actual value from the context and then try to apply the function to the value and then see
that the context is taken care of. In the Maybe monad, we checked if the value was a Just x and if it was, we took that x and applied the function to it. In this case, it's very easy to find the
actual value, because we're dealing with a pair where one component is the value and the other a log. So first we just take the value, which is x and we apply the function f to it. We get a pair of
(y,newLog), where y is the new result and newLog the new log. But if we returned that as the result, the old log value wouldn't be included in the result, so we return a pair of (y,log ++ newLog). We
use ++ to append the new log to the old one.
Here's applyLog in action:
ghci> (3, "Smallish gang.") `applyLog` isBigGang
(False,"Smallish gang.Compared gang size to 9.")
ghci> (30, "A freaking platoon.") `applyLog` isBigGang
(True,"A freaking platoon.Compared gang size to 9.")
The results are similar to before, only now the number of people in the gang had its accompanying log and it got included in the result log. Here are a few more examples of using applyLog:
ghci> ("Tobin","Got outlaw name.") `applyLog` (\x -> (length x, "Applied length."))
(5,"Got outlaw name.Applied length.")
ghci> ("Bathcat","Got outlaw name.") `applyLog` (\x -> (length x, "Applied length"))
(7,"Got outlaw name.Applied length")
See how inside the lambda, x is just a normal string and not a tuple and how applyLog takes care of appending the logs.
Monoids to the rescue
Be sure you know what monoids are at this point! Cheers.
Right now, applyLog takes values of type (a,String), but is there a reason that the log has to be a String? It uses ++ to append the logs, so wouldn't this work on any kind of list, not just a list
of characters? Sure it would. We can go ahead and change its type to this:
applyLog :: (a,[c]) -> (a -> (b,[c])) -> (b,[c])
Now, the log is a list. The type of values contained in the list has to be the same for the original list as well as for the list that the function returns, otherwise we wouldn't be able to use ++ to
stick them together.
Would this work for bytestrings? There's no reason it shouldn't. However, the type we have now only works for lists. It seems like we'd have to make a separate applyLog for bytestrings. But wait!
Both lists and bytestrings are monoids. As such, they are both instances of the Monoid type class, which means that they implement the mappend function. And for both lists and bytestrings, mappend is
for appending. Watch:
ghci> [1,2,3] `mappend` [4,5,6]
ghci> B.pack [99,104,105] `mappend` B.pack [104,117,97,104,117,97]
Chunk "chi" (Chunk "huahua" Empty)
Cool! Now our applyLog can work for any monoid. We have to change the type to reflect this, as well as the implementation, because we have to change ++ to mappend:
applyLog :: (Monoid m) => (a,m) -> (a -> (b,m)) -> (b,m)
applyLog (x,log) f = let (y,newLog) = f x in (y,log `mappend` newLog)
Because the accompanying value can now be any monoid value, we no longer have to think of the tuple as a value and a log, but now we can think of it as a value with an accompanying monoid value. For
instance, we can have a tuple that has an item name and an item price as the monoid value. We just use the Sum newtype to make sure that the prices get added as we operate with the items. Here's a
function that adds drink to some cowboy food:
import Data.Monoid
type Food = String
type Price = Sum Int
addDrink :: Food -> (Food,Price)
addDrink "beans" = ("milk", Sum 25)
addDrink "jerky" = ("whiskey", Sum 99)
addDrink _ = ("beer", Sum 30)
We use strings to represent foods and an Int in a Sum newtype wrapper to keep track of how many cents something costs. Just a reminder, doing mappend with Sum results in the wrapped values getting
added together:
ghci> Sum 3 `mappend` Sum 9
Sum {getSum = 12}
The addDrink function is pretty simple. If we're eating beans, it returns "milk" along with Sum 25, so 25 cents wrapped in Sum. If we're eating jerky we drink whiskey and if we're eating anything
else we drink beer. Just normally applying this function to a food wouldn't be terribly interesting right now, but using applyLog to feed a food that comes with a price itself into this function is interesting:
ghci> ("beans", Sum 10) `applyLog` addDrink
("milk",Sum {getSum = 35})
ghci> ("jerky", Sum 25) `applyLog` addDrink
("whiskey",Sum {getSum = 124})
ghci> ("dogmeat", Sum 5) `applyLog` addDrink
("beer",Sum {getSum = 35})
Milk costs 25 cents, but if we eat it with beans that cost 10 cents, we'll end up paying 35 cents. Now it's clear how the attached value doesn't always have to be a log, it can be any monoid value
and how two such values are combined into one depends on the monoid. When we were doing logs, they got appended, but now, the numbers are being added up.
Because the value that addDrink returns is a tuple of type (Food,Price), we can feed that result to addDrink again, so that it tells us what we should drink along with our drink and how much that
will cost us. Let's give it a shot:
ghci> ("dogmeat", Sum 5) `applyLog` addDrink `applyLog` addDrink
("beer",Sum {getSum = 65})
Adding a drink to some dog meat results in a beer and an additional 30 cents, so ("beer", Sum 35). And if we use applyLog to feed that to addDrink, we get another beer and the result is ("beer", Sum 65).
The Writer type
Now that we've seen that a value with an attached monoid acts like a monadic value, let's examine the Monad instance for types of such values. The Control.Monad.Writer module exports the Writer w a
type along with its Monad instance and some useful functions for dealing with values of this type.
First, let's examine the type itself. To attach a monoid to a value, we just need to put them together in a tuple. The Writer w a type is just a newtype wrapper for this. Its definition is very
newtype Writer w a = Writer { runWriter :: (a, w) }
It's wrapped in a newtype so that it can be made an instance of Monad and that its type is separate from a normal tuple. The a type parameter represents the type of the value and the w type parameter
the type of the attached monoid value.
Its Monad instance is defined like so:
instance (Monoid w) => Monad (Writer w) where
return x = Writer (x, mempty)
(Writer (x,v)) >>= f = let (Writer (y, v')) = f x in Writer (y, v `mappend` v')
First off, let's examine >>=. Its implementation is essentially the same as applyLog, only now that our tuple is wrapped in the Writer newtype, we have to unwrap it when pattern matching. We take the
value x and apply the function f to it. This gives us a Writer w a value and we use a let expression to pattern match on it. We present y as the new result and use mappend to combine the old monoid
value with the new one. We pack that up with the result value in a tuple and then wrap that with the Writer constructor so that our result is a Writer value instead of just an unwrapped tuple.
So, what about return? It has to take a value and put it in a default minimal context that still presents that value as the result. So what would such a context be for Writer values? If we want the
accompanying monoid value to affect other monoid values as little as possible, it makes sense to use mempty. mempty is used to present identity monoid values, such as "" and Sum 0 and empty
bytestrings. Whenever we use mappend between mempty and some other monoid value, the result is that other monoid value. So if we use return to make a Writer value and then use >>= to feed that value
to a function, the resulting monoid value will be only what the function returns. Let's use return on the number 3 a bunch of times, only we'll pair it with a different monoid every time:
ghci> runWriter (return 3 :: Writer String Int)
ghci> runWriter (return 3 :: Writer (Sum Int) Int)
(3,Sum {getSum = 0})
ghci> runWriter (return 3 :: Writer (Product Int) Int)
(3,Product {getProduct = 1})
Because Writer doesn't have a Show instance, we had to use runWriter to convert our Writer values to normal tuples that can be shown. For String, the monoid value is the empty string. With Sum, it's
0, because if we add 0 to something, that something stays the same. For Product, the identity is 1.
The Writer instance doesn't feature an implementation for fail, so if a pattern match fails in do notation, error is called.
Using do notation with Writer
Now that we have a Monad instance, we're free to use do notation for Writer values. It's handy for when we have several Writer values and we want to do stuff with them. Like with other monads, we
can treat them as normal values and the context gets taken care of for us. In this case, all the monoid values that come attached get mappended and so are reflected in the final result. Here's a simple
example of using do notation with Writer to multiply two numbers:
import Control.Monad.Writer
logNumber :: Int -> Writer [String] Int
logNumber x = Writer (x, ["Got number: " ++ show x])
multWithLog :: Writer [String] Int
multWithLog = do
a <- logNumber 3
b <- logNumber 5
return (a*b)
logNumber takes a number and makes a Writer value out of it. For the monoid, we use a list of strings and we equip the number with a singleton list that just says that we have that number.
multWithLog is a Writer value which multiplies 3 and 5 and makes sure that their attached logs get included in the final log. We use return to present a*b as the result. Because return just takes
something and puts it in a minimal context, we can be sure that it won't add anything to the log. Here's what we see if we run this:
ghci> runWriter multWithLog
(15,["Got number: 3","Got number: 5"])
Sometimes we just want some monoid value to be included at some particular point. For this, the tell function is useful. It's part of the MonadWriter type class and in the case of Writer it takes a
monoid value, like ["This is going on"] and creates a Writer value that presents the dummy value () as its result but has our desired monoid value attached. When we have a monadic value that has ()
as its result, we don't bind it to a variable. Here's multWithLog but with some extra reporting included:
multWithLog :: Writer [String] Int
multWithLog = do
    a <- logNumber 3
    b <- logNumber 5
    tell ["Gonna multiply these two"]
    return (a*b)
It's important that return (a*b) is the last line, because the result of the last line in a do expression is the result of the whole do expression. Had we put tell as the last line, () would have
been the result of this do expression. We'd lose the result of the multiplication. However, the log would be the same. Here is this in action:
ghci> runWriter multWithLog
(15,["Got number: 3","Got number: 5","Gonna multiply these two"])
Adding logging to programs
Euclid's algorithm is an algorithm that takes two numbers and computes their greatest common divisor. That is, the biggest number that still divides both of them. Haskell already features the gcd
function, which does exactly this, but let's implement our own and then equip it with logging capabilities. Here's the normal algorithm:
gcd' :: Int -> Int -> Int
gcd' a b
    | b == 0 = a
    | otherwise = gcd' b (a `mod` b)
The algorithm is very simple. First, it checks if the second number is 0. If it is, then the result is the first number. If it isn't, then the result is the greatest common divisor of the second
number and the remainder of dividing the first number by the second one. For instance, if we want to know what the greatest common divisor of 8 and 3 is, we just follow the algorithm outlined.
Because 3 isn't 0, we have to find the greatest common divisor of 3 and 2 (if we divide 8 by 3, the remainder is 2). 2 still isn't 0 either, so now we have 2 and 1 (3 mod 2 is 1). The second
number still isn't 0, so we run the algorithm again for 1 and 0, as dividing 2 by 1 gives us a remainder of 0. And finally, because the second number is now 0, the final
result is 1. Let's see if our code agrees:
ghci> gcd' 8 3
1
It does. Very good! Now, we want to equip our result with a context, and the context will be a monoid value that acts as a log. Like before, we'll use a list of strings as our monoid. So the type of
our new gcd' function should be:
gcd' :: Int -> Int -> Writer [String] Int
All that's left now is to equip our function with log values. Here's the code:
import Control.Monad.Writer
gcd' :: Int -> Int -> Writer [String] Int
gcd' a b
    | b == 0 = do
        tell ["Finished with " ++ show a]
        return a
    | otherwise = do
        tell [show a ++ " mod " ++ show b ++ " = " ++ show (a `mod` b)]
        gcd' b (a `mod` b)
This function takes two normal Int values and returns a Writer [String] Int, that is, an Int that has a log context. In the case where b is 0, instead of just giving a as the result, we use a do
expression to put together a Writer value as a result. First we use tell to report that we're finished and then we use return to present a as the result of the do expression. Instead of this do
expression, we could have also written this:
Writer (a, ["Finished with " ++ show a])
However, I think the do expression is easier to read. Next, we have the case when b isn't 0. In this case, we log that we're using mod to figure out the remainder of dividing a by b. Then, the
second line of the do expression just recursively calls gcd'. Remember, gcd' now ultimately returns a Writer value, so it's perfectly valid that gcd' b (a `mod` b) is a line in a do expression.
While it may be kind of useful to trace the execution of this new gcd' by hand to see how the logs get appended, I think it's more insightful to just look at the big picture and view these as values
with a context and from that gain insight as to what the final result will be.
Let's try our new gcd' out. Its result is a Writer [String] Int value and if we unwrap that from its newtype, we get a tuple. The first part of the tuple is the result. Let's see if it's okay:
ghci> fst $ runWriter (gcd' 8 3)
1
Good! Now what about the log? Because the log is a list of strings, let's use mapM_ putStrLn to print those strings to the screen:
ghci> mapM_ putStrLn $ snd $ runWriter (gcd' 8 3)
8 mod 3 = 2
3 mod 2 = 1
2 mod 1 = 0
Finished with 1
I think it's awesome how we were able to change our ordinary algorithm to one that reports what it does as it goes along just by changing normal values to monadic values and letting the
implementation of >>= for Writer take care of the logs for us. We can add a logging mechanism to pretty much any function. We just replace normal values with Writer values where we want and change
normal function application to >>= (or do expressions if it increases readability).
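To make that "replace normal function application with >>=" idea concrete, here's a sketch of the same logging gcd' with the do notation desugared into explicit >> (gcdExplicit is a name used only for this illustration):

```haskell
import Control.Monad.Writer

-- The logging gcd' from above, with do notation replaced by
-- explicit sequencing, to show what the Writer instance is doing.
gcdExplicit :: Int -> Int -> Writer [String] Int
gcdExplicit a b
    | b == 0    = tell ["Finished with " ++ show a] >> return a
    | otherwise = tell [show a ++ " mod " ++ show b ++ " = " ++ show (a `mod` b)]
                  >> gcdExplicit b (a `mod` b)

main :: IO ()
main = print (runWriter (gcdExplicit 8 3))
-- (1,["8 mod 3 = 2","3 mod 2 = 1","2 mod 1 = 0","Finished with 1"])
```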
Inefficient list construction
When using the Writer monad, you have to be careful which monoid to use, because using lists can sometimes turn out to be very slow. That's because lists use ++ for mappend and using ++ to add
something to the end of a list is slow if that list is really long.
In our gcd' function, the logging is fast because the list appending ends up looking like this:
a ++ (b ++ (c ++ (d ++ (e ++ f))))
Lists are a data structure that's constructed from left to right, and this is efficient because we first fully construct the left part of a list and only then add a longer list on the right. But if
we're not careful, using the Writer monad can produce list appending that looks like this:
((((a ++ b) ++ c) ++ d) ++ e) ++ f
This associates to the left instead of to the right. This is inefficient because every time it wants to add the right part to the left part, it has to construct the left part all the way from the
beginning!
The following function works like gcd', only it logs stuff in reverse. First it produces the log for the rest of the procedure and then adds the current step to the end of the log.
import Control.Monad.Writer
gcdReverse :: Int -> Int -> Writer [String] Int
gcdReverse a b
    | b == 0 = do
        tell ["Finished with " ++ show a]
        return a
    | otherwise = do
        result <- gcdReverse b (a `mod` b)
        tell [show a ++ " mod " ++ show b ++ " = " ++ show (a `mod` b)]
        return result
It does the recursion first, and binds its result value to result. Then it adds the current step to the log, but the current step goes at the end of the log that was produced by the recursion.
Finally, it presents the result of the recursion as the final result. Here it is in action:
ghci> mapM_ putStrLn $ snd $ runWriter (gcdReverse 8 3)
Finished with 1
2 mod 1 = 0
3 mod 2 = 1
8 mod 3 = 2
It's inefficient because it ends up associating the use of ++ to the left instead of to the right.
Difference lists
Because lists can sometimes be inefficient when repeatedly appended in this manner, it's best to use a data structure that always supports efficient appending. One such data structure is the
difference list. A difference list is similar to a list, only instead of being a normal list, it's a function that takes a list and prepends another list to it. The difference list equivalent of a
list like [1,2,3] would be the function \xs -> [1,2,3] ++ xs. A normal empty list is [], whereas an empty difference list is the function \xs -> [] ++ xs.
The cool thing about difference lists is that they support efficient appending. When we append two normal lists with ++, it has to walk all the way to the end of the list on the left of ++ and then
stick the other one there. But what if we take the difference list approach and represent our lists as functions? Well then, appending two difference lists can be done like so:
f `append` g = \xs -> f (g xs)
Remember, f and g are functions that take lists and prepend something to them. So, for instance, if f is the function ("dog"++) (just another way of writing \xs -> "dog" ++ xs) and g the function
("meat"++), then f `append` g makes a new function that's equivalent to the following:
\xs -> "dog" ++ ("meat" ++ xs)
We've appended two difference lists just by making a new function that first applies one difference list to some list and then the other.
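As a quick check of that equivalence, here's the append sketch as a standalone definition (append is a name used only for this illustration):

```haskell
-- Appending difference lists is just function composition: the
-- combined prepender applies g first, then f, to the final tail.
append :: ([a] -> [a]) -> ([a] -> [a]) -> ([a] -> [a])
append f g = \xs -> f (g xs)

main :: IO ()
main = putStrLn ((("dog" ++) `append` ("meat" ++)) "!")
-- prints "dogmeat!"
```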
Let's make a newtype wrapper for difference lists so that we can easily give them monoid instances:
newtype DiffList a = DiffList { getDiffList :: [a] -> [a] }
The type that we wrap is [a] -> [a] because a difference list is just a function that takes a list and returns another. Converting normal lists to difference lists and vice versa is easy:
toDiffList :: [a] -> DiffList a
toDiffList xs = DiffList (xs++)
fromDiffList :: DiffList a -> [a]
fromDiffList (DiffList f) = f []
To make a normal list into a difference list we just do what we did before and make it a function that prepends it to another list. Because a difference list is a function that prepends something to
another list, if we just want that something, we apply the function to an empty list!
Here's the Monoid instance:
instance Monoid (DiffList a) where
    mempty = DiffList (\xs -> [] ++ xs)
    (DiffList f) `mappend` (DiffList g) = DiffList (\xs -> f (g xs))
Notice how for difference lists, mempty is just the id function in disguise and mappend is actually just function composition. Let's see if this works:
ghci> fromDiffList (toDiffList [1,2,3,4] `mappend` toDiffList [1,2,3])
[1,2,3,4,1,2,3]
Tip top! Now we can increase the efficiency of our gcdReverse function by making it use difference lists instead of normal lists:
import Control.Monad.Writer
gcdReverse :: Int -> Int -> Writer (DiffList String) Int
gcdReverse a b
    | b == 0 = do
        tell (toDiffList ["Finished with " ++ show a])
        return a
    | otherwise = do
        result <- gcdReverse b (a `mod` b)
        tell (toDiffList [show a ++ " mod " ++ show b ++ " = " ++ show (a `mod` b)])
        return result
We only had to change the type of the monoid from [String] to DiffList String and then when using tell, convert our normal lists into difference lists with toDiffList. Let's see if the log gets
assembled properly:
ghci> mapM_ putStrLn . fromDiffList . snd . runWriter $ gcdReverse 110 34
Finished with 2
8 mod 2 = 0
34 mod 8 = 2
110 mod 34 = 8
We do gcdReverse 110 34, then use runWriter to unwrap it from the newtype, then apply snd to that to just get the log, then apply fromDiffList to convert it to a normal list and then finally print
its entries to the screen.
Comparing Performance
To get a feel for just how much difference lists may improve your performance, consider this function that just counts down from some number to zero, but produces its log in reverse, like gcdReverse,
so that the numbers in the log will actually be counted up:
finalCountDown :: Int -> Writer (DiffList String) ()
finalCountDown 0 = do
    tell (toDiffList ["0"])
finalCountDown x = do
    finalCountDown (x-1)
    tell (toDiffList [show x])
If we give it 0, it just logs it. For any other number, it first counts down its predecessor to 0 and then appends that number to the log. So if we apply finalCountDown to 100, the string "100" will
come last in the log.
Anyway, if you load this function in GHCi and apply it to a big number, like 500000, you'll see that it quickly starts counting from 0 onwards:
ghci> mapM_ putStrLn . fromDiffList . snd . runWriter $ finalCountDown 500000
However, if we change it to use normal lists instead of difference lists, like so:
finalCountDown :: Int -> Writer [String] ()
finalCountDown 0 = do
    tell ["0"]
finalCountDown x = do
    finalCountDown (x-1)
    tell [show x]
And then tell GHCi to start counting:
ghci> mapM_ putStrLn . snd . runWriter $ finalCountDown 500000
We'll see that the counting is really slow.
Of course, this is not the proper and scientific way to test how fast our programs are, but we were able to see that in this case, using difference lists starts producing results right away whereas
normal lists take forever.
Oh, by the way, the song Final Countdown by Europe is now stuck in your head. Enjoy!
Reader? Ugh, not this joke again.
In the chapter about applicatives, we saw that the function type, (->) r is an instance of Functor. Mapping a function f over a function g will make a function that takes the same thing as g, applies
g to it and then applies f to that result. So basically, we're making a new function that's like g, only before returning its result, f gets applied to that result as well. For instance:
ghci> let f = (*5)
ghci> let g = (+3)
ghci> (fmap f g) 8
55
We've also seen that functions are applicative functors. They allow us to operate on the eventual results of functions as if we already had their results. Here's an example:
ghci> let f = (+) <$> (*2) <*> (+10)
ghci> f 3
19
The expression (+) <$> (*2) <*> (+10) makes a function that takes a number, gives that number to (*2) and (+10) and then adds together the results. For instance, if we apply this function to 3, it
applies both (*2) and (+10) to 3, giving 6 and 13. Then, it calls (+) with 6 and 13 and the result is 19.
Not only is the function type (->) r a functor and an applicative functor, but it's also a monad. Just like other monadic values that we've met so far, a function can also be considered a value with
a context. The context for functions is that the value is not present yet and that we have to apply the function to something in order to get its result value.
Because we're already acquainted with how functions work as functors and applicative functors, let's dive right in and see what their Monad instance looks like. It's located in
Control.Monad.Instances and it goes a little something like this:
instance Monad ((->) r) where
    return x = \_ -> x
    h >>= f = \w -> f (h w) w
We've already seen how pure is implemented for functions, and return is pretty much the same thing as pure. It takes a value and puts it in a minimal context that always has that value as its result.
And the only way to make a function that always has a certain value as its result is to make it completely ignore its parameter.
The implementation for >>= seems a bit cryptic, but it's really not all that complicated. When we use >>= to feed a monadic value to a function, the result is always a monadic value. So in this case, when we
feed a function to another function, the result is a function as well. That's why the result starts off as a lambda. All of the implementations of >>= so far always somehow isolated the result from
the monadic value and then applied the function f to that result. The same thing happens here. To get the result from a function, we have to apply it to something, which is why we do (h w) here to
get the result from the function and then we apply f to that. f returns a monadic value, which is a function in our case, so we apply it to w as well.
If you don't get how >>= works at this point, don't worry, because with examples we'll see how this is a really simple monad. Here's a do expression that utilizes this monad:
import Control.Monad.Instances
addStuff :: Int -> Int
addStuff = do
    a <- (*2)
    b <- (+10)
    return (a+b)
This is the same thing as the applicative expression that we wrote earlier, only now it relies on functions being monads. A do expression always results in a monadic value and this one is no
different. The result of this monadic value is a function. What happens here is that it takes a number and then (*2) gets applied to that number and the result becomes a. (+10) is applied to the same
number that (*2) got applied to and the result becomes b. return, like in other monads, doesn't have any other effect but to make a monadic value that presents some result. This presents a+b as the
result of this function. If we test it out, we get the same result as before:
ghci> addStuff 3
19
Both (*2) and (+10) get applied to the number 3 in this case. return (a+b) gets applied to it as well, but it ignores its argument and always presents a+b as the result. For this reason, the function monad is also called
the reader monad. All the functions read from a common source. To illustrate this even better, we can rewrite addStuff like so:
addStuff :: Int -> Int
addStuff x = let
    a = (*2) x
    b = (+10) x
    in a+b
We see that the reader monad allows us to treat functions as values with a context. We can act as if we already know what the functions will return. It does this by gluing functions together into one
function and then giving that function's parameter to all of the functions that it was glued from. So if we have a lot of functions that are all just missing one parameter and they'd eventually be
applied to the same thing, we can use the reader monad to sort of extract their future results and the >>= implementation will make sure that it all works out.
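As a sketch of that idea, here's a hypothetical report function in which several computations all read the same argument, which is supplied only once, at the very end (on modern GHC the (->) r instance is in the Prelude; older compilers need Control.Monad.Instances):

```haskell
-- Several computations that all read from the same "future" argument,
-- glued together with the function monad.
report :: Int -> String
report = do
    doubled <- (*2)   -- will read the eventual argument
    squared <- (^2)   -- reads the same argument
    return ("double: " ++ show doubled ++ ", square: " ++ show squared)

main :: IO ()
main = putStrLn (report 4)
-- prints "double: 8, square: 16"
```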
Tasteful stateful computations
Haskell is a pure language and because of that, our programs are made of functions that can't change any global state or variables; they can only do some computations and return the results. This
restriction actually makes it easier to think about our programs, as it frees us from worrying what every variable's value is at some point in time. However, some problems are inherently stateful in
that they rely on some state that changes over time. While such problems aren't a problem for Haskell, they can be a bit tedious to model sometimes. That's why Haskell features a thing called the
state monad, which makes dealing with stateful problems a breeze while still keeping everything nice and pure.
When we were dealing with random numbers, we dealt with functions that took a random generator as a parameter and returned a random number and a new random generator. If we wanted to generate several
random numbers, we always had to use the random generator that a previous function returned along with its result. When making a function that takes a StdGen and tosses a coin three times based on
that generator, we had to do this:
threeCoins :: StdGen -> (Bool, Bool, Bool)
threeCoins gen =
    let (firstCoin, newGen) = random gen
        (secondCoin, newGen') = random newGen
        (thirdCoin, newGen'') = random newGen'
    in (firstCoin, secondCoin, thirdCoin)
It took a generator gen and then random gen returned a Bool value along with a new generator. To throw the second coin, we used the new generator, and so on. In most other languages, we wouldn't have
to return a new generator along with a random number. We could just modify the existing one! But since Haskell is pure, we can't do that, so we had to take some state, make a result from it and a new
state and then use that new state to generate new results.
You'd think that to avoid manually dealing with stateful computations in this way, we'd have to give up the purity of Haskell. Well, we don't have to, since there exists a special little monad called
the state monad which handles all this state business for us and without giving up any of the purity that makes Haskell programming so cool.
So, to help us understand this concept of stateful computations better, let's go ahead and give them a type. We'll say that a stateful computation is a function that takes some state and returns a
value along with some new state. That function would have the following type:
s -> (a,s)
s is the type of the state and a the result of the stateful computation.
Assignment in most other languages could be thought of as a stateful computation. For instance, when we do x = 5 in an imperative language, it will usually assign the value 5 to the variable x and it
will also have the value 5 as an expression. If you look at that functionally, you could look at it as a function that takes a state (that is, all the variables that have been assigned previously)
and returns a result (in this case 5) and a new state, which would be all the previous variable mappings plus the newly assigned variable.
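That reading of assignment can be sketched directly (the assign helper and the Data.Map environment here are illustrative, not from the text):

```haskell
import qualified Data.Map as Map

-- `x = 5` modeled as a stateful computation: a function from the old
-- variable environment to (result, new environment).
assign :: String -> Int -> Map.Map String Int -> (Int, Map.Map String Int)
assign name val env = (val, Map.insert name val env)

main :: IO ()
main = print (assign "x" 5 Map.empty)
-- prints (5,fromList [("x",5)])
```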
This stateful computation, a function that takes a state and returns a result and a new state, can be thought of as a value with a context as well. The actual value is the result, whereas the context
is that we have to provide some initial state to actually get that result and that apart from getting a result we also get a new state.
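A minimal sketch of that shape, with no monad machinery yet (tick is a made-up name for this illustration):

```haskell
-- The smallest possible stateful computation of type s -> (a, s):
-- a counter that reports the current count as its result and
-- increments the state.
tick :: Int -> (Int, Int)
tick n = (n, n + 1)

main :: IO ()
main = print (let (a, s1) = tick 0    -- thread the state by hand
                  (b, s2) = tick s1
              in ((a, b), s2))
-- prints ((0,1),2)
```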
Stacks and stones
Say we want to model operating a stack. You have a stack of things one on top of another and you can either push stuff on top of that stack or you can take stuff off the top of the stack. When you're
putting an item on top of the stack we say that you're pushing it to the stack and when you're taking stuff off the top we say that you're popping it. If you want to get to something that's at the bottom of
the stack, you have to pop everything that's above it.
We'll use a list to represent our stack and the head of the list will be the top of the stack. To help us with our task, we'll make two functions: pop and push. pop will take a stack, pop one item
and return that item as the result and also return a new stack, without that item. push will take an item and a stack and then push that item onto the stack. It will return () as its result, along
with a new stack. Here goes:
type Stack = [Int]
pop :: Stack -> (Int,Stack)
pop (x:xs) = (x,xs)
push :: Int -> Stack -> ((),Stack)
push a xs = ((),a:xs)
We used () as the result when pushing to the stack because pushing an item onto the stack doesn't have any important result value; its main job is to change the stack. Notice how if we apply just the
first parameter of push, we get a stateful computation. pop is already a stateful computation because of its type.
Let's write a small piece of code to simulate a stack using these functions. We'll take a stack, push 3 to it and then pop two items, just for kicks. Here it is:
stackManip :: Stack -> (Int, Stack)
stackManip stack = let
    ((),newStack1) = push 3 stack
    (a ,newStack2) = pop newStack1
    in pop newStack2
We take a stack and then we do push 3 stack, which results in a tuple. The first part of the tuple is a () and the second is a new stack and we call it newStack1. Then, we pop a number from newStack1,
which results in a number a (which is the 3 that we pushed) and a new stack which we call newStack2. Then, we pop a number off newStack2, which gives us our final number and final stack, and we return
that tuple. Let's try it out:
ghci> stackManip [5,8,2,1]
(5,[8,2,1])
Cool, the result is 5 and the new stack is [8,2,1]. Notice how stackManip is itself a stateful computation. We've taken a bunch of stateful computations and we've sort of glued them together. Hmm,
sounds familiar.
The above code for stackManip is kind of tedious since we're manually giving the state to every stateful computation and storing it and then giving it to the next one. Wouldn't it be cooler if,
instead of giving the stack manually to each function, we could write something like this:
stackManip = do
    push 3
    a <- pop
    pop
Well, using the state monad will allow us to do exactly this. With it, we will be able to take stateful computations like these and use them without having to manage the state manually.
The State monad
The Control.Monad.State module provides a newtype that wraps stateful computations. Here's its definition:
newtype State s a = State { runState :: s -> (a,s) }
A State s a is a stateful computation that manipulates a state of type s and has a result of type a.
Now that we've seen what stateful computations are about and how they can even be thought of as values with contexts, let's check out their Monad instance:
instance Monad (State s) where
    return x = State $ \s -> (x,s)
    (State h) >>= f = State $ \s -> let (a, newState) = h s
                                        (State g) = f a
                                    in  g newState
Let's take a gander at return first. Our aim with return is to take a value and make a stateful computation that always has that value as its result. That's why we just make a lambda \s -> (x,s). We
always present x as the result of the stateful computation and the state is kept unchanged, because return has to put a value in a minimal context. So return will make a stateful computation that
presents a certain value as the result and keeps the state unchanged.
What about >>=? Well, the result of feeding a stateful computation to a function with >>= has to be a stateful computation, right? So we start off with the State newtype wrapper and then we type out
a lambda. This lambda will be our new stateful computation. But what goes on in it? Well, we somehow have to extract the result value from the first stateful computation. Because we're in a stateful
computation right now, we can give the stateful computation h our current state s, which results in a pair of result and a new state: (a, newState). Every time so far when we were implementing >>=,
once we had extracted the result from the monadic value, we applied the function f to it to get the new monadic value. In Writer, after doing that and getting the new monadic value, we still had
to make sure that the context was taken care of by mappending the old monoid value with the new one. Here, we do f a and we get a new stateful computation g. Now that we have a new stateful
computation and a new state (goes by the name of newState) we just apply that stateful computation g to the newState. The result is a tuple of final result and final state!
So with >>=, we kind of glue two stateful computations together, only the second one is hidden inside a function that takes the previous one's result. Because pop and push are already stateful
computations, it's easy to wrap them into a State wrapper. Watch:
import Control.Monad.State
pop :: State Stack Int
pop = State $ \(x:xs) -> (x,xs)
push :: Int -> State Stack ()
push a = State $ \xs -> ((),a:xs)
pop is already a stateful computation and push takes an Int and returns a stateful computation. Now we can rewrite our previous example of pushing 3 onto the stack and then popping two numbers off like this:
import Control.Monad.State
stackManip :: State Stack Int
stackManip = do
    push 3
    a <- pop
    pop
See how we've glued a push and two pops into one stateful computation? When we unwrap it from its newtype wrapper we get a function to which we can provide some initial state:
ghci> runState stackManip [5,8,2,1]
(5,[8,2,1])
We didn't have to bind the second pop to a because we didn't use that a at all. So we could have written it like this:
stackManip :: State Stack Int
stackManip = do
    push 3
    pop
    pop
Pretty cool. But what if we want to do this: pop one number off the stack and then if that number is 5 we just put it back onto the stack and stop but if it isn't 5, we push 3 and 8 back on? Well,
here's the code:
stackStuff :: State Stack ()
stackStuff = do
    a <- pop
    if a == 5
        then push 5
        else do
            push 3
            push 8
This is quite straightforward. Let's run it with an initial stack.
ghci> runState stackStuff [9,0,2,1,0]
((),[8,3,0,2,1,0])
Remember, do expressions result in monadic values and with the State monad, a single do expression is also one glued-together stateful computation. Because stackManip and stackStuff are ordinary stateful computations,
we can glue them together to produce further stateful computations.
moreStack :: State Stack ()
moreStack = do
    a <- stackManip
    if a == 100
        then stackStuff
        else return ()
If the result of stackManip on the current stack is 100, we run stackStuff, otherwise we do nothing. return () just keeps the state as it is and does nothing.
The Control.Monad.State module provides a type class that's called MonadState and it features two pretty useful functions, namely get and put. For State, the get function is implemented like this:
get = State $ \s -> (s,s)
So it just takes the current state and presents it as the result. The put function takes some state and makes a stateful function that replaces the current state with it:
put newState = State $ \s -> ((),newState)
So with these, we can see what the current stack is or we can replace it with a whole other stack. Like so:
stackyStack :: State Stack ()
stackyStack = do
    stackNow <- get
    if stackNow == [1,2,3]
        then put [8,3,1]
        else put [9,2,1]
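As a small sketch of how get and put compose: Control.Monad.State already exports a modify function, but one can be defined from the two alone (modifyState is a name used only here):

```haskell
import Control.Monad.State

-- Apply a function to the current state: read it with get,
-- then write the transformed state back with put.
modifyState :: (s -> s) -> State s ()
modifyState f = do
    s <- get
    put (f s)

main :: IO ()
main = print (runState (modifyState (+1)) 41)
-- prints ((),42)
```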
It's worth examining what the type of >>= would be if it only worked for State values:
(>>=) :: State s a -> (a -> State s b) -> State s b
See how the type of the state s stays the same but the type of the result can change from a to b? This means that we can glue together several stateful computations whose results are of different
types but the type of the state has to stay the same. Now why is that? Well, for instance, for Maybe, >>= has this type:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
It makes sense that the monad itself, Maybe, doesn't change. It wouldn't make sense to use >>= between two different monads. Well, for the state monad, the monad is actually State s, so if that s was
different, we'd be using >>= between two different monads.
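Here's a sketch of that point: each step can have a different result type while the state type stays fixed (this reuses the Stack functions from above; note that newer mtl versions export a lowercase state function instead of the State constructor):

```haskell
import Control.Monad.State

type Stack = [Int]

pop :: State Stack Int
pop = state $ \(x:xs) -> (x, xs)

push :: Int -> State Stack ()
push a = state $ \xs -> ((), a:xs)

-- The result type changes step to step (Int, then (), then String),
-- but the state type Stack stays the same throughout.
mixed :: State Stack String
mixed = do
    a <- pop            -- result type Int
    push (a * 2)        -- result type ()
    return (show a)     -- result type String

main :: IO ()
main = print (runState mixed [5,8,2,1])
-- prints ("5",[10,8,2,1])
```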
Randomness and the state monad
At the beginning of this section, we saw how generating numbers can sometimes be awkward because every random function takes a generator and returns a random number along with a new generator, which
must then be used instead of the old one if we want to generate another random number. The state monad makes dealing with this a lot easier.
The random function from System.Random has the following type:
random :: (RandomGen g, Random a) => g -> (a, g)
Meaning it takes a random generator and produces a random number along with a new generator. We can see that it's a stateful computation, so we can wrap it in the State newtype constructor and then
use it as a monadic value so that passing of the state gets handled for us:
import System.Random
import Control.Monad.State
randomSt :: (RandomGen g, Random a) => State g a
randomSt = State random
So now if we want to throw three coins (True is tails, False is heads) we just do the following:
import System.Random
import Control.Monad.State
threeCoins :: State StdGen (Bool,Bool,Bool)
threeCoins = do
    a <- randomSt
    b <- randomSt
    c <- randomSt
    return (a,b,c)
threeCoins is now a stateful computation and after taking an initial random generator, it passes it to the first randomSt, which produces a number and a new generator, which gets passed to the next
one and so on. We use return (a,b,c) to present (a,b,c) as the result without changing the most recent generator. Let's give this a go:
ghci> runState threeCoins (mkStdGen 33)
((True,False,True),680029187 2103410263)
Nice. Doing these sort of things that require some state to be kept in between steps just became much less of a hassle!
Error error on the wall
We know by now that Maybe is used to add a context of possible failure to values. A value can be a Just something or a Nothing. However useful it may be, when we have a Nothing, all we know is that
there was some sort of failure, but there's no way to cram some more info in there telling us what kind of failure it was or why it failed.
The Either e a type on the other hand, allows us to incorporate a context of possible failure to our values while also being able to attach values to the failure, so that they can describe what went
wrong or provide some other useful info regarding the failure. An Either e a value can either be a Right value, signifying the right answer and a success, or it can be a Left value, signifying
failure. For instance:
ghci> :t Right 4
Right 4 :: (Num t) => Either a t
ghci> :t Left "out of cheese error"
Left "out of cheese error" :: Either [Char] b
This is pretty much just an enhanced Maybe, so it makes sense for it to be a monad, because it can also be viewed as a value with an added context of possible failure, only now there's a value
attached when there's an error as well.
Its Monad instance is similar to that of Maybe and it can be found in Control.Monad.Error:
instance (Error e) => Monad (Either e) where
    return x = Right x
    Right x >>= f = f x
    Left err >>= f = Left err
    fail msg = Left (strMsg msg)
return, as always, takes a value and puts it in a default minimal context. It wraps our value in the Right constructor because we're using Right to represent a successful computation where a result
is present. This is a lot like return for Maybe.
The >>= examines two possible cases: a Left and a Right. In the case of a Right, the function f is applied to the value inside it, similar to how in the case of a Just, the function is just applied
to its contents. In the case of an error, the Left value is kept, along with its contents, which describe the failure.
The Monad instance for Either e makes an additional requirement, and that is that the type of the value contained in a Left, the one that's indexed by the e type parameter, has to be an instance of
the Error type class. The Error type class is for types whose values can act like error messages. It defines the strMsg function, which takes an error in the form of a string and returns such a
value. A good example of an Error instance is, well, the String type! In the case of String, the strMsg function just returns the string that it got:
ghci> :t strMsg
strMsg :: (Error a) => String -> a
ghci> strMsg "boom!" :: String
"boom!"
But since we usually use String to describe the error when using Either, we don't have to worry about this too much. When a pattern match fails in do notation, a Left value is used to signify this failure.
Anyway, here are a few examples of usage:
ghci> Left "boom" >>= \x -> return (x+1)
Left "boom"
ghci> Right 100 >>= \x -> Left "no way!"
Left "no way!"
When we use >>= to feed a Left value to a function, the function is ignored and an identical Left value is returned. When we feed a Right value to a function, the function gets applied to what's on
the inside, but in this case that function produced a Left value anyway!
When we try to feed a Right value to a function that also succeeds, we're tripped up by a peculiar type error! Hmmm.
ghci> Right 3 >>= \x -> return (x + 100)
Ambiguous type variable `a' in the constraints:
`Error a' arising from a use of `it' at <interactive>:1:0-33
`Show a' arising from a use of `print' at <interactive>:1:0-33
Probable fix: add a type signature that fixes these type variable(s)
Haskell says that it doesn't know which type to choose for the e part of our Either e a typed value, even though we're just printing the Right part. This is due to the Error e constraint on the Monad
instance. So if you get type errors like this one when using Either as a monad, just add an explicit type signature:
ghci> Right 3 >>= \x -> return (x + 100) :: Either String Int
Right 103
Alright, now it works!
Other than this little hangup, using this monad is very similar to using Maybe as a monad. In the previous chapter, we used the monadic aspects of Maybe to simulate birds landing on the balancing
pole of a tightrope walker. As an exercise, you can rewrite that with the error monad so that when the tightrope walker slips and falls, we remember how many birds were on each side of the pole when
he fell.
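Here's a sketch of how that exercise might start, assuming the Birds and Pole types from the tightrope example (on the GHC of this book's era you'd also import Control.Monad.Error to get the Either monad instance):

```haskell
type Birds = Int
type Pole  = (Birds, Birds)

-- Like the Maybe version, but when the walker slips we remember the
-- state of the pole at the moment of the fall inside the Left value.
landLeft :: Birds -> Pole -> Either String Pole
landLeft n (left, right)
    | abs ((left + n) - right) < 4 = Right (left + n, right)
    | otherwise = Left ("Fell with " ++ show (left + n) ++ " birds on the left and "
                        ++ show right ++ " on the right")

landRight :: Birds -> Pole -> Either String Pole
landRight n (left, right)
    | abs (left - (right + n)) < 4 = Right (left, right + n)
    | otherwise = Left ("Fell with " ++ show left ++ " birds on the left and "
                        ++ show (right + n) ++ " on the right")
```

Chaining these with >>= works just like before, except a fall now carries a report: Right (0,0) >>= landLeft 1 >>= landRight 4 >>= landLeft (-1) results in Left "Fell with 0 birds on the left and 4 on the right".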
Some useful monadic functions
In this section, we're going to explore a few functions that either operate on monadic values or return monadic values as their results (or both!). Such functions are usually referred to as monadic
functions. While some of them will be brand new, others will be monadic counterparts of functions that we already know, like filter and foldl. Let's see what they are then!
liftM and friends
When we started our journey to the top of Monad Mountain, we first looked at functors, which are for things that can be mapped over. Then, we learned about improved functors called applicative
functors, which allowed us to apply normal functions between several applicative values as well as to take a normal value and put it in some default context. Finally, we introduced monads as improved
applicative functors, which added the ability for these values with context to somehow be fed into normal functions.
So every monad is an applicative functor and every applicative functor is a functor. The Applicative type class has a class constraint such that our type has to be an instance of Functor before we
can make it an instance of Applicative. But even though Monad should have the same constraint for Applicative, as every monad is an applicative functor, it doesn't, because the Monad type class was
introduced to Haskell way before Applicative.
But even though every monad is a functor, we don't have to rely on it having a Functor instance because of the liftM function. liftM takes a function and a monadic value and maps it over the monadic
value. So it's pretty much the same thing as fmap! This is liftM's type:
liftM :: (Monad m) => (a -> b) -> m a -> m b
And this is the type of fmap:
fmap :: (Functor f) => (a -> b) -> f a -> f b
If the Functor and Monad instances for a type obey the functor and monad laws, these two amount to the same thing (and all the monads that we've met so far obey both). This is kind of like how pure and
return do the same thing, only one has an Applicative class constraint whereas the other has a Monad one. Let's try liftM out:
ghci> liftM (*3) (Just 8)
Just 24
ghci> fmap (*3) (Just 8)
Just 24
ghci> runWriter $ liftM not $ Writer (True, "chickpeas")
(False,"chickpeas")
ghci> runWriter $ fmap not $ Writer (True, "chickpeas")
(False,"chickpeas")
ghci> runState (liftM (+100) pop) [1,2,3,4]
(101,[2,3,4])
ghci> runState (fmap (+100) pop) [1,2,3,4]
(101,[2,3,4])
We already know quite well how fmap works with Maybe values. And liftM does the same thing. For Writer values, the function is mapped over the first component of the tuple, which is the result. Doing
fmap or liftM over a stateful computation results in another stateful computation, only its eventual result is modified by the supplied function. Had we not mapped (+100) over pop in this case before
running it, it would have returned (1,[2,3,4]).
This is how liftM is implemented:
liftM :: (Monad m) => (a -> b) -> m a -> m b
liftM f m = m >>= (\x -> return (f x))
Or with do notation:
liftM :: (Monad m) => (a -> b) -> m a -> m b
liftM f m = do
    x <- m
    return (f x)
We feed the monadic value m into the function and then we apply the function f to its result before putting it back into a default context. Because of the monad laws, this is guaranteed not to change
the context, only the result that the monadic value presents. We see that liftM is implemented without referencing the Functor type class at all. This means that we can implement fmap (or liftM,
whatever you want to call it) just by using the goodies that monads offer us. Because of this, we can conclude that monads are stronger than just regular old functors.
The Applicative type class allows us to apply functions between values with contexts as if they were normal values. Like this:
ghci> (+) <$> Just 3 <*> Just 5
Just 8
ghci> (+) <$> Just 3 <*> Nothing
Nothing
Using this applicative style makes things pretty easy. <$> is just fmap and <*> is a function from the Applicative type class that has the following type:
(<*>) :: (Applicative f) => f (a -> b) -> f a -> f b
So it's kind of like fmap, only the function itself is in a context. We have to somehow extract it from the context and map it over the f a value and then assemble the context back together. Because
all functions are curried in Haskell by default, we can use the combination of <$> and <*> to apply functions that take several parameters between applicative values.
Anyway, it turns out that just like fmap, <*> can also be implemented by using only what the Monad type class gives us. The ap function is basically <*>, only it has a Monad constraint instead of an
Applicative one. Here's its definition:
ap :: (Monad m) => m (a -> b) -> m a -> m b
ap mf m = do
    f <- mf
    x <- m
    return (f x)
mf is a monadic value whose result is a function. Because the function is in a context as well as the value, we get the function from the context and call it f, then get the value and call that x and
then finally apply the function to the value and present that as a result. Here's a quick demonstration:
ghci> Just (+3) <*> Just 4
Just 7
ghci> Just (+3) `ap` Just 4
Just 7
ghci> [(+1),(+2),(+3)] <*> [10,11]
[11,12,12,13,13,14]
ghci> [(+1),(+2),(+3)] `ap` [10,11]
[11,12,12,13,13,14]
Now we see that monads are stronger than applicatives as well, because we can use the functions from Monad to implement the ones for Applicative. In fact, many times when a type is found to be a
monad, people first write up a Monad instance and then make an Applicative instance by just saying that pure is return and <*> is ap. Similarly, if you already have a Monad instance for something,
you can give it a Functor instance just saying that fmap is liftM.
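As a sketch of that recipe, here's a hypothetical little wrapper type (not from this chapter) whose Functor and Applicative instances come for free from its Monad instance. On newer GHCs Applicative is a superclass of Monad, so these instances have to be written anyway, but the pattern stays exactly this short:

```haskell
import Control.Monad (ap, liftM)

newtype Wrapper a = Wrapper { unwrap :: a }

instance Functor Wrapper where
    fmap = liftM            -- fmap for free, via >>= and return

instance Applicative Wrapper where
    pure  = Wrapper
    (<*>) = ap              -- <*> for free, via >>= and return

instance Monad Wrapper where
    return = pure
    Wrapper x >>= f = f x   -- the only interesting line we had to write
```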
The liftA2 function is a convenience function for applying a function between two applicative values. It's defined simply like so:
liftA2 :: (Applicative f) => (a -> b -> c) -> f a -> f b -> f c
liftA2 f x y = f <$> x <*> y
The liftM2 function does the same thing, only it has a Monad constraint. There also exist liftM3 and liftM4 and liftM5.
We saw how monads are stronger than applicatives and functors and how even though all monads are functors and applicative functors, they don't necessarily have Functor and Applicative instances, so
we examined the monadic equivalents of the functions that functors and applicative functors use.
The join function
Here's some food for thought: if the result of one monadic value is another monadic value i.e. if one monadic value is nested inside the other, can you flatten them to just a single normal monadic
value? Like, if we have Just (Just 9), can we make that into Just 9? It turns out that any nested monadic value can be flattened and that this is actually a property unique to monads. For this, the
join function exists. Its type is this:
join :: (Monad m) => m (m a) -> m a
So it takes a monadic value within a monadic value and gives us just a monadic value, so it sort of flattens it. Here it is with some Maybe values:
ghci> join (Just (Just 9))
Just 9
ghci> join (Just Nothing)
Nothing
ghci> join Nothing
Nothing
The first line has a successful computation as a result of a successful computation, so they're both just joined into one big successful computation. The second line features a Nothing as a result of
a Just value. Whenever we were dealing with Maybe values before and we wanted to combine several of them into one, be it with <*> or >>=, they all had to be Just values for the result to be a Just
value. If there was any failure along the way, the result was a failure and the same thing happens here. In the third line, we try to flatten what is from the onset a failure, so the result is a
failure as well.
Flattening lists is pretty intuitive:
ghci> join [[1,2,3],[4,5,6]]
[1,2,3,4,5,6]
As you can see, for lists, join is just concat. To flatten a Writer value whose result is a Writer value itself, we have to mappend the monoid value.
ghci> runWriter $ join (Writer (Writer (1,"aaa"),"bbb"))
(1,"bbbaaa")
The outer monoid value "bbb" comes first and then to it "aaa" is appended. Intuitively speaking, when you want to examine what the result of a Writer value is, you have to write its monoid value to
the log first and only then can you examine what it has inside.
Flattening Either values is very similar to flattening Maybe values:
ghci> join (Right (Right 9)) :: Either String Int
Right 9
ghci> join (Right (Left "error")) :: Either String Int
Left "error"
ghci> join (Left "error") :: Either String Int
Left "error"
If we apply join to a stateful computation whose result is a stateful computation, the result is a stateful computation that first runs the outer stateful computation and then the resulting one.
ghci> runState (join (State $ \s -> (push 10,1:2:s))) [0,0,0]
((),[10,1,2,0,0,0])
The lambda here takes a state and puts 2 and 1 onto the stack and presents push 10 as its result. So when this whole thing is flattened with join and then run, it first puts 2 and 1 onto the stack
and then push 10 gets carried out, pushing a 10 on to the top.
The implementation for join is as follows:
join :: (Monad m) => m (m a) -> m a
join mm = do
    m <- mm
    m
Because the result of mm is a monadic value, we get that result and then just put it on a line of its own because it's a monadic value. The trick here is that when we do m <- mm, the context of the
monad in which we are in gets taken care of. That's why, for instance, Maybe values result in Just values only if the outer and inner values are both Just values. Here's what this would look like if
the mm value was set in advance to Just (Just 8):
joinedMaybes :: Maybe Int
joinedMaybes = do
    m <- Just (Just 8)
    m
Perhaps the most interesting thing about join is that for every monad, feeding a monadic value to a function with >>= is the same thing as just mapping that function over the value and then using
join to flatten the resulting nested monadic value! In other words, m >>= f is always the same thing as join (fmap f m)! It makes sense when you think about it. With >>=, we're always thinking about
how to feed a monadic value to a function that takes a normal value but returns a monadic value. If we just map that function over the monadic value, we have a monadic value inside a monadic value.
For instance, say we have Just 9 and the function \x -> Just (x+1). If we map this function over Just 9, we're left with Just (Just 10).
The fact that m >>= f always equals join (fmap f m) is very useful if we're making our own Monad instance for some type because it's often easier to figure out how we would flatten a nested monadic
value than figuring out how to implement >>=.
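We can spell out both sides of that equation for a couple of the monads we know, just to see it hold; a quick sketch:

```haskell
import Control.Monad (join)

-- m >>= f versus join (fmap f m), for Maybe...
bindSide, joinSide :: Maybe Int
bindSide = Just 9 >>= \x -> Just (x + 1)
joinSide = join (fmap (\x -> Just (x + 1)) (Just 9))

-- ...and for lists.
bindSide', joinSide' :: [Int]
bindSide' = [1,2] >>= \x -> [x, -x]
joinSide' = join (fmap (\x -> [x, -x]) [1,2])
```

Both pairs come out equal: Just 10 on the Maybe side and [1,-1,2,-2] on the list side.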
The filter function is pretty much the bread of Haskell programming (map being the butter). It takes a predicate and a list to filter out and then returns a new list where only the elements that
satisfy the predicate are kept. Its type is this:
filter :: (a -> Bool) -> [a] -> [a]
The predicate takes an element of the list and returns a Bool value. Now, what if the Bool value that it returned was actually a monadic value? Whoa! That is, what if it came with a context? Could
that work? For instance, what if every True or a False value that the predicate produced also had an accompanying monoid value, like ["Accepted the number 5"] or ["3 is too small"]? That sounds like
it could work. If that were the case, we'd expect the resulting list to also come with a log of all the log values that were produced along the way. So if the Bool that the predicate returned came
with a context, we'd expect the final resulting list to have some context attached as well, otherwise the context that each Bool came with would be lost.
The filterM function from Control.Monad does just what we want! Its type is this:
filterM :: (Monad m) => (a -> m Bool) -> [a] -> m [a]
The predicate returns a monadic value whose result is a Bool, but because it's a monadic value, its context can be anything from a possible failure to non-determinism and more! To ensure that the
context is reflected in the final result, the result is also a monadic value.
Let's take a list and only keep those values that are smaller than 4. To start, we'll just use the regular filter function:
ghci> filter (\x -> x < 4) [9,1,5,2,10,3]
[1,2,3]
That's pretty easy. Now, let's make a predicate that, aside from presenting a True or False result, also provides a log of what it did. Of course, we'll be using the Writer monad for this:
keepSmall :: Int -> Writer [String] Bool
keepSmall x
    | x < 4 = do
        tell ["Keeping " ++ show x]
        return True
    | otherwise = do
        tell [show x ++ " is too large, throwing it away"]
        return False
Instead of just returning a Bool, this function returns a Writer [String] Bool. It's a monadic predicate. Sounds fancy, doesn't it? If the number is smaller than 4 we report that we're keeping it
and then return True.
Now, let's give it to filterM along with a list. Because the predicate returns a Writer value, the resulting list will also be a Writer value.
ghci> fst $ runWriter $ filterM keepSmall [9,1,5,2,10,3]
[1,2,3]
Examining the result of the resulting Writer value, we see that everything is in order. Now, let's print the log and see what we got:
ghci> mapM_ putStrLn $ snd $ runWriter $ filterM keepSmall [9,1,5,2,10,3]
9 is too large, throwing it away
Keeping 1
5 is too large, throwing it away
Keeping 2
10 is too large, throwing it away
Keeping 3
Awesome. So just by providing a monadic predicate to filterM, we were able to filter a list while taking advantage of the monadic context that we used.
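The context doesn't have to be a log; it could just as well be possible failure. Here's a hypothetical Maybe predicate (not from the book) that keeps even numbers, but aborts the whole filtering if it ever sees a negative number:

```haskell
import Control.Monad (filterM)

keepEven :: Int -> Maybe Bool
keepEven x
    | x < 0     = Nothing        -- a negative number fails the whole computation
    | otherwise = Just (even x)  -- otherwise, keep the number if it's even
```

filterM keepEven [2,4,5] gives Just [2,4], whereas filterM keepEven [2,-1,4] gives Nothing, because a single failure along the way propagates to the final result, just like with >>=.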
A very cool Haskell trick is using filterM to get the powerset of a list (if we think of them as sets for now). The powerset of some set is a set of all subsets of that set. So if we have a set like
[1,2,3], its powerset would include the following sets:
[1,2,3]
[1,2]
[1,3]
[1]
[2,3]
[2]
[3]
[]
In other words, getting a powerset is like getting all the combinations of keeping and throwing out elements from a set. [2,3] is like the original set, only we excluded the number 1.
To make a function that returns a powerset of some list, we're going to rely on non-determinism. We take the list [1,2,3] and then look at the first element, which is 1 and we ask ourselves: should
we keep it or drop it? Well, we'd like to do both actually. So we are going to filter a list and we'll use a predicate that non-deterministically both keeps and drops every element from the list.
Here's our powerset function:
powerset :: [a] -> [[a]]
powerset xs = filterM (\x -> [True, False]) xs
Wait, that's it? Yup. We choose to drop and keep every element, regardless of what that element is. We have a non-deterministic predicate, so the resulting list will also be a non-deterministic value
and will thus be a list of lists. Let's give this a go:
ghci> powerset [1,2,3]
[[1,2,3],[1,2],[1,3],[1],[2,3],[2],[3],[]]
This takes a bit of thinking to wrap your head around, but if you just consider lists as non-deterministic values that don't know what to be so they just decide to be everything at once, it's a bit easier.
The monadic counterpart to foldl is foldM. If you remember your folds from the folds section, you know that foldl takes a binary function, a starting accumulator and a list to fold up and then folds
it from the left into a single value by using the binary function. foldM does the same thing, except it takes a binary function that produces a monadic value and folds the list up with that.
Unsurprisingly, the resulting value is also monadic. The type of foldl is this:
foldl :: (a -> b -> a) -> a -> [b] -> a
Whereas foldM has the following type:
foldM :: (Monad m) => (a -> b -> m a) -> a -> [b] -> m a
The value that the binary function returns is monadic and so the result of the whole fold is monadic as well. Let's sum a list of numbers with a fold:
ghci> foldl (\acc x -> acc + x) 0 [2,8,3,1]
14
The starting accumulator is 0 and then 2 gets added to the accumulator, resulting in a new accumulator that has a value of 2. 8 gets added to this accumulator resulting in an accumulator of 10 and so
on and when we reach the end, the final accumulator is the result.
Now what if we wanted to sum a list of numbers but with the added condition that if any number is greater than 9 in the list, the whole thing fails? It would make sense to use a binary function that
checks if the current number is greater than 9 and if it is, fails, and if it isn't, continues on its merry way. Because of this added possibility of failure, let's make our binary function return a
Maybe accumulator instead of a normal one. Here's the binary function:
binSmalls :: Int -> Int -> Maybe Int
binSmalls acc x
    | x > 9 = Nothing
    | otherwise = Just (acc + x)
Because our binary function is now a monadic function, we can't use it with the normal foldl, but we have to use foldM. Here goes:
ghci> foldM binSmalls 0 [2,8,3,1]
Just 14
ghci> foldM binSmalls 0 [2,11,3,1]
Nothing
Excellent! Because one number in the list was greater than 9, the whole thing resulted in a Nothing. Folding with a binary function that returns a Writer value is cool as well because then you log
whatever you want as your fold goes along its way.
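A sketch of what that might look like, using a hypothetical binary function that sums a list while logging every step (assuming the Writer monad from Control.Monad.Writer):

```haskell
import Control.Monad (foldM)
import Control.Monad.Writer (Writer, runWriter, tell)

-- Sum two numbers, leaving a log entry describing the step.
logSum :: Int -> Int -> Writer [String] Int
logSum acc x = do
    tell ["Added " ++ show x ++ " to " ++ show acc]
    return (acc + x)
```

Running runWriter (foldM logSum 0 [2,8,3,1]) gives the sum 14 as the result, along with one log entry per step of the fold.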
Making a safe RPN calculator
When we were solving the problem of implementing an RPN calculator, we noted that it worked fine as long as the input that it got made sense. But if something went wrong, it caused our whole program
to crash. Now that we know how to take some code that we have and make it monadic, let's take our RPN calculator and add error handling to it by taking advantage of the Maybe monad.
We implemented our RPN calculator by taking a string like "1 3 + 2 *", breaking it up into words to get something like ["1","3","+","2","*"] and then folding over that list by starting out with an
empty stack and then using a binary folding function that adds numbers to the stack or manipulates numbers on the top of the stack to add them together and divide them and such.
This was the main body of our function:
import Data.List
solveRPN :: String -> Double
solveRPN = head . foldl foldingFunction [] . words
We made the expression into a list of strings, folded over it with our folding function and then when we were left with just one item in the stack, we returned that item as the answer. This was the
folding function:
foldingFunction :: [Double] -> String -> [Double]
foldingFunction (x:y:ys) "*" = (x * y):ys
foldingFunction (x:y:ys) "+" = (x + y):ys
foldingFunction (x:y:ys) "-" = (y - x):ys
foldingFunction xs numberString = read numberString:xs
The accumulator of the fold was a stack, which we represented with a list of Double values. As the folding function went over the RPN expression, if the current item was an operator, it took two
items off the top of the stack, applied the operator between them and then put the result back on the stack. If the current item was a string that represented a number, it converted that string into
an actual number and returned a new stack that was like the old one, except with that number pushed to the top.
Let's first make our folding function capable of graceful failure. Its type is going to change from what it is now to this:
foldingFunction :: [Double] -> String -> Maybe [Double]
So it will either return Just a new stack or it will fail with Nothing.
The reads function is like read, only it returns a list with a single element in case of a successful read. If it fails to read something, then it returns an empty list. Apart from returning the
value that it read, it also returns the part of the string that it didn't consume. We're going to say that it always has to consume the full input to work and make it into a readMaybe function for
convenience. Here it is:
readMaybe :: (Read a) => String -> Maybe a
readMaybe st = case reads st of [(x,"")] -> Just x
                                _ -> Nothing
Testing it out:
ghci> readMaybe "1" :: Maybe Int
Just 1
ghci> readMaybe "GO TO HELL" :: Maybe Int
Nothing
Okay, it seems to work. So, let's make our folding function into a monadic function that can fail:
foldingFunction :: [Double] -> String -> Maybe [Double]
foldingFunction (x:y:ys) "*" = return ((x * y):ys)
foldingFunction (x:y:ys) "+" = return ((x + y):ys)
foldingFunction (x:y:ys) "-" = return ((y - x):ys)
foldingFunction xs numberString = liftM (:xs) (readMaybe numberString)
The first three cases are like the old ones, except the new stack gets wrapped in a Just (we used return here to do this, but we could have written Just just as well). In the last case, we do
readMaybe numberString and then we map (:xs) over it. So if the stack xs is [1.0,2.0] and readMaybe numberString results in a Just 3.0, the result is Just [3.0,1.0,2.0]. If readMaybe numberString
results in a Nothing then the result is Nothing. Let's try out the folding function by itself:
ghci> foldingFunction [3,2] "*"
Just [6.0]
ghci> foldingFunction [3,2] "-"
Just [-1.0]
ghci> foldingFunction [] "*"
Nothing
ghci> foldingFunction [] "1"
Just [1.0]
ghci> foldingFunction [] "1 wawawawa"
Nothing
It looks like it's working! And now it's time for the new and improved solveRPN. Here it is ladies and gents!
import Data.List
solveRPN :: String -> Maybe Double
solveRPN st = do
    [result] <- foldM foldingFunction [] (words st)
    return result
Just like before, we take the string and make it into a list of words. Then, we do a fold, starting with the empty stack, only instead of doing a normal foldl, we do a foldM. The result of that foldM
should be a Maybe value that contains a list (that's our final stack) and that list should have only one value. We use a do expression to get that value and we call it result. In case the foldM
returns a Nothing, the whole thing will be a Nothing, because that's how Maybe works. Also notice that we pattern match in the do expression, so if the list has more than one value or none at all,
the pattern match fails and a Nothing is produced. In the last line we just do return result to present the result of the RPN calculation as the result of the final Maybe value.
Let's give it a shot:
ghci> solveRPN "1 2 * 4 +"
Just 6.0
ghci> solveRPN "1 2 * 4 + 5 *"
Just 30.0
ghci> solveRPN "1 2 * 4"
Nothing
ghci> solveRPN "1 8 wharglbllargh"
Nothing
The first failure happens because the final stack isn't a list with one element in it and so the pattern matching in the do expression fails. The second failure happens because readMaybe returns a Nothing, which makes the whole fold result in a Nothing as well.
Composing monadic functions
When we were learning about the monad laws, we said that the <=< function is just like composition, only instead of working for normal functions like a -> b, it works for monadic functions like a ->
m b. For instance:
ghci> let f = (+1) . (*100)
ghci> f 4
401
ghci> let g = (\x -> return (x+1)) <=< (\x -> return (x*100))
ghci> Just 4 >>= g
Just 401
In this example we first composed two normal functions, applied the resulting function to 4 and then we composed two monadic functions and fed Just 4 to the resulting function with >>=.
If we have a bunch of functions in a list, we can compose them all into one big function by just using id as the starting accumulator and the . function as the binary function. Here's an example:
ghci> let f = foldr (.) id [(+1),(*100),(+1)]
ghci> f 1
201
The function f takes a number and then adds 1 to it, multiplies the result by 100 and then adds 1 to that. Anyway, we can compose monadic functions in the same way, only instead of normal composition we
use <=< and instead of id we use return. We don't have to use a foldM over a foldr or anything because the <=< function makes sure that composition happens in a monadic fashion.
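For instance, here's the same fold lifted into the Maybe monad, a small sketch with hypothetical Maybe-returning versions of the functions above:

```haskell
import Control.Monad ((<=<))

-- Folding a list of monadic functions into one big monadic function.
bigFunction :: Int -> Maybe Int
bigFunction = foldr (<=<) return [\x -> Just (x + 1), \x -> Just (x * 100)]
```

bigFunction 4 first multiplies by 100 and then adds 1, all inside the Maybe context, giving Just 401, just like the plain composition gave 401 earlier.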
When we were getting to know the list monad in the previous chapter, we used it to figure out if a knight can go from one position on a chessboard to another in exactly three moves. We had a function
called moveKnight which took the knight's position on the board and returned all the possible moves that he can make next. Then, to generate all the possible positions that he can have after taking
three moves, we made the following function:
in3 start = return start >>= moveKnight >>= moveKnight >>= moveKnight
And to check if he can go from start to end in three moves, we did the following:
canReachIn3 :: KnightPos -> KnightPos -> Bool
canReachIn3 start end = end `elem` in3 start
Using monadic function composition, we can make a function like in3, only instead of generating all the positions that the knight can have after making three moves, we can do it for an arbitrary
number of moves. If we look at in3, we see that we used moveKnight three times and each time we used >>= to feed it all the possible previous positions. So now, let's make it more general. Here's
how to do it:
import Data.List
inMany :: Int -> KnightPos -> [KnightPos]
inMany x start = return start >>= foldr (<=<) return (replicate x moveKnight)
First we use replicate to make a list that contains x copies of the function moveKnight. Then, we monadically compose all those functions into one, which gives us a function that takes a starting
position and non-deterministically moves the knight x times. Then, we just make the starting position into a singleton list with return and feed it to the function.
Now, we can change our canReachIn3 function to be more general as well:
canReachIn :: Int -> KnightPos -> KnightPos -> Bool
canReachIn x start end = end `elem` inMany x start
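Putting this together with the moveKnight function from the previous chapter, the whole thing runs as one self-contained sketch:

```haskell
import Control.Monad ((<=<))

type KnightPos = (Int, Int)

-- All the moves a knight can legally make from a given position.
moveKnight :: KnightPos -> [KnightPos]
moveKnight (c, r) = filter onBoard
    [(c+2,r-1),(c+2,r+1),(c-2,r-1),(c-2,r+1)
    ,(c+1,r-2),(c+1,r+2),(c-1,r-2),(c-1,r+2)]
    where onBoard (c', r') = c' `elem` [1..8] && r' `elem` [1..8]

-- All the positions reachable in exactly x moves.
inMany :: Int -> KnightPos -> [KnightPos]
inMany x start = return start >>= foldr (<=<) return (replicate x moveKnight)

canReachIn :: Int -> KnightPos -> KnightPos -> Bool
canReachIn x start end = end `elem` inMany x start
```

Just as with canReachIn3, canReachIn 3 (6,2) (6,1) comes out True and canReachIn 3 (6,2) (7,3) comes out False.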
Making monads
In this section, we're going to look at an example of how a type gets made, identified as a monad and then given the appropriate Monad instance. We don't usually set out to make a monad with the sole
purpose of making a monad. Instead, we usually make a type whose purpose is to model an aspect of some problem and then later on if we see that the type represents a value with a context and can
act like a monad, we give it a Monad instance.
As we've seen, lists are used to represent non-deterministic values. A list like [3,5,9] can be viewed as a single non-deterministic value that just can't decide what it's going to be. When we feed a
list into a function with >>=, it just makes all the possible choices of taking an element from the list and applying the function to it and then presents those results in a list as well.
If we look at the list [3,5,9] as the numbers 3, 5 and 9 occurring at once, we might notice that there's no info regarding the probability that each of those numbers occurs. What if we wanted to
model a non-deterministic value like [3,5,9], but we wanted to express that 3 has a 50% chance of happening and 5 and 9 both have a 25% chance of happening? Let's try and make this happen!
Let's say that every item in the list comes with another value, a probability of it happening. It might make sense to present this like this then:
[(3,0.5),(5,0.25),(9,0.25)]
In mathematics, probabilities aren't usually expressed in percentages, but rather in real numbers between 0 and 1. A 0 means that there's no chance in hell for something to happen and a 1 means
that it's happening for sure. Floating point numbers can get real messy real fast because they tend to lose precision, so Haskell offers us a data type for rational numbers that doesn't lose
precision. That type is called Rational and it lives in Data.Ratio. To make a Rational, we write it as if it were a fraction. The numerator and the denominator are separated by a %. Here are a few examples:
ghci> 1%4
1 % 4
ghci> 1%2 + 1%2
1 % 1
ghci> 1%3 + 5%4
19 % 12
The first line is just one quarter. In the second line we add two halves to get a whole and in the third line we add one third with five quarters and get nineteen twelfths. So let's throw out our
floating points and use Rational for our probabilities:
ghci> [(3,1%2),(5,1%4),(9,1%4)]
[(3,1 % 2),(5,1 % 4),(9,1 % 4)]
Okay, so 3 has a one out of two chance of happening while 5 and 9 will happen one time out of four. Pretty neat.
We took lists and we added some extra context to them, so this represents values with contexts too. Before we go any further, let's wrap this into a newtype because something tells me we'll be
making some instances.
import Data.Ratio
newtype Prob a = Prob { getProb :: [(a,Rational)] } deriving Show
Alright. Is this a functor? Well, the list is a functor, so this should probably be a functor as well, because we just added some stuff to the list. When we map a function over a list, we apply it to
each element. Here, we'll apply it to each element as well, only we'll leave the probabilities as they are. Let's make an instance:
instance Functor Prob where
    fmap f (Prob xs) = Prob $ map (\(x,p) -> (f x,p)) xs
We unwrap it from the newtype with pattern matching, apply the function f to the values while keeping the probabilities as they are and then wrap it back up. Let's see if it works:
ghci> fmap negate (Prob [(3,1%2),(5,1%4),(9,1%4)])
Prob {getProb = [(-3,1 % 2),(-5,1 % 4),(-9,1 % 4)]}
Another thing to note is that the probabilities should always add up to 1. If those are all the things that can happen, it doesn't make sense for the sum of their probabilities to be anything other
than 1. A coin that lands tails 75% of the time and heads 50% of the time seems like it could only work in some other strange universe.
Now the big question, is this a monad? Given how the list is a monad, this looks like it should be a monad as well. First, let's think about return. How does it work for lists? It takes a value and
puts it in a singleton list. What about here? Well, since it's supposed to be a default minimal context, it should also make a singleton list. What about the probability? Well, return x is supposed
to make a monadic value that always presents x as its result, so it doesn't make sense for the probability to be 0. If it always has to present it as its result, the probability should be 1!
What about >>=? Seems kind of tricky, so let's make use of the fact that m >>= f always equals join (fmap f m) for monads and think about how we would flatten a probability list of probability lists.
As an example, let's consider this list where there's a 25% chance that exactly one of 'a' or 'b' will happen. Both 'a' and 'b' are equally likely to occur. Also, there's a 75% chance that exactly
one of 'c' or 'd' will happen. 'c' and 'd' are also equally likely to happen. Here's a picture of a probability list that models this scenario:
What are the chances for each of these letters to occur? If we were to draw this as just four boxes, each with a probability, what would those probabilities be? To find out, all we have to do is
multiply each probability with all of probabilities that it contains. 'a' would occur one time out of eight, as would 'b', because if we multiply one half by one quarter we get one eighth. 'c' would
happen three times out of eight because three quarters multiplied by one half is three eighths. 'd' would also happen three times out of eight. If we sum all the probabilities, they still add up to one.
Here's this situation expressed as a probability list:
thisSituation :: Prob (Prob Char)
thisSituation = Prob
[( Prob [('a',1%2),('b',1%2)] , 1%4 )
,( Prob [('c',1%2),('d',1%2)] , 3%4)
]
Notice that its type is Prob (Prob Char). So now that we've figured out how to flatten a nested probability list, all we have to do is write the code for this and then we can write >>= simply as join
(fmap f m) and we have ourselves a monad! So here's flatten, which we'll use because the name join is already taken:
flatten :: Prob (Prob a) -> Prob a
flatten (Prob xs) = Prob $ concat $ map multAll xs
where multAll (Prob innerxs,p) = map (\(x,r) -> (x,p*r)) innerxs
The function multAll takes a pair of a probability list and the probability p that comes with it, and then multiplies every inner probability by p, returning a list of pairs of items and probabilities.
We map multAll over each pair in our nested probability list and then we just flatten the resulting nested list.
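As a quick check of flatten (my own, not from the book; the Prob newtype from earlier in the chapter is repeated here so the snippet runs on its own), flattening thisSituation multiplies the probabilities through, matching the numbers worked out above:

```haskell
import Data.Ratio ((%))

-- The book's newtype from earlier in the chapter, repeated so this
-- snippet is self-contained:
newtype Prob a = Prob { getProb :: [(a, Rational)] } deriving Show

flatten :: Prob (Prob a) -> Prob a
flatten (Prob xs) = Prob $ concat $ map multAll xs
    where multAll (Prob innerxs, p) = map (\(x, r) -> (x, p*r)) innerxs

thisSituation :: Prob (Prob Char)
thisSituation = Prob
    [ (Prob [('a',1%2),('b',1%2)], 1%4)
    , (Prob [('c',1%2),('d',1%2)], 3%4)
    ]

main :: IO ()
main = print (getProb (flatten thisSituation))
-- prints [('a',1 % 8),('b',1 % 8),('c',3 % 8),('d',3 % 8)]
```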
Now that we have all that we need, we can write a Monad instance!
instance Monad Prob where
return x = Prob [(x,1%1)]
m >>= f = flatten (fmap f m)
fail _ = Prob []
Because we already did all the hard work, the instance is very simple. We also defined the fail function, which is the same as it is for lists, so if there's a pattern match failure in a do
expression, a failure occurs within the context of a probability list.
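Here's a small illustration of fail at work (my own, not from the book). Note that on a recent GHC the Applicative instance is mandatory and fail lives in the MonadFail class, so the instances below are arranged accordingly:

```haskell
import Control.Monad (ap)
import Data.Ratio ((%))

newtype Prob a = Prob { getProb :: [(a, Rational)] } deriving Show

instance Functor Prob where
    fmap f (Prob xs) = Prob $ map (\(x,p) -> (f x, p)) xs

flatten :: Prob (Prob a) -> Prob a
flatten (Prob xs) = Prob $ concat $ map multAll xs
    where multAll (Prob innerxs, p) = map (\(x, r) -> (x, p*r)) innerxs

instance Applicative Prob where
    pure x = Prob [(x, 1%1)]
    (<*>)  = ap

instance Monad Prob where
    return  = pure
    m >>= f = flatten (fmap f m)

instance MonadFail Prob where
    fail _ = Prob []

data Coin = Heads | Tails deriving (Show, Eq)

coin :: Prob Coin
coin = Prob [(Heads,1%2),(Tails,1%2)]

-- The Tails outcome fails the Heads pattern, so it contributes
-- nothing; note the remaining probabilities no longer sum to 1.
onlyHeads :: Prob String
onlyHeads = do
    Heads <- coin
    return "got heads"

main :: IO ()
main = print (getProb onlyHeads)
-- prints [("got heads",1 % 2)]
```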
It's also important to check if the monad laws hold for the monad that we just made. The first one says that return x >>= f should be equal to f x. A rigorous proof would be rather tedious, but we
can see that if we put a value in a default context with return and then fmap a function over that and flatten the resulting probability list, every probability that results from the function would
be multiplied by the 1%1 probability that we made with return, so it wouldn't affect the context. The reasoning for m >>= return being equal to just m is similar. The third law states that f <=< (g
<=< h) should be the same as (f <=< g) <=< h. This one holds as well, because it holds for the list monad which forms the basis of the probability monad and because multiplication is associative. 1%2
* (1%3 * 1%5) is equal to (1%2 * 1%3) * 1%5.
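We can at least spot-check the first law numerically (my own check, not a proof; ret and bind are stand-in names for return and >>= so the snippet runs standalone):

```haskell
import Data.Ratio ((%))

newtype Prob a = Prob { getProb :: [(a, Rational)] } deriving Show

instance Functor Prob where
    fmap f (Prob xs) = Prob $ map (\(x,p) -> (f x, p)) xs

flatten :: Prob (Prob a) -> Prob a
flatten (Prob xs) = Prob $ concat $ map multAll xs
    where multAll (Prob innerxs, p) = map (\(x, r) -> (x, p*r)) innerxs

-- return and >>= written directly in terms of flatten, as in the text:
ret :: a -> Prob a
ret x = Prob [(x, 1%1)]

bind :: Prob a -> (a -> Prob b) -> Prob b
bind m f = flatten (fmap f m)

f :: Int -> Prob Int
f x = Prob [(x, 1%3), (x+1, 2%3)]

main :: IO ()
main = print (getProb (ret 3 `bind` f) == getProb (f 3))
-- prints True: multiplying by the 1%1 from return changes nothing
```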
Now that we have a monad, what can we do with it? Well, it can help us do calculations with probabilities. We can treat probabilistic events as values with contexts and the probability monad will
make sure that those probabilities get reflected in the probabilities of the final result.
Say we have two normal coins and one loaded coin that gets tails an astounding nine times out of ten and heads only one time out of ten. If we throw all the coins at once, what are the odds of all of
them landing tails? First, let's make probability values for a normal coin flip and for a loaded one:
data Coin = Heads | Tails deriving (Show, Eq)
coin :: Prob Coin
coin = Prob [(Heads,1%2),(Tails,1%2)]
loadedCoin :: Prob Coin
loadedCoin = Prob [(Heads,1%10),(Tails,9%10)]
And finally, the coin throwing action:
import Data.List (all)
flipThree :: Prob Bool
flipThree = do
a <- coin
b <- coin
c <- loadedCoin
return (all (==Tails) [a,b,c])
Giving it a go, we see that the odds of all three landing tails are not that good, despite cheating with our loaded coin:
ghci> getProb flipThree
[(False,1 % 40),(False,9 % 40),(False,1 % 40),(False,9 % 40),
(False,1 % 40),(False,9 % 40),(False,1 % 40),(True,9 % 40)]
All three of them will land tails nine times out of forty, which is less than 25%. We see that our monad doesn't know how to join all of the False outcomes where all coins don't land tails into one
outcome. That's not a big problem, since writing a function to put all the same outcomes into one outcome is pretty easy and is left as an exercise to the reader (you!)
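For reference, here is one possible sketch of such a function (my own; condense is a name I made up, and the supporting newtype is repeated so it runs standalone). It sorts the outcomes, groups equal ones, and sums the probabilities within each group:

```haskell
import Data.Function (on)
import Data.List (groupBy, sortBy)
import Data.Ratio ((%))

newtype Prob a = Prob { getProb :: [(a, Rational)] } deriving Show

-- Sort the outcomes, group equal ones together, and sum the
-- probabilities within each group.
condense :: Ord a => Prob a -> Prob a
condense (Prob xs) = Prob (map merge grouped)
    where grouped = groupBy ((==) `on` fst) (sortBy (compare `on` fst) xs)
          merge g = (fst (head g), sum (map snd g))

main :: IO ()
main = print (getProb (condense (Prob
        [ (False,1%40),(False,9%40),(False,1%40),(False,9%40)
        , (False,1%40),(False,9%40),(False,1%40),(True,9%40) ])))
-- prints [(False,31 % 40),(True,9 % 40)]
```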
In this section, we went from having a question (what if lists also carried information about probability?) to making a type, recognizing a monad and finally making an instance and doing something
with it. I think that's quite fetching! By now, we should have a pretty good grasp on monads and what they're about. | {"url":"http://learnyouahaskell.com/for-a-few-monads-more","timestamp":"2014-04-16T14:34:00Z","content_type":null,"content_length":"125716","record_id":"<urn:uuid:574cffb4-c68d-4702-a1e4-be1ae0741b7d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00380-ip-10-147-4-33.ec2.internal.warc.gz"} |
Investigating the diameter of solid particles effects on a laminar nanofluid flow in a curved tube using a two phase approach
Akbarinia, A. and Laur, R. (2009) Investigating the diameter of solid particles effects on a laminar nanofluid flow in a curved tube using a two phase approach. INTERNATIONAL JOURNAL OF HEAT AND
FLUID FLOW, 30 ( 4). pp. 706-714. ISSN 0142-727X
Full text is not hosted in this archive but may be available via the Official URL, or by requesting a copy from the corresponding author.
Official URL: http://www.sciencedirect.com/science?_ob=ArticleUR...
In this paper, we report the results of our numerical studies on laminar mixed convection heat transfer in a circular Curved tube with a nanofluid consisting of water and 1 vol.% Al2O3. Three
dimensional elliptic governing equations have been used. A two-phase mixture model and the control volume technique have been implemented to study the flow field. The effects of the diameter of the solid particles have been investigated: increasing the particle diameter decreases the Nusselt number and the secondary flow, while the axial velocity augments. When the particles are on the order of nanometers, increasing the diameter of the particles does not change the flow behavior. The distribution of solid nanoparticles is uniform and constant in the curved tube. (C) 2009 Elsevier Inc. All rights reserved.
| {"url":"http://www.nanoarchive.org/7343/","timestamp":"2014-04-20T08:32:30Z","content_type":null,"content_length":"16279","record_id":"<urn:uuid:802608d9-6c0d-48d9-a357-721f40987a72>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Does Ribet's level lowering theorem hold for prime powers?
I often use the following theorem (that one can state more generally) in my research.
Let E/Q be an elliptic curve of conductor N corresponding to a modular form f(E), l a prime of good or multiplicative reduction for E, and \rho(l) the 2 dimensional mod l Galois representation given
by the action on the l-torsion points. Suppose that the torsion subscheme E[l] extends to a finite flat group scheme over Z_l, and let p be a prime of multiplicative reduction for E such that \rho(l)
is unramified at p (e.g. the number field Q(E[l]) generated by the coordinates of the l-torsion points is unramified at p). Then there exists a modular form f of conductor N/p such that f is
congruent to f(E) mod l (when f has Fourier coefficients over Z then this means that all but finitely many of the coefficients are congruent mod l); one can `lower the level' from N to N/p.
Does such a result hold for powers of primes? E.g. if this holds for the mod l^n representation (instead of the mod l) does one get a congruence mod l^n?
nt.number-theory elliptic-curves galois-representations
3 Answers
There's some slides from a talk by Ian Kiming here which discuss this question. He states a theorem (on slide number 8) corresponding to the existence of the map from a Hecke algebra at level N/(p^u) (where p^u is the largest power of p dividing N) to Z/ell^n Z. As buzzard says, it's not clear that this map will lift, but Kiming speculates that if you allow the weight of your modular form to vary you can find a char 0 lift.
If you put yourself in a position where an R=T theorem holds at level N/p (e.g. E[ell] irreducible, big image, ell>2), then you'll get a map from a Hecke algebra at level N/p to Z/ell^nZ. But in general I don't see why this ring homomorphism should lift to a homomorphism from T to a char 0 integral domain and would bet on counterexamples if I were a betting man.
Richard Taylor's comment here is pertinent: boxen.math.washington.edu/msri06/problems/html/node28.html – user1125 Nov 20 '09 at 11:24
One case that you can say a bit more is if the congruence number at level $N/p$ is coprime to $l$. Specifically, if $f(E)$ is congruent to a unique modular form $g$ at level $N/p$ modulo $l$, then you can say that $f(E)$ is congruent to $g$ modulo $l^n$.
Here is Soroosh's paper on this: arxiv.org/abs/1009.0284 – William Stein Sep 9 '10 at 19:50
| {"url":"http://mathoverflow.net/questions/969/does-ribets-level-lowering-theorem-hold-for-prime-powers?sort=newest","timestamp":"2014-04-16T07:40:54Z","content_type":null,"content_length":"58632","record_id":"<urn:uuid:89bab72f-15dd-448b-a6a6-d1bb137c59c7>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 41
- Theoret. Comput. Sci , 1982
"... Trees are a very basic object in computer science. They intervene in nearly a;ly domain, and they are studied for their own, or used to represent conveniently a given situation. There are at
least three directions where investigations on trees themselves are motivated, and this for different reasons ..."
Cited by 56 (0 self)
Trees are a very basic object in computer science. They intervene in nearly a;ly domain, and they are studied for their own, or used to represent conveniently a given situation. There are at least
three directions where investigations on trees themselves are motivated, and this for different reasons. First, the notion of tree is the basis of
- Foundations of Software Technology and Theoretical Computer Science, Lecture Notes in Computer Science , 1998
"... . We deal with matching and locating of patterns in forests of variable arity. A pattern consists of a structural and a contextual condition for subtrees of a forest, both of which are given as
tree or forest regular languages. We use the notation of constraint systems to uniformly specify both kind ..."
Cited by 53 (5 self)
. We deal with matching and locating of patterns in forests of variable arity. A pattern consists of a structural and a contextual condition for subtrees of a forest, both of which are given as tree
or forest regular languages. We use the notation of constraint systems to uniformly specify both kinds of conditions. In order to implement pattern matching we introduce the class of pushdown forest
automata. We identify a special class of contexts such that not only pattern matching but also locating all of a forest's subtrees matching in context can be performed in a single traversal. We also
give a method for computing the reachable states of an automaton in order to minimize the size of transition tables. 1 Introduction In Standard Generalized Markup Language (SGML) [Gol90] documents
are represented as trees. A node in a document tree may have arbitrarily many children, independent of the symbol at that node. A sequence of documents or subdocuments is called a forest. A main task
in do...
, 1999
"... Introduction This note shows preliminaries of the hedge automaton theory. In the XML community, this theory has been recently recognized as a simple but powerful model for XML schemata. In
particular, the design of two schema languages for XML, namely RSL(Regular Schema Language) and DSD (Document ..."
Cited by 37 (2 self)
Introduction This note shows preliminaries of the hedge automaton theory. In the XML community, this theory has been recently recognized as a simple but powerful model for XML schemata. In
particular, the design of two schema languages for XML, namely RSL(Regular Schema Language) and DSD (Document Structure Description) , is directly based on this theory. 2 Hedges First, we introduce
hedges. Informally, a hedge is a sequence of trees. In the XML terminology, a hedge is a sequence of elements possibly interevened by character data (or types of character data); in particular, an
XML document is a hedge. A hedge over a finite set E (of symbols) and a finite set X (of variables) is: (1) e (the null hedge), (2) x, where x is a variable in X, (3) a(u), where a is a symbol in E
and u is a hedge (the addition of a symbol as the root node), or (4) uv, where u and v are hedges (the concatenation of two hedges). Figure 1 depicts three hedges: a{e), a{x), and a{e)b{b{e)x).
Observe that el
- INFORMATION AND COMPUTATION , 2000
"... ... this paper, and it is the starting point for proving some novel results about the undecidability of second-order unification presented in the rest of the paper. We prove that second-order
unification is undecidable in the following three cases: (1) each second-order variable occurs at most t ..."
Cited by 33 (16 self)
... this paper, and it is the starting point for proving some novel results about the undecidability of second-order unification presented in the rest of the paper. We prove that second-order
unification is undecidable in the following three cases: (1) each second-order variable occurs at most twice and there are only two second-order variables; (2) there is only one second-order variable
and it is unary; (3) the following conditions (i)–(iv) hold for some fixed integer n: (i) the arguments of all second-order variables are ground terms of size <n, (ii) the arity of all second-order variables is <n, (iii) the number of occurrences of second-order variables is ≤5, (iv) there is either a single second-order variable or there are two second-order variables and no first-order
- Science of Computer Programming , 2004
"... Abstract. Dynamic programming is a classical programming technique, applicable in a wide variety of domains such as stochastic systems analysis, operations research, combinatorics of discrete
structures, flow problems, parsing of ambiguous languages, and biosequence analysis. Little methodology has ..."
Cited by 26 (12 self)
Abstract. Dynamic programming is a classical programming technique, applicable in a wide variety of domains such as stochastic systems analysis, operations research, combinatorics of discrete
structures, flow problems, parsing of ambiguous languages, and biosequence analysis. Little methodology has hitherto been available to guide the design of such algorithms. The matrix recurrences that
typically describe a dynamic programming algorithm are difficult to construct, error-prone to implement, and, in nontrivial applications, almost impossible to debug completely. This article
introduces a discipline designed to alleviate this problem. We describe an algebraic style of dynamic programming over sequence data. We define its formal framework, based on a combination of
grammars and algebras, and including a formalization of Bellman’s Principle. We suggest a language used for algorithm design on a convenient level of abstraction. We outline three ways of
implementing this language, including an embedding in a lazy functional language. The workings of the
, 2000
"... . Ambiguity in dynamic programming arises from two independent sources, the non-uniqueness of optimal solutions and the particular recursion scheme by which the search space is evaluated.
Ambiguity, unless explicitly considered, leads to unnecessarily complicated, inflexible, and sometimes even ..."
Cited by 25 (10 self)
. Ambiguity in dynamic programming arises from two independent sources, the non-uniqueness of optimal solutions and the particular recursion scheme by which the search space is evaluated. Ambiguity,
unless explicitly considered, leads to unnecessarily complicated, inflexible, and sometimes even incorrect dynamic programming algorithms. Building upon the recently developed algebraic approach to
dynamic programming, we formalize the notions of ambiguity and canonicity. We argue that the use of canonical yield grammars leads to transparent and versatile dynamic programming algorithms. They
provide a master copy of recurrences, that can solve all DP problems in a well-defined domain. We demonstrate the advantages of such a systematic approach using problems from the areas of RNA folding
and pairwise sequence comparison. 1 Motivation and Overview 1.1 Ambiguity Issues in Dynamic Programming Dynamic Programming (DP) solves combinatorial optimization problems. It is a classical p...
, 1999
"... Motivation: Dynamic programming is probably the most popular programming method in bioinformatics. Sequence comparison, gene recognition, RNA structure prediction and hundreds of other problems
are solved by ever new variants of dynamic programming. Currently, the development of a successful dynamic ..."
Cited by 25 (9 self)
Motivation: Dynamic programming is probably the most popular programming method in bioinformatics. Sequence comparison, gene recognition, RNA structure prediction and hundreds of other problems are
solved by ever new variants of dynamic programming. Currently, the development of a successful dynamic programming algorithm is a matter of experience, talent, and luck. The typical matrix recurrence
relations that make up a dynamic programming algorithm are intricate to construct, and difficult to implement reliably. No general problem independent guidance is available. Results: This article
introduces a systematic method for constructing dynamic programming solutions to problems in biosequence analysis. By a conceptual splitting of the algorithm into a recognition and an evaluation
phase, algorithm development is simplified considerably, and correct recurrences can be derived systematically. Without additional effort, the method produces an early, executable prototype expressed
in a funct...
- Journal of Symbolic Computation , 1995
"... Reduction Systems Definition 2.2. An Abstract Reduction System (short: ARS) consists of a set A and a sequence ! i of binary relations on A, labelled by some set I. We often drop the label if I
is a singleton. We write A j= P if the ARS A = (A; ! i ; : : : ); i 2 I has the property P . Further we ..."
Cited by 12 (0 self)
Reduction Systems Definition 2.2. An Abstract Reduction System (short: ARS) consists of a set A and a sequence ! i of binary relations on A, labelled by some set I. We often drop the label if I is a
singleton. We write A j= P if the ARS A = (A; ! i ; : : : ); i 2 I has the property P . Further we write A j= P Q iff A j= P and A j= Q. An ARS A = (A; !) has the diamond property , A j= \Sigma, iff
/;! ` !;/. It has the Church-Rosser property (is confluent), A j= CR, iff (A; !!) j= \Sigma. Given an ARS A = (A; !), we write CR(t) as shorthand for (fu j t !! ug; !) j= CR. 4 Stefan Kahrs Under
most circumstances, confluence is a useful property of ARSs, mainly because: if (A; !) j= CR, and if two elements x; y 2 A are equivalent w.r.t. the smallest equivalence containing !, then there is a
z 2 A such that x!! z //y. Roughly: the ARS decides the equivalence. An ARS A = (A; ! a ; ! b ) commutes directly , A j= CD, iff / a ; ! b ` ! b ; / a . To prove confluence of an ARS, it is
- REWRITING TECHNIQUES AND APPLICATIONS , 1997
"... We show that simultaneous rigid E-unification, or SREU for short, is decidable and in fact EXPTIME-complete in the case of one variable. This result implies that the ... fragment of
intuitionistic logic with equality is decidable. Together with a previous result regarding the undecidability of the ..."
Cited by 10 (10 self)
We show that simultaneous rigid E-unification, or SREU for short, is decidable and in fact EXPTIME-complete in the case of one variable. This result implies that the ... fragment of intuitionistic
logic with equality is decidable. Together with a previous result regarding the undecidability of the ∃∃-fragment, we obtain a complete classification of decidability of the prenex fragment of intuitionistic logic with equality, in terms of the quantifier prefix. It is also proved that SREU with one variable and a constant bound on the number of rigid equations is P-complete. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=614108","timestamp":"2014-04-17T05:19:04Z","content_type":null,"content_length":"37527","record_id":"<urn:uuid:0ba3c3a0-7882-4d47-90b5-536e45996955>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
I'm a new student using MIT's OCW for 6.00 Intro Computer Science. I finished the first lecture problem set according to the instructions but seem stuck on where I go to submit it. I have searched
the website using the search tool and even googled but end up right back at the same pdf I started with that simply says in the final step, "submit". Can anyone help me please?
u dont have to submit anything for that course since its a past years course. it's a self learner program. but if u need feedback on your work u can ask here and someone will surely help u.
register for the course in edx https://www.edx.org/ where you will be submiting your problem set and get results
yes, agree on what @mwas says.
the course is called MITx 6.00x
Thanks @MicroBot, I was led to understand that this was a "Spring 2013" course when I registered and all that, only that I realised once the material was made available to me that it was older
2011,2008 etc... That is when I was inclined in that direction. Again tanks, for setting me straight. Just one other thing, if we don't submit stuff, then how are we tracked and will we get to
sit exams/quizzes in a "strict timed/controlled environment" or are we simply going to be awarded a certificate (like a certificate of attendence) for say interacting here?
@mwas Thank you for your suggestion but I am already registered, I did so in mid January, and have (IMO) gone over all the documentation with a fine toothed comb. I couldn't seem to find the
"submit" page or link anywhere
@khz me and @mwas talk about the same course tbh! yes its the spring 2013 course u need to do...its just before it was started u had the older courses as extra optional material.... now in this
6.00x course from week 2 u will have actual Problem Sets to submit ...u will submit them directly on the coursware tab.Remeber there will be a due date for each PS. plus in between of the videos
u get some finger exercises to do.They are there to help u understand the material....the due date is at the end of the whole course but i suggest u do it while watching the videos.
Thanks @MicroBot, I have seen the error of my ways after retracing my steps right from registration. I did see the correct place you mentioned on the courseware tab. Now to catch up on my lessons
for this week. I will close this question and give a medal to you for setting me straight.
ty @khz GL with the course
| {"url":"http://openstudy.com/updates/511887d7e4b09e16c5c8f910","timestamp":"2014-04-21T04:39:37Z","content_type":null,"content_length":"48493","record_id":"<urn:uuid:42537fac-a052-4b48-8f78-d597107c227f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
Steven Strogatz writes about the elements of mathematics in the NY Times
Posted by: Dave Richeson | February 1, 2010
Yesterday the mathematician Steven Strogatz wrote the first article in a multi-part series for the NY Times. In this first article, called From Fish to Infinity, he describes his intent.
Crazy as it sounds, over the next several weeks… I’ll be writing about the elements of mathematics, from pre-school to grad school, for anyone out there who’d like to have a second chance at the
subject — but this time from an adult perspective. It’s not intended to be remedial. The goal is to give you a better feeling for what math is all about and why it’s so enthralling to those who
get it.
So, let’s begin with pre-school.
I’m very interested in to see what Steve has to write on this topic. I’ll re-post the links here as they come out.
List of his essays [updated each time a new one appears]:
I was lucky enough to have a similar opportunity to write some articles for the Bangkok post with the same theme. It’s a lot of fun for the writer as you have so many choices of things you can write
By: David on February 1, 2010
at 4:35 pm
• That’s cool. Are your articles online? Do you have a link?
By: Dave Richeson on February 1, 2010
at 8:55 pm
Thanks for the heads up. I was gratified to see that the writer is an actual mathematician, as opposed to some schlub with an education degree.
By: sherifffruitfly on February 7, 2010
at 5:52 pm
• pupils have trouble learning math–and end up hating math–because of teachers with a condescending attitude like yours.
By: juno on October 10, 2012
at 9:11 pm
Posted in Links, Math, Teaching | Tags: mathematics, NY Times, Steven Strogatz | {"url":"http://divisbyzero.com/2010/02/01/steven-strogatz-writes-about-the-elements-of-mathematics-in-the-ny-times/?like=1&source=post_flair&_wpnonce=3b93935130","timestamp":"2014-04-17T04:14:05Z","content_type":null,"content_length":"67065","record_id":"<urn:uuid:96b6131b-533d-465a-b995-d22792563f60>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00225-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Is physical space three dimensional? Mathematical perspectives...
Replies: 4 Last Post: Feb 4, 2013 10:04 PM
Re: Is physical space three dimensional? Mathematical perspectives...
Posted: Feb 4, 2013 9:03 PM
typically obfuscating nondimensional analysis,
you twosome. space is clearly three-dimensional
for several applications (surveying & navigation e.g.),
and it is clearly "more complicated than that,"
for both interatomic & intergalactic processes,
hence the enormous efficacy of stringtheory,
beginning with the Kaluza theory.
however, the "compactification to a string"
of Klein may not be necessary;
it is simply an abstruse math-speak.
it is not that Kaluza was wrong, it's just that
no-one knew how to treat of such dimensionality,
other than through phase-spaces (like Hamiltonians and
Lagrangians) -- with the notable exception
of quaternions, or "the first-get vector mechanics,
wherefrom we get *all* of the lingo thereof."
> How difficult it is to speak of space. The physicists and astronomers of this day insist that space is isotropic. Remove all the material from space and it may be so, but that is not the space I
live in. Isotropic space is a misnomer. Space is clearly structured, especially when we unify it with time. But here I don't mean to go this far. I'm back at the first stage hopefully.
> > Th. Kaluza agreed with Einstein and in 1921 tried
> > to explain SRT using 5D space.
Date Subject Author
2/2/13 Re: Is physical space three dimensional? Mathematical perspectives... socratus@bezeqint.net
2/4/13 Re: Is physical space three dimensional? Mathematical perspectives... Brian Q. Hutchings
2/4/13 Re: Is physical space three dimensional? Mathematical perspectives... Brian Q. Hutchings | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2432391&messageID=8248941","timestamp":"2014-04-19T04:53:17Z","content_type":null,"content_length":"19719","record_id":"<urn:uuid:42ad58c2-3f19-406d-96fe-50dddf3986ed>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Billerica Statistics Tutor
Find a North Billerica Statistics Tutor
...SPSS is great, but with all the syntax files and output, chaos can descend on the unsuspecting! Scheduling: I am available days, evenings, and weekends, and can meet you anywhere the T
goes. Because I enjoy reading, I have an extensive vocabulary. I can show you tips on how to expand your vocabulary.
18 Subjects: including statistics, English, writing, GRE
...I've used the TI-84 graphing calculator to explore the relation of m and b in the equation y = mx + b and its graph. I've used Excel to model the solids to derive their volume. I also used
Excel to show that the area and perimeter of a regular polygon approaches the area and circumference of a circle as n (number of sides) approaches infinity.
13 Subjects: including statistics, physics, ASVAB, algebra 1
...I also own my own business for the last 4 years. I had to give numerous presentations and public speeches during my many years of schooling and my professional career in multinational
corporations. I am happy to help students reach their full potential and become better at public speaking. I hav...
67 Subjects: including statistics, reading, English, calculus
...I have my masters plus extensive additional coursework. With over 15 years of experience teaching math and science, I am well versed in various topics from pre-algebra through calculus (including
algebra 1 and 2, geometry, pre-calculus and probability) as well as SAT and GRE prep.Algebra 2 includes...
23 Subjects: including statistics, physics, calculus, geometry
...I have successfully used this technique to improve the ear-training skills of learners from adolescents to senior citizens. I have a Bachelor's Degree in Music Education with a focus on Voice.
In addition to my academic and teaching credentials, I have extensive experience teaching basic general music concepts to students aging from young adults to senior citizens.
46 Subjects: including statistics, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/north_billerica_statistics_tutors.php","timestamp":"2014-04-20T11:05:24Z","content_type":null,"content_length":"24489","record_id":"<urn:uuid:f0bd86ff-5f6e-48e9-93e4-33b5017de1c9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computing errors with square roots of infinitesimals.
Automatic differentiation (AD) gives a way to carry out uncertainty propagation. But used in the obvious way it leads to biased estimates. This article introduces "square roots of infinitesimals" that can be used to give more accurate results.
In the real world measurements have errors and we often want to know how much our final answers are affected by those errors. One tool for measuring the sensitivity to errors of our results is
calculus. In fact, we can use automatic differentiation to give us a nice way to model error. Here's an implementation:
> import Control.Monad
> import Control.Monad.State
> import Control.Applicative
> import qualified Data.IntMap as I
> infixl 6 .+.
> infixl 7 .*
> infixl 7 *.
> data D a = D a a deriving (Eq, Show)
> instance Num a => Num (D a) where
> D x a+D x' a' = D (x + x') (a + a')
> D x a*D x' a' = D (x*x') (a*x' + x*a')
> negate (D x a) = D (negate x) (negate a)
> fromInteger n = D (fromInteger n) 0
> abs _ = error "No abs"
> signum _ = error "No signum"
> d = D 0 1
As I've talked about before, the value d can be thought of as an infinitesimal number whose square is zero. However, to first order we can replace d with a small number and use it to compute errors. Here's a function to perform such a substitution:
> approx :: Num a => D a -> a -> a
> approx (D x d) e = x+d*e
Suppose we have a square whose side we've measured as 1m to an accuracy of 1cm. We can represent this as:
> sq_side = D 1 0.01
We can now compute the area:
> sq_area = sq_side^2
We get D 1.0 2.0e-2. We can interpret this as meaning the area is 1m² with an accuracy of 0.02m².
We can make "to an accuracy of" more precise. Differentiation models a function locally as an approximate linear function: (1m+δ)² ≈ 1m² + (2m)δ. So at 1m, the function to compute area locally scales lengths by 2m. So if the length measurement is actually a sample from a distribution with mean 1m and standard deviation 0.01m, the area
is approximately a sample from a distribution with mean 1m² and SD 0.02m².
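Here's a standalone sketch checking that first-order picture (the D type and approx are reproduced from above so the snippet compiles on its own; it is not part of the literate program):

```haskell
-- D x a represents x + a*d, where d is an infinitesimal with d^2 = 0.
data D a = D a a deriving (Eq, Show)

instance Num a => Num (D a) where
  D x a + D x' a' = D (x + x') (a + a')
  D x a * D x' a' = D (x * x') (a * x' + x * a')  -- the a*a' term is dropped: d^2 = 0
  negate (D x a)  = D (negate x) (negate a)
  fromInteger n   = D (fromInteger n) 0
  abs _           = error "No abs"
  signum _        = error "No signum"

-- Substitute a concrete small number for the infinitesimal.
approx :: Num a => D a -> a -> a
approx (D x dx) eps = x + dx * eps

main :: IO ()
main = do
  let side = D 1 0.01 :: D Double
      area = side ^ 2
  print area               -- D 1.0 2.0e-2
  print (approx area 1)    -- first-order estimate: 1.02
  print (1.01 ^ 2 :: Double)  -- exact area of a 1.01m square: ~1.0201
```

The 10⁻⁴ gap between 1.02 and 1.0201 is invisible here because D only tracks first derivatives; capturing it is exactly what the machinery below is for.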
This approach is nice, but sometimes we want a little more accuracy in our estimates. In particular, if you square a sample from a normal distribution with small variance and positive mean, then the
nonlinearity of the squaring operation means that samples that are larger than the mean move further away from the mean, when squared, than samples less than the mean. So we should actually expect
our area computations to be slightly biased upwards from 1m². Unfortunately, this is a second order effect that isn't visible from looking only at first derivatives.
That's not a problem, we can easily compute second derivatives using automatic differentiation. However, that can complicate things. What happens if we use multiple measurements to compute a
quantity? Each one is a different sample from a different distribution and we don't want these measurements to be correlated. If we approach this in the obvious way, when we want to use n
measurements we'll need to compute n² partial second derivatives. However, by tweaking AD slightly we'll only need n derivatives.
Square roots of infinitesimals
In addition to the usual infinitesimal d we want to introduce quantities, wᵢ, that represent independent random "noise" variables that are infinitesimal in size. We'll be interested in expectation
values so we'll also need an expectation function, e. We want e(wᵢ) = 0. But wᵢ² is always positive so its expectation is always greater than or equal to zero. We want our random variables to be infinitesimal so we pick e(wᵢ²) = d. We also want e(wᵢwⱼ) = 0 for i ≠ j because of independence. If the wᵢ are already infinitesimal, the products dwᵢ should be zero. So let's define an algebraic structure that captures these relationships: we extend D a by introducing the wᵢ subject to d² = 0, wᵢ² = d, wᵢwⱼ = 0 for i ≠ j, and dwᵢ = 0.
Any element of this algebra can be written as x + a·d + Σᵢbᵢwᵢ. We represent the coefficients bᵢ sparsely by using an IntMap. Here's an implementation:
> data S a = S a a (I.IntMap a) deriving (Eq, Show)
> (.+.) :: Num a => I.IntMap a -> I.IntMap a -> I.IntMap a
> ( *.) :: Num a => a -> I.IntMap a -> I.IntMap a
> (.*) :: Num a => I.IntMap a -> a -> I.IntMap a
> (.*.) :: Num a => I.IntMap a -> I.IntMap a -> a
> (.+.) = I.unionWith (+)
> a *. v = I.map (a *) v
> v .* b = I.map (* b) v
> a .*. b = I.fold (+) 0 $ I.intersectionWith (*) a b
> instance Num a => Num (S a) where
> S x a b+S x' a' b' = S (x + x') (a + a') (b .+. b')
> S x a b*S x' a' b' = S (x*x') (a*x' + x*a' + b.*.b') (x*.b' .+. b.*x')
> negate (S x a b) = S (negate x) (negate a) (I.map negate b)
> fromInteger n = S (fromInteger n) 0 I.empty
> abs _ = error "No abs"
> signum _ = error "No signum"
Here are the individual wᵢ:
> w :: Num a => Int -> S a
> w i = S 0 0 (I.fromList [(i, 1)])
We compute expectation values linearly by mapping the wᵢ to zero:
> e :: Num a => S a -> D a
> e (S x a _) = D x a
We can also represent numbers whose values we know precisely:
> sure x = S x 0 I.empty
Let's revisit the area example. This time we can represent the length of the side of our square as
> sq_side' = 1+0.01*w 0
> sq_area' = sq_side'^2
We get S 1.0 1.0e-4 (fromList [(0,2.0e-2)]). We can directly read off that we have a bias of 10⁻⁴ m², which is 1cm². We can encapsulate this as:
> mean f = approx (e f) 1
We can directly read off the variance from the element of the algebra. However, we can also compute the variance using mean. It's just:
> var f = mean ((f-sure (mean f))^2)
(Note that this gives a very slightly different result from the value you can read off directly from the S object. It depends on whether we're measuring the deviation around the unbiased or biased mean. To the order we're considering here the difference is small.) Here's the direct version:
> var' (S _ _ v) = I.fold (+) 0 $ I.map (\x -> x^2) v
We can also define covariance:
> cov f g = mean ((f-sure (mean f))*(g-sure (mean g)))
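Here's a standalone sketch exercising var, var' and cov, with the S machinery reproduced from above so it runs on its own (a hypothetical check, not part of the literate program). For X = 1 + 0.01·w₀ we expect Var X ≈ σ² = 10⁻⁴, and since Cov(X, X²) ≈ 2μσ² for symmetric noise, Cov(X, X²) ≈ 2·10⁻⁴:

```haskell
import qualified Data.IntMap as I

data D a = D a a deriving (Eq, Show)
data S a = S a a (I.IntMap a) deriving (Eq, Show)

(.+.) :: Num a => I.IntMap a -> I.IntMap a -> I.IntMap a
(.+.) = I.unionWith (+)

(*.) :: Num a => a -> I.IntMap a -> I.IntMap a
a *. v = I.map (a *) v

(.*) :: Num a => I.IntMap a -> a -> I.IntMap a
v .* b = I.map (* b) v

-- Dot product of two sparse coefficient vectors.
(.*.) :: Num a => I.IntMap a -> I.IntMap a -> a
a .*. b = I.foldr (+) 0 (I.intersectionWith (*) a b)

instance Num a => Num (S a) where
  S x a b + S x' a' b' = S (x + x') (a + a') (b .+. b')
  S x a b * S x' a' b' =
    S (x * x') (a * x' + x * a' + b .*. b') ((x *. b') .+. (b .* x'))
  negate (S x a b) = S (negate x) (negate a) (I.map negate b)
  fromInteger n    = S (fromInteger n) 0 I.empty
  abs _    = error "No abs"
  signum _ = error "No signum"

w :: Num a => Int -> S a
w i = S 0 0 (I.fromList [(i, 1)])

sure :: Num a => a -> S a
sure x = S x 0 I.empty

e :: S a -> D a
e (S x a _) = D x a

approx :: Num a => D a -> a -> a
approx (D x dx) eps = x + dx * eps

mean :: Num a => S a -> a
mean f = approx (e f) 1

var, var' :: Num a => S a -> a
var f = mean ((f - sure (mean f)) ^ 2)
var' (S _ _ v) = I.foldr (+) 0 (I.map (^ 2) v)

cov :: Num a => S a -> S a -> a
cov f g = mean ((f - sure (mean f)) * (g - sure (mean g)))

main :: IO ()
main = do
  -- sure 0.01 avoids needing the Fractional instance for the literal
  let x = 1 + sure 0.01 * w 0 :: S Double
  print (var x, var' x)  -- both ~1.0e-4
  print (cov x (x ^ 2))  -- ~2.0e-4
```

Note that cov of a variable with its own square comes out positive: exactly the squaring bias discussed earlier.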
More functions
We can now follow through just like with automatic differentiation to compute lots more functions. We use the fact that f(x + a·d + Σᵢbᵢwᵢ) = f(x) + (a·f′(x) + ½f″(x)·Σᵢbᵢ²)·d + f′(x)·Σᵢbᵢwᵢ:
> instance Fractional a => Fractional (S a) where
> fromRational x = S (fromRational x) 0 I.empty
> recip (S x a b) = let r = recip x
> in S r (-a*r*r+r*r*r*(b.*.b)) ((-r*r)*.b)
> instance Floating a => Floating (S a) where
> pi = sure pi
> sin (S x a b) = let s = sin x
> c = cos x
> in S s (a*c - s/2*(b.*.b)) (c*.b)
> cos (S x a b) = let s = sin x
> c = cos x
> in S c (-a*s - c/2*(b.*.b)) ((-s)*.b)
> exp (S x a b) = let e = exp x
> in S e (a*e + e/2*(b.*.b)) (e*.b)
> sqrt (S x a b) = let s = sqrt x
> in S s (a/(2*s)-1/(8*s*s*s)*(b.*.b)) (1/(2*s)*.b) -- (1/2)f'' = -1/(8 x^(3/2))
> log (S x a b) = let r = 1/x
> in S (log x) (r*a-r*r/2*(b.*.b)) (r*.b)
> asin = undefined
> acos = undefined
> atan = undefined
> sinh = undefined
> cosh = undefined
> tanh = undefined
> asinh = undefined
> acosh = undefined
> atanh = undefined
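Every instance above is the same computation: a second-order Taylor expansion, simplified with the relations d² = 0, wᵢ² = d, dwᵢ = 0, and wᵢwⱼ = 0 for i ≠ j. As a sketch:

```latex
f\Big(x + a\,d + \sum_i b_i w_i\Big)
= f(x) + f'(x)\Big(a\,d + \sum_i b_i w_i\Big)
  + \frac{f''(x)}{2}\Big(\sum_i b_i w_i\Big)^{2}
= f(x)
+ \Big(a\,f'(x) + \tfrac{1}{2} f''(x) \sum_i b_i^{2}\Big)\,d
+ f'(x)\sum_i b_i w_i .
```

The squared sum collapses to (Σᵢbᵢ²)·d because the cross terms wᵢwⱼ vanish. For sin, for example, this gives the d-coefficient a·cos x − (sin x / 2)·Σᵢbᵢ², which is exactly the b.*.b term in the code.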
A real example
Let's make this effort worthwhile. We'll compute errors for a computation that uses the errors in a messy nonlinear way. Suppose we're in the lab measuring radioactive decay. We measure the geiger
counter reading at times t = 0hr, 1hr, 2hr, 3hr, 4hr, at which point we compute an estimate for when the decay will drop to one tenth of its original value. We'll assume the decay fits a model counts/sec = a exp(-λt) and that the counts have an error with SD 0.05. We're going to compute the error in the estimated time to hit one tenth radioactivity for the parameters used in the code below (a = 2, λ = 0.5 per hour):
> t = [0..4]
> counts = map (\i-> 2*exp(-0.5*fromIntegral i)+0.05*w i) t
We'll be fitting a curve using logarithmic regression so we'll need the following function. Given a pair of lists x and y it returns (m, c) where y = mx + c is the standard least squares fit.
> regress :: Fractional a => [a] -> [a] -> (a, a)
> regress x y =
> let sx = sum x
> sy = sum y
> sxx = sum $ map (^2) x
> sxy = sum $ zipWith (*) x y
> n = fromIntegral (length x)
> s = 1/(sx*sx-n*sxx)
> in (s*sx*sy-s*n*sxy, -s*sxx*sy+s*sx*sxy)
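A quick sanity check of regress on noise-free data (the definition is repeated so the snippet runs on its own): three points on the line y = 2x + 1 should recover a slope of 2 and an intercept of 1.

```haskell
-- Standard least-squares fit y = mx + c, returning (m, c).
regress :: Fractional a => [a] -> [a] -> (a, a)
regress x y =
  let sx  = sum x
      sy  = sum y
      sxx = sum (map (^ 2) x)
      sxy = sum (zipWith (*) x y)
      n   = fromIntegral (length x)
      s   = 1 / (sx * sx - n * sxx)
  in (s * sx * sy - s * n * sxy, negate (s * sxx * sy) + s * sx * sxy)

main :: IO ()
main = print (regress [0, 1, 2] [1, 3, 5] :: (Double, Double))
```

On the decay data, x is the list of times and y the logged counts, so m ≈ −λ and c ≈ log a.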
Logarithmic regression:
> (m, c) = regress (map fromIntegral t) (map log counts)
> lambda = -m
> a = exp c
> t_tenth = -log (0.1/a)/lambda
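The last line just inverts the fitted model, solving a·e^(−λt) = 0.1 for t:

```latex
a e^{-\lambda t} = \tfrac{1}{10}
\;\Longrightarrow\;
-\lambda t = \ln\frac{0.1}{a}
\;\Longrightarrow\;
t_{1/10} = -\frac{1}{\lambda}\ln\frac{0.1}{a}.
```

With the true parameters a = 2 and λ = 0.5 this gives t = 2 ln 20 ≈ 5.991, the "correct time" quoted below.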
We can now go ahead and compute the mean and variance of our estimate:
*Main> mean t_tenth
*Main> var t_tenth
The correct time is about 5.991, so the regression method above is biased by about 0.01. If we repeated the same experiment over and over again and averaged the estimates we got from logarithmic
regression, the process would not converge to the correct result. In fact, we can compute "ground truth" by simulating the experiment a million times in Octave and estimating the mean and variance from
that. The code is in the appendix. Obviously this is a much slower process, but it clearly demonstrates the bias of using regression this way.
GNU Octave, version 3.4.0
Copyright (C) 2011 John W. Eaton and others.
ans = 5.9798
ans = 0.15948
Final thoughts
This is yet another example of extending automatic differentiation. We have variants for single variable differentiation, multivariate differentiation, multiple differentiation,
divided differences
, splitting a function into
odd and even parts
and now automatic error propagation.
This stuff was very loosely inspired by reading
An Introduction to Stochastic Processes in Physics
. I'm attempting to capture the semi-formal rules used in that book to reason about differentials and you can think of the algebra above as representing stochastic differentials. I made a guess that the algebra is called the Itô algebra. Sure enough, you'll get a few hits if you search for that term.
The most similar published work I can find is
Automatic Propagation of Uncertainties
but it seems to just use ordinary AD.
This technique may be useful for Extended Kalman Filtering.
I haven't done the work to make precise statements about how accurate you can expect my estimates of expectations to be.
It's possible to implement a monad with syntax similar to other probability monads by using state to bump up the wᵢ index each time you generate a new random variable. But bear in mind, these are always intended to be used as *infinitesimal* random variables.
Appendix: Octave code
m = 5;
n = 1000000;
x = repmat([0:m-1]',1,n);
y = repmat([2*exp(-0.5*[0:m-1]')],1,n)+0.05*normrnd(0,1,m,n);
sx = sum(x);
sxx = sum(x.*x);
p = sum(log(y));
q = sum(x.*log(y));
s = 1./(sx.*sx-m*sxx);
m = s.*sx.*p-m*s.*q; # Redefined
c = -s.*sxx.*p+s.*sx.*q;
lambda = -m;
a = exp(c);
x_tenth = -log(0.1./a)./lambda;
3 comments:
Aaron Denney said...
Stochastic calculus comes in two forms, Itō as you noticed, and Stratonovich. They're both defined with a limiting process similar to the Riemann integral. They basically differ in that the Itō
formulation is explicitly "causal" and function evaluation happens at the beginning of a segment. The Stratonovich formulation uses a balanced "midpoint" evaluation strategy. Unlike the Riemann
integral case, these two converge differently. The big draw for the Itō formulation is that the calculations are much easier, as many expectation values are easily seen to be zero. The big draw
for the Stratonovich formulation is that it corresponds more directly to physical differential equations, and is the only one that has the right transformation properties to be used on manifolds.
Orwin said...
It is fair to point out that rounding errors in computation do not match errors of estimate in physics, which may run to three decimal places. Conversion to binary makes the problem worse: to
think otherwise is just structuralist myth-making.
Only in the ancient continued fractions are rational and irrational numbers comprehensively distinguished by termination. But an error of estimate encompasses noise as well as finite accuracy, so
the problem is generic, and comparable rather to heat of computation!
Polymath said...
Abraham Robinson's Non-Standard Analysis introduces infinitesimals into the number line. It would be interesting (and challenging) to construct an algorithmic 'library' implementing this
rigorous but relatively simple theory. I expect that implementation of your ideas in NS Analysis would result in very simple programs.
The library would have application to many areas of mathematics. That said, I am not volunteering to map the theory into any existing language! | {"url":"http://blog.sigfpe.com/2011/08/computing-errors-with-square-roots-of.html","timestamp":"2014-04-16T05:11:45Z","content_type":null,"content_length":"74578","record_id":"<urn:uuid:32462576-32fd-4f1f-be20-5f92d4855a61>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
CCSS.Math.Content.6.RP.A.1 - Wolfram Demonstrations Project
US Common Core State Standard Math 6.RP.A.1
Demonstrations 1 - 2 of 2
Description of Standard: Understand the concept of a ratio and use ratio language to describe a ratio relationship between two quantities. For example, “The ratio of wings to beaks in the bird house
at the zoo was 2:1, because for every 2 wings there was 1 beak.” “For every vote candidate A received, candidate C received nearly three votes.”
Proportional Quantities Ratio Notation
What is a good graphing calculator?
What I am asking is ... OK, I need a graphing calculator
Is it worth it to shell out the $$$ for TI-89 now to future proof or get a TI-83 and buy a TI-89 if I need one....
TI-89 not worth the extra $$$...
TI-83 overpriced, old and too slow.
Casio vs TI Graphing Calculator Comparison: http://www.casioeducation.com/resour...comparison.pdf
Try looking into this comparison chart: as you can see, the Casio 9860G outperforms the TI-83 in every category, and is almost neck and neck with the TI-89. If you want the speed and
performance of the TI-89 but with a price tag less than the TI-83, then go with the Casio model. It has a 15 MHz CPU just like the TI-89, so you know you're getting a powerful graphing calculator.
Casio FX-9860G video: http://www.youtube.com/watch?v=F11kohfvu9I
*Cliffs Quick Review Series Precalculus
by W. Michael Kelley
CliffsQuickReview course guides cover the essentials of your toughest classes. You're sure to get a firm grip on core concepts and key material and be ready for the test with this guide at your
Whether you're new to functions, analytic geometry, and matrices or just brushing up on those topics, CliffsQuickReview Precalculus can help. This guide introduces each topic, defines key terms,
and walks you through each sample problem step-by-step. In no time, you'll be ready to tackle other concepts in this book such as
Arithmetic and algebraic skills Functions and their graphs Polynomials, including binomial expansion Right and oblique angle trigonometry Equations and graphs of conic sections Matrices and their
application to systems of equations
Publisher: Wiley, John & Sons, Incorporated Pub. Date: February 2004 228pp Series: CliffsQuickReview Series
Price: $2.98
*Super Review : Pre-Calculus
Author: Research and Education Association
Get all you need to know with Super Reviews! Each Super Review is packed with in-depth, student-friendly topic reviews that fully explain everything about the subject. The Pre-Calculus Super Review
includes sets, numbers, operations and properties, coordinate geometry, fundamental algebraic topics, solving equations and inequalities, functions, trigonometry, exponents and logarithms, conic
sections, matrices, and determinants. Take the Super Review quizzes to see how much you've learned - and where you need more study. Makes an excellent study aid and textbook companion. Great for
Publisher: Research & Education Association Pub. Date: June 2000 ISBN-13: 9780878910885 328pp Series: Super Reviews
Price: $2.98
500 Pre-Calculus Questions
Sharpen your skills and prepare for your precalculus exam with a wealth of essential facts in a quick-and-easy Q&A format!
Get the question-and-answer practice you need with McGraw-Hill's 500 College Precalculus Questions. Organized for easy reference and intensive practice, the questions cover all essential
precalculus topics and include detailed answer explanations.
The 500 practice questions are similar to course exam questions so you will know what to expect on test day. Each question includes a fully detailed answer that puts the subject in context. This
additional practice helps you build your knowledge, strengthen test-taking skills, and build confidence. This book covers the key topics in precalculus.
Prepare for exam day with:
• 500 essential precalculus questions and answers organized by subject
• Detailed answers that provide important context for studying
• Content that follows the current college 101 course curriculum
Price: $16.00
Calculus for Dummies
by Mark Ryan
The mere thought of having to take a required calculus course is enough to make legions of students break out in a cold sweat. Others who have no intention of ever studying the subject have this
notion that calculus is impossibly difficult unless you happen to be a direct descendant of Einstein.
Well, the good news is that you can master calculus. It's not nearly as tough as its mystique would lead you to think. Much of calculus is really just very advanced algebra, geometry, and trig. It
builds upon and is a logical extension of those subjects. If you can do algebra, geometry, and trig, you can do calculus.
Calculus For Dummies is intended for three groups of readers:
Students taking their first calculus course – If you're enrolled in a calculus course and you find your textbook less than crystal clear, this is the book for you. It covers the most important
topics in the first year of calculus: differentiation, integration, and infinite series.
Students who need to brush up on their calculus to prepare for other studies – If you've had elementary calculus, but it's been a couple of years and you want to review the concepts to prepare for,
say, some graduate program, Calculus For Dummies will give you a thorough, no-nonsense refresher course.
Adults of all ages who'd like a good introduction to the subject – Non-student readers will find the book's exposition clear and accessible. Calculus For Dummies takes calculus out of the ivory
tower and brings it down to earth.
Publisher: Wiley, John & Sons, Incorporated Pub. Date: August 2003 ISBN-13: 9780764524981 380pp Series: For Dummies Series Edition Number: 1
Price: $19.99
EZ 101 Statistics
by Martin Sternstein
Books in the EZ-101 Study Keys series are intended as brush-up reviews for a variety of college-101 courses. They are designed as a set of classroom "notes" that reflect typical lecture material
presented in a classroom over the course of a semester. As such, they make handy supplements to college textbooks and serve as valuable pre-exam reviews. This overview of statistics covers nine
general themes: descriptive statistics, shape, probability, probability distributions, planning a study, the population proportion, the population mean, Chi-square analysis, and regression
Publisher: Barron's Educational Series, Incorporated Pub. Date: September 2005 ISBN-13: 9780764129155 200pp Series: EZ-101 Study Keys Edition Number: 2
Item is temporarily out of stock, shipping will be delayed.
Price: $8.99
Linear Algebra for Dummies
Does linear algebra leave you feeling lost? No worries —this easy-to-follow guide explains the how and the why of solving linear algebra problems in plain English. From matrices to vector spaces to
linear transformations, you'll understand the key concepts and see how they relate to everything from genetics to nutrition to spotted owl extinction.
• Line up the basics — discover several different approaches to organizing numbers and equations, and solve systems of equations algebraically or with matrices
• Relate vectors and linear transformations — link vectors and matrices with linear combinations and seek solutions of homogeneous systems
• Evaluate determinants — see how to perform the determinant function on different sizes of matrices and take advantage of Cramer's rule
• Hone your skills with vector spaces — determine the properties of vector spaces and their subspaces and see linear transformation in action
• Tackle eigenvalues and eigenvectors — define and solve for eigenvalues and eigenvectors and understand how they interact with specific matrices
Price: $19.99
Physics Demystified
Gibilisco, Stan (Author)
Understanding PHYSICS just got a whole lot EASIER!
Stumped trying to make sense of physics? Here's your solution. "Physics Demystified," Second Edition helps you grasp the essential concepts with ease.
Written in a step-by-step format, this practical guide begins by covering classical physics, including mass, force, motion, momentum, work, energy, and power, as well as the temperature and states
of matter. Electricity, magnetism, and electronics are discussed as are waves, particles, space, and time. Detailed examples, concise explanations, and worked problems make it easy to understand
the material, and end-of-chapter quizzes and a final exam help reinforce learning.
"It's a no-brainer! You'll learn about: "Scientific notation, units, and constantsNewton's laws of motionKirchhoff's lawsAlternating current and semiconductorsOpticsRelativity theory
"Simple enough for a beginner, but detailed enough for an advanced student, Physics Demystified, Second Edition helps you master this challenging and diverse subject. It's also the perfect resource
to prepare you for higher-level physics classes and college placement tests.
Price: $20.00
Showing Results 1 - 10 of 23 | {"url":"http://ucdavisstores.com/MerchList.aspx?ID=12239","timestamp":"2014-04-18T15:52:56Z","content_type":null,"content_length":"135489","record_id":"<urn:uuid:3fcb6132-3668-4f66-94f7-e16108ef8615>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
Isoperimetric inequality
From Encyclopedia of Mathematics
(in geometry and physics)
A general term referring to the inequality
For the best known isoperimetric inequalities, namely the classical one and its analogues in the Minkowski spaces Isoperimetric inequality, classical.
An extensive coverage of isoperimetric inequalities between the elements of the simplest figures, mainly polygons, can be found in [1]. Such isoperimetric inequalities are called geometric
For elementary isoperimetric inequalities between such parameters of sets, see [2] and [3]. Among them are Young's inequality:
Gale's inequality:
and the Loomis–Whitney inequality:
where [4], [5]). In Bieberbach's inequality, the diameter can be replaced by the mean width (see [5]).
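For reference, the standard form of the Loomis–Whitney inequality (notation supplied here, not taken from the source) bounds the volume of a compact set A ⊂ ℝⁿ by its n coordinate-hyperplane projections PᵢA:

```latex
\mu_n(A)^{\,n-1} \;\le\; \prod_{i=1}^{n} \mu_{n-1}\!\left(P_i A\right).
```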
In connection with problems of arrangement and covering, isoperimetric inequalities are considered that are specific for polyhedra, with the introduction of the number of edges or the sum of their
lengths, etc. (see [6]).
For convex bodies, many isoperimetric inequalities (including the classical ones and a number of inequalities between integrals of symmetric functions of principal curvatures) are special cases of
inequalities between composite objects (see Mixed-volume theory; Minkowski inequality).
The use of isoperimetric inequalities as estimates for some parameters of figures in terms of others arose within the limits of geometry. The class of isoperimetric inequalities is enriched by
mathematical physics, the theory of functions of a complex variable, functional analysis, the theory of approximations of functions, and the calculus of variations. Isoperimetric inequalities in
Riemannian geometry are noticeably more complex.
In mathematical physics, isoperimetric inequalities arose (firstly as conjectures) in papers of A. Saint-Venant (1856):
where [7] and [8]. Certain estimates for the first eigen value [9].
In functional analysis, conditions for boundedness and compactness of imbedding operators (see Imbedding theorems) for Sobolev spaces have been given in terms of isoperimetric inequalities
(connecting measure and capacity) (see [10], [11]).
For example, the estimate
Here Capacity).
Isoperimetric inequalities for volume and area are used in the proof of a priori estimates for solutions of linear and quasi-linear elliptic equations (see [12], [26]).
Specific isoperimetric inequalities arise for convex bodies in a Minkowski space in connection with the theory of approximation of functions (see Self-perimeter; Width).
Applying isoperimetric inequalities is a standard device in the theory of conformal and quasi-conformal mappings. Inequality (4) below is an example of a conformally-invariant isoperimetric
Isoperimetric inequalities involving the mean curvature of a submanifold, in particular for minimal surfaces, play an important part in the solution of the Plateau problem.
In the Riemannian geometry of non-homogeneous spaces, generalizations of the classical isoperimetric inequalities have been studied in detail only in the two-dimensional case. Let [13]):
The isoperimetric inequality (1) is valid also for a two-dimensional manifold of bounded curvature, which is a more general type of manifold than a Riemannian manifold. Equality in (1) is attained
for a non-regular object — a domain isometric to the lateral surface of a right circular cone with complete angle Winding number). In particular, for the geodesic length
Isoperimetric inequality (1) is a special case of the estimate
where Euler characteristic of the compact domain with boundary, Gaussian curvature. Similar to (2) are estimates for the area of a [14]). If the surface [15]):
where [16]). In particular, for a simply-connected saddle surface in
The inequalities mentioned remain true for general (non-regular) surfaces in [16]).
For an sectional curvature or the Ricci curvature. The simplest is a bound for the volume
where [17]). Similar isoperimetric inequalities are valid for a tubular [18]).
If the least upper bound [19]). The following linear isoperimetric inequality holds for a domain
the exact value of
In spaces of negative curvature, a number of estimates have been obtained for convex domains that generalize isoperimetric inequalities for convex bodies in [20], [21]). Thus,
The inequality mean curvature. The following isoperimetric inequality holds in the three-dimensional case:
where scalar curvature of
In the classical isoperimetric inequality, the area is bounded from above. For closed simply-connected surfaces, the area can be bounded from below in terms of the length
The exact value of [22]):
for the extremal length
The problem of similar inequalities for [22]. If
here [24], [25] for more details.
In the theory of minimal surfaces and surfaces like them, a number of isoperimetric inequalities have been obtained that hold not only for smooth [26], [27]:
In methods of proof and applications, lower bounds for the volume [27], [28] for some generalizations to minimal films in
[1] O. Bottema, "Geometric inequalities" , Noordhoff (1969) MR0262932 Zbl 0174.52401
[2] H. Hadwiger, "Vorlesungen über Inhalt, Oberfläche und Isoperimetrie" , Springer (1957) MR0102775 Zbl 0078.35703
[3] T. Bonnesen, W. Fenchel, "Theorie der konvexen Körper" , Springer (1934) MR0344997 MR0372748 MR1512278 Zbl 0008.07708 Zbl 60.0673.01
[4] L. Danzer, B. Grünbaum, V.L. Klee, "Helly's theorem and its relatives" V.L. Klee (ed.) , Convexity , Proc. Symp. Pure Math. , 7 , Amer. Math. Soc. (1963) pp. 101–180 MR157289
[5] M.S. Mel'nikov, "On the relation between volume and diameter of sets in Uspekhi Mat. Nauk , 18 : 4 (1963) pp. 165–170 (In Russian)
[6] L. Fejes Toth, "Lagerungen in der Ebene, auf der Kugel und im Raum" , Springer (1972) Zbl 0229.52009
[7] G. Pólya, G. Szegö, "Isoperimetric inequalities in mathematical physics" , Princeton Univ. Press (1951) MR0043486 Zbl 0044.38301
[8] L.E. Payne, "Isoperimetric inequalities and their applications" Siam Rev. , 9 : 3 (1967) pp. 453–488 MR0218975 Zbl 0154.12602
[9] M. Berger, P. Gauduchon, E. Mazet, "Le spectre d'une variété riemannienne" , Springer (1971) MR0282313
[10] V.G. Maz'ya, "Certain integral inequalities for functions of several variables" , Problems in mathematical analysis , 3 , Leningrad (1972) pp. 33–68 (In Russian)
[11] V.G. Maz'ya, "Classes of sets and measures connected with imbedding theorems" , Imbedding theorems and their applications , Moscow (1970) pp. 142–159 (In Russian)
[12] V.G. Maz'ya, "On weak solutions of the Dirichlet and Neumann problems" Trans. Moscow Math. Soc. , 20 (1969) pp. 135–172 Trudy Moskov. Mat. Obshch. , 20 (1969) pp. 137–172 Zbl 0226.35027
[13] A.D. Aleksandrov, V.V. Strel'tsov, "Isoperimetric problem and estimates of the length of a curve on a surface" Proc. Steklov Inst. Math. , 76 (1965) pp. 81–99 Trudy Mat. Inst. Steklov. , 76
(1965) pp. 67–80 Zbl 0162.25903
[14] Yu.D. Burago, "Note on the isoperimetric inequality on two-dimensional surfaces" Siberian Math. J. , 14 : 3 (1973) pp. 463–465 Sibirsk. Mat. Zh. , 14 : 3 (1973) pp. 666–668 Zbl 0281.52005
[15] J.K. Shahin, "Some integral formulas for closed hypersurfaces in Euclidean space" Proc. Amer. Math. Soc. , 19 : 3 (1968) pp. 609–613 MR0227906 Zbl 0165.24304
[16] Yu.D. Burago, "Isoperimetric inequalities in the theory of surfaces of bounded external curvature" , Consultants Bureau (1970) (Translated from Russian) MR0276905 Zbl 0198.55001
[17] R.L. Bishop, R.J. Crittenden, "Geometry of manifolds" , Acad. Press (1964) MR0169148 Zbl 0132.16003
[18] J. Cheeger, "Finiteness theorems for Riemannian manifolds" Amer. J. Math. , 92 : 1 (1970) pp. 61–74 MR0263092 Zbl 0194.52902
[19] G.A. Margulis, "Thesis" , VI All-Union Topological Conference , Tbilisi (1972) (In Russian)
[20] B.V. Dekster, "Estimates of the volume of a region in a Riemannian space" Math. USSR Sb. , 17 : 1 (1972) pp. 61–87 Mat. Sb. , 88 : 1 (1972) pp. 61–87 Zbl 0251.53047
[21] Yu.A. Volkov, B.V. Dekster, "Estimates of the curvature of a three-dimensional evolute" Math. USSR Sb. , 12 : 4 (1970) pp. 615–640 Mat. Sb. , 83 : 4 (1970) pp. 616–638 MR0275337 Zbl 0219.53055
[22] M. Berger, "Du côté de chez Pu" Ann. Sci. École Norm Sup. Sér. 4 , 5 : 1 (1972) pp. 1–44 MR0309008 Zbl 0227.52005
[23] M. Berger, "A l'ombre de Loewner" Ann. Sci. École Norm. Sup. Sér. 4 , 5 : 2 (1972) pp. 241–260 MR0309009
[24] W.R. Derrick, "A weighted volume-diameter inequality for J. Math. Mech. , 18 : 5 (1968) pp. 453–472 MR246204
[25] F. Almgren, "An isoperimetric inequality" Proc. Amer. Math. Soc. , 15 : 2 (1964) pp. 284–285 MR0159925 Zbl 0187.31203
[26] J. Michael, L. Simon, "Sobolev and mean-value inequalities on generalized submanifolds of Comm. Pure Appl. Math. , 26 : 23 (1973) pp. 361–379 MR0344978
[27] W. Allard, "On the first variation of a varifold" Ann. of Math. , 95 (1972) pp. 417–491 MR0307015 Zbl 0252.49028
[28] A.T. Fomenko, "Minimal compacta in Riemannian manifolds and a conjecture of Reifenberg" Izv. Akad. Nauk SSSR Ser. Mat. , 36 : 5 (1972) pp. 1049–1079 (In Russian) MR0333901
For an up-to-date account of geometric inequalities supplementing [1], see [a3].
[a1] C. Bandle, "Isoperimetric inequalities and applications" , Pitman (1980) MR0572958 Zbl 0436.35063
[a2] H. Kaul, "Isoperimetrische Ungleichung und Gauss–Bonnet–Formel für Arch. Rat. Mech. Anal. , 45 (1972) pp. 194–221
[a3] D.S. Mitrinović, J.E. Pečarić, V. Volenec, "Recent advances in geometric inequalities" , Kluwer (1989) MR1022443 Zbl 0679.51004
How to Cite This Entry:
Isoperimetric inequality. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Isoperimetric_inequality&oldid=28222
This article was adapted from an original article by Yu.D. Burago (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article
What do you think?
Re: What do you think?
By writing a program in M and running it.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
I know that. But, did you fill the square and then check all differences or did you do it by filling the squares one by one?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: What do you think?
I could write down the pseudocode for the algorithm.
Re: What do you think?
Sure! Just hide it!
Re: What do you think?
Have you finished writing the pseudo-code yet?
Re: What do you think?
Not yet, I got busy with something else.
Re: What do you think?
Pseudocode provided.
Re: What do you think?
Hi bobbym
Have you actually tried running the code, because I don't think it would work...
Re: What do you think?
It probably would not work. The pseudocode monster came and ate it up.
Re: What do you think?
You have a square of side 5 divided into 25 unit squares. In the central square a 0 is placed.
Question: In how many ways can the square be filled with non-negative numbers so that the absolute difference of numbers in two adjacent squares is never greater than 1? What is the answer if the
large square has a side of 11 instead?
Got 433 for the 3x3 matrix, so I'm hopeful that it's the answer.
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
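For anyone who wants to check the 3x3 count, here is a short brute-force sketch in Python. One assumption (which the count of 433 implies): the entries are non-negative integers. Since the centre is 0 and neighbours differ by at most 1, a cell at Manhattan distance d from the centre can hold at most d, so the search is finite:

```python
def count_fillings(n):
    """Count n x n grids of non-negative integers, centre cell fixed at 0,
    where orthogonally adjacent cells never differ by more than 1."""
    c = n // 2
    cells = [(i, j) for i in range(n) for j in range(n)]
    count = 0

    def rec(idx, grid):
        nonlocal count
        if idx == len(cells):
            count += 1
            return
        i, j = cells[idx]
        # a cell at Manhattan distance d from the centre can hold at most d
        top = abs(i - c) + abs(j - c)
        vals = (0,) if (i, j) == (c, c) else range(top + 1)
        for v in vals:
            # check the already-filled neighbours above and to the left
            if i > 0 and abs(grid[i - 1][j] - v) > 1:
                continue
            if j > 0 and abs(grid[i][j - 1] - v) > 1:
                continue
            grid[i][j] = v
            rec(idx + 1, grid)

    rec(0, [[0] * n for _ in range(n)])
    return count

print(count_fillings(3))  # 433, matching the count in the thread
```

The same routine runs for n = 5 or 11 in principle, but the state space grows quickly, so a transfer-matrix or generating-function approach like the one discussed below the 3x3 case would be needed in practice.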
Re: What do you think?
Hi gAr;
That is what we got too! I never did get an answer for a 5 x 5 and an 11 x 11, yikes!
Re: What do you think?
Hi bobbym,
Last month, I worked out a generating function with 3 variables for 3x3, can't find the sheet where I worked it out!
Anyway, there was no symmetry or pattern in it, and 5x5 would need 5 variables and much more complicated.
Re: What do you think?
I sure would have liked to have seen that gf for the 3 x 3. I tried a recurrence or two but nothing seemed to work.
Re: What do you think?
I'm working on gf for 3x3 again, may take some time.
Re: What do you think?
Hi gAr
If you have three variables, each one must stand for something. What would [x^n]F(x,y,z), [y^n]F(x,y,z) and [z^n]F(x,y,z) represent?
Re: What do you think?
Sorry, it's 2 variables for 3x3
Re: What do you think?
Hi gAr;
It sure does get the right answer!
Re: What do you think?
Yes! In the same manner, we can use 4 variables for 5x5 matrix, but requires too many cases to consider, quite a problem!
Re: What do you think?
I have had the same problem with every approach, too complicated to go to the 5 x 5.
Re: What do you think?
I think this is the hardest question I ever posed, besides "Why?".
Re: What do you think?
It is very valuable to you, is it not?
Re: What do you think?
What is?
Re: What do you think?
You may learn the Colonel's dictum.
Taylor Series problem, approximate and actual value.
(i) Derive the two-variable second-order Taylor series approximation, Q(x,y), below, to f(x,y) = x^3 + y^3 - 5xy centred at (a,b) = (3,5).
(ii) Calculate and state this approximate value at (x,y) = (5,7).
(iii) Calculate and state the actual value of f(x,y) at (5,7).
(iv) Calculate and state the error, Q(x,y) - f(x,y), at (5,7).
This is a simple one,
i've managed to work out part (i) and (ii) , provided my working isn't wrong.
anyway, i'll show you what i've done.
for (a,b) = (3,5) , f(x,y) = 77+2(x-3) + 60(y-5) + 1/2!(18)(x-3)^2 + 1/2!(22)(x-3)(y-5) + 1/2!(70)(x-3)(y-5) + 1/2!(30)(y-5)^2 .....
so before I move on, I need to check if part(ii) has the same procedure as part(i)?? they changed the variable from (a,b) to (x,y) so do I still just sub in x,y = 5,7 like how i subbed in a,b =
3,5 for part i?
part (iii) whats the difference between an actual and approximate value and how do u go about calculating it? does it have anything to do with truncation errors?
(iv) I guess you just subtract parts (iii) and (ii).
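As a sanity check on parts (ii)-(iv), here is a short Python sketch with the partials of f(x,y) = x^3 + y^3 - 5xy worked out by hand (f_x = 3x^2 - 5y, f_y = 3y^2 - 5x, f_xx = 6x, f_yy = 6y, f_xy = -5). Note there is only one mixed term in Q, namely f_xy(x-a)(y-b) = -5(x-3)(y-5), which differs from the two mixed terms in the working above:

```python
# Second-order Taylor check for f(x, y) = x^3 + y^3 - 5xy about (a, b) = (3, 5).
def f(x, y):
    return x**3 + y**3 - 5*x*y

a, b = 3, 5
fx  = 3*a**2 - 5*b   # f_x at (3,5) ->  2
fy  = 3*b**2 - 5*a   # f_y at (3,5) ->  60
fxx = 6*a            # -> 18
fyy = 6*b            # -> 30
fxy = -5             # constant mixed partial

def Q(x, y):
    dx, dy = x - a, y - b
    return (f(a, b) + fx*dx + fy*dy
            + 0.5*(fxx*dx**2 + 2*fxy*dx*dy + fyy*dy**2))

print(Q(5, 7))            # (ii) approximate value: 277.0
print(f(5, 7))            # (iii) actual value: 293
print(Q(5, 7) - f(5, 7))  # (iv) error: -16.0
```

So for (ii) you do substitute (x,y) = (5,7) into Q exactly as you substituted (a,b) = (3,5) earlier; the "actual" value in (iii) is just f(5,7) itself, and (iv) is their difference.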
Della Fenster, associate professor of mathematics at the University of Richmond, will deliver the keynote address “Mathematics: A Question of History” at the second annual Smoky Mountain
Undergraduate Research Conference on the History of Mathematics.
Fenster, who speaks in Room 214 of the McKee Building at Western Carolina University at 10 a.m. Saturday, April 22, will set the tone for a day of undergraduate presentations on topics from
“Zero-phobia” to “Florence Nightingale: Statistician.”
At the University of Richmond, Fenster teaches across the mathematics curriculum, including calculus, linear algebra, abstract algebra and number theory, as well as a course designed to cultivate
critical thinking for first-year students. Her honors include the 2003 University of Richmond Distinguished Educator Award and the 2004 State Council of Higher Education for Virginia Outstanding
Faculty Award.
Her research lies in the history of mathematics, particularly in the late 19th and early 20th centuries. She recently returned from the Erwin Schrödinger Institute for Mathematics and Physics in
Vienna, Austria, where she pursued a collaborative project related to the development of class field theory and continued work on a biography of the American mathematician Leonard Dickson.
“I hope the audience will begin to view mathematics as a vibrant field with a very human dimension,” Fenster said.
The Mathematical Association of America is helping fund the conference, which is designed to create opportunities for undergraduate students to present mathematically oriented talks and to expand
their knowledge of theory, history and applications. Conference presenters include students from Western, Davidson College, Auburn University at Montgomery and University of North Carolina-Asheville.
“The conference gives undergraduates a forum in which to present and explore research in an area that is central to both mathematics and mathematics education in a place that is central to many
universities and colleges in the southeastern United States,” said Sloan Despeaux, conference organizer and WCU assistant professor of math and computer science.
Participants include WCU student Chelsea Gilliam from Hickory, who will be co-presenting a paper about the history of pi, a number (approximated as 3.14159) that represents the ratio of a circle's
circumference to its diameter.
“We are going to be talking about ancient methods used to come up with the value of pi,” said Gilliam, who plans to major in math and minor in special education. She began studying the history of pi
after a discussion in a history of math class that deepened her interest in the subject. “This class turned out to be one of the most interesting courses I've ever taken,” she said. “I am proud to be
part of the conference, especially as a freshman.”
Constance Markley, a WCU junior majoring in secondary mathematics education, studied 17th century French mathematician Pierre de Fermat and his famous Last Theorem, which she said many consider the
holy grail of the mathematics world.
“Researching this topic was a blast,” said Markley of Waynesville. “Fermat's Last Theorem itself is so simple that a 10-year-old can understand it, yet it took over 350 years of the world's most
brilliant mathematicians' diligent work to prove it.”
For more information, check out paws.wcu.edu/Despeaux/2smurchom.html or contact Sloan Despeaux at (828) 227-3825 or despeaux@wcu.edu.
Rounding Numbers
A rounded number has about the same value as the number you start with, but it is less exact.
For example, 341 rounded to the nearest hundred is 300. That is because 341 is closer in value to 300 than to 400. When rounding off to the nearest dollar, $1.89 becomes $2.00, because $1.89 is
closer to $2.00 than to $1.00
Rules for Rounding
Here's the general rule for rounding:
• If the number you are rounding is followed by 5, 6, 7, 8, or 9, round the number up. Example: 38 rounded to the nearest ten is 40 [1]
• If the number you are rounding is followed by 0, 1, 2, 3, or 4, round the number down. Example: 33 rounded to the nearest ten is 30
What Are You Rounding to?
When rounding a number, you first need to ask: what are you rounding it to? Numbers can be rounded to the nearest ten, the nearest hundred, the nearest thousand, and so on.
Consider the number 4,827.
• 4,827 rounded to the nearest ten is 4,830
• 4,827 rounded to the nearest hundred is 4,800
• 4,827 rounded to the nearest thousand is 5,000
All the numbers to the right of the place you are rounding to become zeros. Here are some more examples:
• 34 rounded to the nearest ten is 30
• 6,809 rounded to the nearest hundred is 6,800
• 1,951 rounded to the nearest thousand is 2,000
Rounding and Fractions
Rounding fractions works exactly the same way as rounding whole numbers. The only difference is that instead of rounding to tens, hundreds, thousands, and so on, you round to tenths, hundredths,
thousandths, and so on.
• 7.8199 rounded to the nearest tenth is 7.8
• 1.0621 rounded to the nearest hundredth is 1.06
• 3.8792 rounded to the nearest thousandth is 3.879
Here's a tip: to avoid getting confused in rounding long decimals, look only at the number in the place you are rounding to and the number that follows it. For example, to round 5.3824791401 to the
nearest hundredth, just look at the number in the hundredths place—8—and the number that follows it—2. Then you can easily round it to 5.38.
Rounding and Sums
Rounding can make sums easy. For example, at a grocery store you might pick up items with the following prices:
• $2.25
• $0.88
• $2.69
If you wanted to know about how much they would cost, you could add up the prices with a pen and paper, or try to add them in your head. Or you could do it the simple way—you could estimate by
rounding off to the nearest dollar, like this:
• $2.00
• $1.00
• $3.00
By rounding off, you could easily figure out that you would need about $6.00 to pay for your groceries. This is pretty close to the exact number of $5.82.
As you can see, in finding a round sum, it is quickest to round the numbers before adding them.
1. Some statisticians prefer to round 5 to the nearest even number. As a result, about half of the time 5 will be rounded up, and about half of the time it will be rounded down. In this way, 26.5
rounded to the nearest even number would be 26—it would be rounded down. And, 77.5 rounded to the nearest even number would be 78—it would be rounded up.
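For readers who program, Python's built-in round() happens to follow this round-half-to-even convention for halfway cases, and its second argument extends rounding to tens, hundreds, and so on (a small illustration, Python 3 assumed):

```python
# Halfway cases go to the nearest even number:
print(round(26.5))  # 26 -- rounded down to the nearest even number
print(round(77.5))  # 78 -- rounded up to the nearest even number

# A negative second argument rounds to tens, hundreds, thousands:
print(round(38, -1))    # 40
print(round(4827, -2))  # 4800
print(round(4827, -3))  # 5000
```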
Fact Monster/Information Please® Database, © 2008 Pearson Education, Inc. All rights reserved.
1 Definition
Let $X$ be a set and let $A$ be a subset of $X$. The complement of $A$ in $X$ is the set $A^{\complement}=X\setminus A=\{x\in X:x\notin A\}$.
2 Properties
• $(A^{{\complement}})^{\complement}=A$
• $\emptyset^{\complement}=X$
• $X^{\complement}=\emptyset$
• If $A$ and $B$ are subsets of $X$, then $A\setminus B=A\cap B^{\complement}$, where the complement is taken in $X$.
3 de Morgan’s laws
Let $X$ be a set with subsets $A_{i}\subset X$ for $i\in I$, where $I$ is an arbitrary index-set. In other words, $I$ can be finite, countable, or uncountable. Then
$$\left(\bigcup_{i\in I}A_{i}\right)^{\complement}=\bigcap_{i\in I}A_{i}^{\complement},\qquad\left(\bigcap_{i\in I}A_{i}\right)^{\complement}=\bigcup_{i\in I}A_{i}^{\complement}.$$
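These identities are easy to verify on a small finite set; the following Python sketch checks de Morgan's laws and the properties above, with complements taken in $X=\{0,1,\dots,9\}$:

```python
# Finite-set check of the complement properties, with X = {0, 1, ..., 9}.
X = set(range(10))

def comp(S):
    """Complement of S in X."""
    return X - S

A, B = {1, 2, 3}, {3, 4, 5}

assert comp(comp(A)) == A                     # (A^c)^c = A
assert comp(set()) == X and comp(X) == set()  # complements of the empty set and of X
assert A - B == A & comp(B)                   # A \ B = A intersect B^c
assert comp(A | B) == comp(A) & comp(B)       # de Morgan, union
assert comp(A & B) == comp(A) | comp(B)       # de Morgan, intersection
print("all identities hold")
```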
Added: 2002-02-13 - 08:02
Laboratory for Emerging Technologies - Publications
Published Papers
(Approximately in reverse chronological order)
S.H. Park, Y. Nanishi, J.M. Xu, E.J. Yoon, "Improved emission efficiency of a-plane GaN light emitting diodes with silica nano-spheres integrated into a-plane GaN buffer layer," Appl. Phys. Lett., vol. 100, 191116, 2012.
S. Jung, K.-B. Kim, G. Fernandes, J. H. Kim, F. Wahab and J. Xu, "Enhanced thermoelectric power in nanopatterned carbon nanotube film," Nanotechnology, vol. 23, 135704, 2012.
Jin Ho Kim, Do-Joong Lee, Seungwoo Jung, Michael Jokubaitis, Hyunmi Kim, Ki-Bum Kim, Jimmy Xu, "Carbon Nanotube Composite for EMI Shielding and Thermal Signature Reduction," J. of Composite Materials, 2012.
V. Chivukula, D. Ciplys, J.H. Kim, R. Rimeika, J. Xu, M. Shur, "Surface Acoustic Wave Response to Optical Absorption by Graphene Composite Film," IEEE Trans. Ultrasonics, Ferroelectrics, and Frequency Control, vol. 59, no. 2, pp. 265-270, 2012.
L. Kuznetsova, G. E. Fernandes, and J. Xu, " Silicon emission in and out resonant coupling with high Q optical mode ," Proc. SPIE, vol. 8268, 82681G, 2012.
R. Osgood III, S. Giardini, J. Carlson, G. E. Fernandes, J. H. Kim, J. Xu, M. Chin, B. Nichols, M. Dubey, P. Parilla, J. Berry, R. O'Hayre, D. Ginley, P. Periasamy, H. Guthrey, R. O'Hayre, and W.
Buchwald, " Diode-coupled Ag nanoantennas for nanorectenna energy conversion ," Proc. SPIE, vol. 8096-35, 2011.
A. K. Sood, E. J. Egerton, Y. R. Puri, G. Fernandes, J. H. Kim, J. Xu, N. Goldsman, N. K. Dhar, P. S. Wijewarnasuriya, and B. I. Lineberry, " Carbon nanotube based microbolometer development for IR
imager and sensor applications ," Proc. SPIE, vol. 8155, 815513, 2011.
J. H. Kim, G. Withey, C. -H. Wu, and J. Xu, " Carbon Nanotube Array for Addressable Nano-Bioelectronic Transducers ," IEEE Sensors Journal, vol. 11, 1274, 2011.
A. Dobrinsky, A. Sadrzadeh, B.I. Yakobson, J. Xu, " Electronic Structure of Graphene Nanoribbons Subjected to Twist and Nonuniform Strain ," (invited)special issue, Intl. J. of High Speed Electronics
and Systems, vol. 20(1), 153-160, 2011.
S. K. Panda, D. Han, H. Yoo, H. Shin, H. Park, and J. Xu, " Synthesis of Step-Shaped Bismuth Nanowires-An Approach Towards the Fabrication of Self-Homojunction ," Electrochemical and Solid-State
Letters, vol. 14 (6), E21-E24, 2011.
H. Park, J. H. Kim, R. Beresford, and J. Xu, " Effects of electrical contacts on the giant photoconductive gain in a nanowire ," Appl. Phys. Lett., vol. 99, 143110, 2011.
G. E. Fernandes, M. B. Tzolov, J. H. Kim, Z. Liu, and J. Xu, " Quantum Functional Multiple-Heterojunction Devices in Filled Carbon Nanotubes ," J. Phys. Chem. C, vol. 114, 22703, 2010.
H. Park, R. Beresford, S. Hong, and J. Xu, " Geometry-and-size-dependence of electrical properties of metal contacts on semiconducting nanowires ," J. Appl. Phys., vol. 108, 094308, 2010.
G. E. Fernandes1, Z. Liu, J. H. Kim, C. -H. Hsu, M. B. Tzolov, and J. Xu, " Quantum Dot/Carbon Nanotube/Silicon Double Heterojunction for Multi-Band Room Temperature Infrared Detection ," IoP
Nanotechnology, vol. 21, 465204, 2010.
C.-H. Hsu, S. Cloutier, S. Palefsky, J. Xu, " Synthesis of Diamond Nanowires using Atmospheric-Pressure Chemical Vapor Deposition ," Nano Lett., vol. 10, 3272--3276, 2010.
A. K. Sood, R. A. Richwine, Y. R. Puri, A. Akturk, N. Goldsman, S. Potbhare, G. Fernandes, C. -H. Hsu, J. H. Kim, J. Xu, N. K. Dhar, P. S. Wijewarnasuriya, and B. I. Lineberry, " Design and
Development of Carbon Nanostructure based Microbolometers for IR Imagers and Sensors ," Proc. SPIE, vol. 7679, 76791Q, 2010.
R. Osgood III, S. Giardini, J. Carlson, B. Kimball, M. Hoey, G. E. Fernandes, Z. Liu, J. H. Kim, J. Xu, and W. Buchwald, " Coupled, Large-Area Gold Nanowire Arrays for Nanorectenna Energy Conversion
," Proc. SPIE, vol. 7757, 775715, 2010.
J. H. Kim, H. Park, C. -H. Hsu and J. Xu, "Facile Synthesis of Bismuth Sulfide Nanostructures and Morphology Tuning by Biomolecule ," J. Phys. Chem. C, vol. 114, pp. 9634-9639, 2010.
Z. Liu, J.M. Shainline, G.E. Fernandes, J. Xu, J. Chen, and C.F. Gmachl, "Continuous-wave subwavelength microdisk lasers at 1.53 um," Optics Express, vol. 18, pp. 19242-19248, 2010.
G. E. Fernandes, J. H. Kim, Z. Liu, J. Shainline, R. Osgood III, and J. Xu, "Using a Forest of Gold Nanowires to Reduce the Reflectivity of Silicon ," Opt. Mater., vol. 32, pp. 623-626, 2010.
J. M. Shainline, G. Fernandes, Z. Liu, and J. Xu, "Broad tuning of whispering-gallery modes in silicon microdisks," Opt. Express, vol. 18, pp. 14345-14352, 2010.
H. Park, H. Shin, J. H. Kim, S. Hong, and J. Xu, "Memory effect of a single-walled carbon nanotube on nitride-oxide structure under various bias conditions ," Appl. Phys. Lett., vol. 96, 023101, 2010
H.Q. Nguyen, S.M. Hollen, M.D. Stewart, Jr., J. Shainline, A. Yin, J. Xu, and J.M. Valles, " Observation of giant positive magnetoresistance in a Cooper pair insulator ," Phys. Rev. Lett., vol. 103,
157001, 2009.
M. D. Stewart Jr., H. Q. Nguyen, S. M. Hollen, A. Yin, J. M. Xu, and J. M. Valles Jr., " Enhanced suppression of superconductivity in amorphous films with nano-scale patterning ," Physica C, vol.
774, 469, 2009.
J.M. Shainline and J. Xu, "Slow light and band gaps in metallodielectric cylinder arrays ," Optics Express, vol. 17, pp. 8879-8891, 2009.
Z. Liu, J. H. Kim, G. E. Fernandes, and J. Xu, "Room Temperature Photocurrent Response of PbS/InP Heterojunction ," Appl. Phys. Lett., vol. 95, 231113, 2009.
J. Shainline, S. Elston, Z. Liu, G. Fernandes, R. Zia, and J. Xu, "Subwavelength silicon microcavities," Opt. Express, vol. 17, pp. 23323-23331, 2009.
J.M. Shainline and J. Xu, (invited) "Directly-pumped silicon lasers," Optics and Photonics News, vol. 19, no. 5, pp. 34-39, 2008.
H. Park, D. Han, J. H. Kim, C.-H. Hsu, H. Shin, and J. Xu, "One Material, Multiple Faces - Nanostructured Bismuth," ECS Transaction, vol. 25 (11), pp. 25-33, 2009.
N. Lebedev, S. Trammell, S. Tsoi, A. Spano, J. Xu, J.-H. Kim, M. Twigg, and J. Schnur, "Increasing efficiency of photoelectronic conversion by encapsulation of photosynthetic reaction center proteins
in arrayed carbon nanotube electrode," Langmuir, vol. 24, no. 16, pp. 8871-8876, 2008.
M. Jouzi, M. Kerby, A. Tripathi, and J. Xu, "A nanoneedle method for high-sensitivity low-background monitoring of protein activity," Langmuir, vol. 24, no. 19, pp. 10786-10790, 2008.
G. Withey, J.-H. Kim, and J.M. Xu, "DNA-programmable multiplexing for scalable, renewable redox protein bio-nanoelectronics," Bioelectrochemistry, vol. 74, pp. 111-117, 2008.
T.F. Kuo, M.B. Tzolov, D.A. Straus, and J. Xu, "Electron transport characteristics of the carbon nanotubes/Si heterodimensional heterostructure," Appl. Phys. Lett., vol. 92, art. no. 212107, 2008.
M. Stewart, A.J. Yin, J.M. Xu, and J. Valles, "Magnetic-field-tuned superconductor-to-insulator transitions in amorphous Bi films with nanoscale hexagonal arrays of holes," Phys. Rev. B., vol. 77, art. no. 140501, 2008.
M. Stewart, A.J. Yin, J.M. Xu, and J. Valles, "Superconducting pair correlations in an amorphous insulating nano-honeycomb film," Science, vol. 318, pp. 1273-1275, 2007. (listed in the AIP Bulletin of Physics News, Number 850 #1, December 13, 2007, as one of 'Ten Top Physics Stories for 2007')
J. Shainline and J. Xu, (invited) "Silicon as an emissive optical medium," Laser & Photon. Rev., vol. 1, no. 4, pp. 334-348, 2007.
E. Rotem, J. Shainline, and J.M. Xu, "Electroluminescence of nanopatterned silicon with carbon implantation and solid phase epitaxial regrowth," Optics Express, vol. 15, no. 21, pp. 14099-14106, 2007.
E. Rotem, J. Shainline, and J.M. Xu, "Enhanced photoluminescence from nanopatterned carbon-rich silicon grown by solid-phase epitaxy," Appl. Phys. Lett., vol. 91, art. no. 051127, 2007.
D.P. Wang, D.E. Feldman, B.R. Perkins, A.J. Yin, G.H. Wang, J.M. Xu, and A. Zaslavsky, "Hopping conduction in disordered carbon nanotubes," Solid State Comm., vol. 142, no. 5, pp. 287-291, 2007.
W. Guo, R.S. Guico, J.M. Xu, and R. Beresford, "Kinetic Monte Carlo simulation of InAs quantum dot growth on nonlithographically patterned substrates," J. Vac. Sci. Technol. B, vol. 24, no. 3, pp.
1072-1076, 2007.
A.J. Yin, R. Guico, and J.M. Xu, "Fabrication of anodic aluminum oxide templates on curved surfaces," Nanotechnology, vol. 18, no. 3, art. no. 035304, 2007.
J.I. Yeh, S. Du, T. Xia, A. Lazareck, J-H. Kim, J. Xu, and A.E. Nel, "Coordinated nanobiosensors for enhanced detection: integration of three-dimensional structures to toxicological applications,"
ECS Transactions, vol. 3, no. 29, pp. 115-126, 2007.
J. Yeh, A. Lazareck, and J.M. Xu, "Peptide nanowires for coordination and signal transduction of peroxidase biosensors to carbon nanotube electrode arrays," Biosensors and Bioelectronics, vol. 23,
pp. 568-574, 2007.
M. Tzolov, T.-F. Kuo, D. Straus, and J. Xu, "Carbon nanotube-silicon heterojunction hyperspectral photocurrent response," Mater. Res. Soc. Symp. Proc., vol. 963, art. no. 0963-Q22-01, 2007.
A. Yin, M. Tzolov, D. Cardimona, and J. Xu, "Fabrication of highly ordered anodic aluminum oxide templates on silicon substrate," IET Circuits, Devices & Systems, vol. 1, no. 3, pp. 205-209, 2007.
T. Andrews, A. Yin, and J.M. Xu, "Inelastic laser light scattering study of an ordered array of carbon nanotubes," Appl. Phys. Lett., vol. 90, art. no. 201918, 2007.
D. Straus, T.-F. Kuo, M. Tzolov, and J. Xu, "Photocurrent response of the carbon nanotube-silicon heterojunction array," IET Circuits, Devices & Systems, vol. 1, no. 3, pp. 200-204, 2007.
M. Tzolov, T.-F. Kuo, D. Straus, and J. Xu, "Carbon nanotube - silicon heterojunction arrays and infrared photocurrent responses," J. Phys. Chem. C, vol. 111, no. 15, pp. 5800-5804, 2007.
G.D. Withey, J.-H. Kim, and J.M. Xu, "Wiring efficiency of a metallizable DNA linker for site-addressable nanobioelectronic assembly," Nanotechnology, vol. 18, no. 42, art. no. 424025, 2007.
R.S. Guico, M. Tzolov, W. Guo, S.G. Cloutier, R. Beresford, and J. Xu, "Fabrication and optical characterization of highly ordered InAs/GaAs quantum dots on nonlithographically patterned substrates,"
J. Vac. Sci. Technol. B, vol. 25, no. 3, pp. 1093-1097, 2007.
M.R. Contarino, M. Sergi, A.E. Harrington, A. Lazareck, J. Xu, and I. Chaiken, "Modular, self-assembling peptide linkers for stable and regenerable carbon nanotube biosensor interfaces," J. of Mol.
Recognit., vol. 19, no. 4, pp. 363-371, 2006.
A.D. Lazareck, S.G. Cloutier, T.-F. Kuo, B.J. Taft, S.O. Kelley, and J.M. Xu, "Opto-electrical characterization of zinc oxide nanorods grown by DNA directed assembly on vertically aligned carbon
nanotube tips," Appl. Phys. Lett., vol. 89, art. no. 10309, 2006.
T.F. Kuo and J.M. Xu, "Growth and application of highly ordered array of vertical nanoposts," J. Vac. Sci. Technol. B, vol. 24, no. 4, pp. 1925-1933, 2006.
A. Yin and J. Xu, "Fabrication of highly ordered and densely packed silicon nanoneedle arrays for biosensing applications," Mater. Res. Soc. Symp. Proc., art. no. O11-02, 2006.
A. Yin, M. Tzolov, D. Cardimona, and J. Xu, "Template-growth of highly ordered carbon nanotube arrays on silicon," IEEE Trans. Nanotechnol., vol. 5, no. 5, pp. 564-567, 2006.
S.G. Cloutier, J.N. Eakin, R.S. Guico, G.P. Crawford, and J.M. Xu, "Molecular self-organization in cylindrical nanocavities," Phys. Rev. E, vol. 73, art. no. 051703, 2006.
A.D. Lazareck, S.G. Cloutier, T.F. Kuo, B.J. Taft, S.O. Kelley, and J.M. Xu, "DNA-directed synthesis of zinc oxide nanowires on carbon nanotube tips," Nanotechnology, vol. 17, pp. 2661??664, 2006.
(Featured as Nanotechnology Journal Highlights, May 10, 2006, nanotechweb.org)
S. Cloutier, C.H. Hsu, P. Kosseyrev, and J.M. Xu, ?œEnhancement of radiative recombination in silicon via phonon localization and selection-rule breaking,??Adv. Mater., vol. 18, no. 7, pp. 841-844,
2006. (featured as the cover article in Advanced Materials, April 4, 2006 issue).
M.D. Stewart, Z. Long, J.M. Valles, A. Yin, and J.M. Xu, ?œMagnetic-flux periodic response of nanoperforated ultrathin superconducting films,??Phys. Rev. B., vol. 73, art. no. 092509, 2006.
D. Levner, M. Fay, and J.M. Xu, "Programmable spectral design using a simple binary Bragg-diffractive structure," IEEE J. Quantum Electron., vol. 42, no. 4, pp. 410-417, 2006.
S. Cloutier, A. Lazareck, and J.M. Xu, "Detection of nano-confined DNA using surface-plasmon enhanced fluorescence," Appl. Phys. Lett., vol. 88, art. no. 013904, 2006.
J.F. Waters, P.R. Guduru, and J.M. Xu, "Nanotube mechanics - recent progress in shell buckling mechanics and quantum electromechanical coupling," J. Composites Sci. Technol., vol. 66, no. 9, pp.
1141-1150, 2006.
N. Kouklin, W Kim, A. Lazareck, and J.M. Xu, "Carbon nanotube probes for single cell experimentation and assays," Appl. Phys. Lett., vol. 87, art. no. 172901, 2005.
S. Cloutier, P. Kossyrev, and J.M. Xu, "Optical gain and stimulated emission in periodic nanopatterned crystalline silicon," Nature Materials, vol. 4, pp. 887-891, 2005.
P.A. Kossyrev, A. Yin, S.G. Cloutier, D.A. Cardimona, D. Huang, P.M. Alsing, and J.M. Xu, "Electric field tuning of plasmonic response of nanodot array in liquid crystal matrix," Nano Letters, vol.
5, no. 10, pp. 1978-1981, 2005.
N. Pavenayotin, M.D. Stewart, J.M. Valles, and J.M. Xu, "Spontaneous formation of ordered nano-crystal arrays in films evaporated onto nanopore array substrates," Appl. Phys. Lett., vol. 87, art.
no. 193111, 2005.
D.P. Wang, B.R. Perkins, A.J. Yin, A. Zaslavsky, J.M. Xu, R. Beresford, and G.L. Snider, "Carbon nanotube gated lateral resonant tunneling field-effect transistors," Appl. Phys. Lett., vol. 87, art.
no. 152102, 2005.
S. Cloutier, R. Guico, and J.M. Xu, "Phonon-localization in periodic uniaxially-nanostructured silicon," Appl. Phys. Lett., vol. 87, art. no. 222104, 2005.
J.F. Waters, P.R. Guduru, and J.M. Xu, "Shell buckling of individual multiwalled carbon nanotubes using nanoindentation," Appl. Phys. Lett., vol. 87, art. no. 103109, 2005.
B.R. Perkins, D.P. Wang, D. Soltman, A.J. Yin, J.M. Xu, and A. Zaslavsky, "Differential current amplification in three-terminal Y-junction carbon nanotube devices," Appl. Phys. Lett., vol. 87, art.
no. 123504, 2005.
R. Beresford and J.M. Xu, "Response to 'Comment on a growth pathway for highly ordered quantum dot arrays,'" Appl. Phys. Lett., vol. 86, art. no. 206102, 2005.
W. Guo, R.S. Guico, R. Beresford, and J.M. Xu, "Growth of highly ordered relaxed InAs/GaAs quantum dots on non-lithographically patterned substrates by molecular beam epitaxy," J. Cryst. Growth,
vol. 287, pp. 509-513, 2005.
G. Withey, J. Yeh, A. Lazareck, M. Tzolov, P. Aich, A. Yin, and J.M. Xu, "Ultra-high redox enzyme signal transduction using highly ordered carbon nanotube array electrodes," Biosens. Bioelectron.,
vol. 21, pp. 1560-1565, 2005.
S.D. Hersee, X.Y. Sun, X. Wang, N.M. Fairchild, J. Liang, and J.M. Xu, "Nanoheteroepitaxial growth of GaN on Si nanopillar arrays," J. Appl. Phys., vol. 97, art. no. 124308, 2005.
J. Liang, H. Luo, R. Beresford, and J.M. Xu, ?œA growth pathway for highly ordered quantum dot arrays,??Appl. Phys. Lett., vol. 85, no. 24, pp. 5974-5976, 2004.
H. Chick, J. Liang, S.G. Cloutier, N. Kouklin, and J.M. Xu, "Periodic array of uniform ZnO nanorods by second-order self-assembly," Appl. Phys. Lett., vol. 84, no. 17, pp. 3376-3378, 2004.
A.Z. Hartman, M. Jouzi, R.L. Barnett, and J.M. Xu, "Theoretical and experimental studies of carbon nanotube electromechanical coupling," Phys. Rev. Lett., vol. 92, art. no. 236804, 2004.
J.F. Waters, L. Riester, M. Jouzi, P.R. Guduru, and J.M. Xu, "Buckling instabilities in multiwalled carbon nanotubes under uniaxial compression," Appl. Phys. Lett., vol. 85, no. 10, pp. 1787-1789,
C. Papadopoulos, A.J. Yin, and J.M. Xu, "Temperature-dependent studies of Y-junction carbon nanotube electronic transport," Appl. Phys. Lett., vol. 85, no. 10, pp. 1769-1771, 2004.
R. Gasparac, B.J. Taft, M.A. Lapierre-Devlin, A.D. Lazareck, J.M. Xu, and S.O. Kelley, "Ultrasensitive electrocatalytic DNA detection at two- and three-dimensional nanoelectrodes," J. Am. Chem. Soc.,
vol. 126, no. 39, pp. 12270-12271, 2004.
B.J. Taft, A.D. Lazareck, G.D. Withey, A.J. Yin, J.M. Xu, and S.O. Kelley, "Site-specific assembly of DNA and appended cargo on arrayed carbon nanotubes," J. Am. Chem. Soc., vol. 126, no. 40, pp.
12750-12751, 2004.
N. Kouklin, M. Tzolov, D. Straus, A. Yin, and J.M. Xu, "Infrared absorption properties of carbon nanotubes synthesized by chemical vapor deposition," Appl. Phys. Lett., vol. 85, no. 19, pp.
4463-4465, 2004.
M. Tzolov, B. Chang, A. Yin, D. Straus, J.M. Xu, and G. Brown, "Electronic transport in a controllably grown carbon nanotube-silicon heterojunction array," Phys. Rev. Lett., vol. 92, art. no. 075505,
A.J. Yin, H. Chik, and J. Xu, "Postgrowth processing of carbon nanotube arrays - Enabling new functionalities and applications," IEEE Trans. Nanotechnol., vol. 3, no. 1, pp. 147-151, 2004.
Z. Xia, L. Riester, B.W. Sheldon, W.A. Curtin, J. Liang, A. Yin, and J.M. Xu, "Mechanical properties of highly ordered nanoporous anodic alumina membranes," Rev. Adv. Mater. Sci., vol. 6, no. 2, pp.
131-139, 2004.
Z. Xia, L. Riester, W.A. Curtin, H. Li, B.W. Sheldon, J. Liang, B. Chang, and J.M. Xu, "Direct observation of toughening mechanisms in carbon nanotube ceramic matrix composites," Acta Materialia,
vol. 52, no. 4, pp. 931-944, 2004.
H. Chik and J.M. Xu, "Nanometric superlattices: non-lithographic fabrication, materials, and prospects," Mater. Sci. Eng. R, vol. 43, no. 4, pp. 103-108, 2004.
N. Kouklin, H. Chik, J. Liang, M. Tzolov, J.M. Xu, J.B. Heroux, and W.I. Wang, "Highly periodic, three-dimensionally arranged InGaAsN : Sb quantum dot arrays fabricated nonlithographically for
optical devices," J. Phys. D, vol. 36, no. 21, pp. 2634-2638, 2003.
J. Xu, (Invited) "Nanotube electronics: Non-CMOS routes," Proc. IEEE, vol. 91, no. 11, pp. 1819-1829, 2003.
J. Liang, S.K. Hong, N. Kouklin, R. Beresford, and J.M. Xu, "Nanoheteroepitaxy of GaN on a nanopore array Si surface," Appl. Phys. Lett., vol. 83, no. 9, pp. 1752-1754, 2003.
U. Welp, Z.L. Xiao, J.S. Jiang, V.K. Vlasko-Vlasov, S.D. Bader, G.W. Crabtree, J. Liang, H. Chik, and J.M. Xu, "Vortex pinning in Nb films patterned with nano-scale hole-arrays," Physica B, vol. 329,
pp. 1338-1339, 2003.
A.J. Bennett and J.M. Xu, "Simulating the magnetic susceptibility of magnetic nanowire arrays," Appl. Phys. Lett., vol. 82, no. 19, pp. 3304-3306, 2003.
A.J. Bennett and J.M. Xu, "Simulating collective magnetic dynamics in nanodisk arrays," Appl. Phys. Lett., vol. 82, no. 15, pp. 2503-2505, 2003.
C. Papadopoulos, A. Rakitin, and J.M. Xu, "Carbon nanotube self-doping: calculation of the hole carrier concentration," Phys. Rev. B., vol. 67, art. no. 033411, 2003.
G.W. Crabtree, U. Welp, Z.L. Xiao, J.S. Jiang, V.K. Vlasko-Vlasov, S.D. Bader, J. Liang, H. Chik, and J.M. Xu, "Vortices in dense self-assembled hole arrays," Physica C, vol. 387, no. 1-2, pp. 49-56,
D. Hoppe, B. Hunt, M. Hoenk, F. Noca, and J.M. Xu, "Carbon nanotube arrays enable ultra-miniature RF circuits," NASA Tech Briefs, 30207, 2003
J.M. Xu, (Invited) "Carbon nanotube IR properties and applications," Proc. SPIE, vol. 4823, pp. 88-95, 2002.
U. Welp, Z.L. Xiao, J.S. Jiang, V.K. Vlasko-Vlasov, S.D. Bader, G.W. Crabtree, J. Liang, H. Chik, and J.M. Xu, "Superconducting transition and vortex pinning in Nb films patterned with nanoscale hole
arrays," Phys. Rev. B., vol. 66, art. no. 212507, 2002.
C. Papadopoulos, B. Chang, A. Yin, and J.M. Xu, (Invited) "Engineering carbon nanotubes via template growth," Intl. J. Nanoscience, vol. 1, no. 3/4, pp. 205-212, 2002.
J. Liang, H. Chik, and J.M. Xu, (Invited) "Non-lithographic fabrication of lateral superlattices for nanometric electromagnetic-optic applications," IEEE J. Sel. Top. Quantum Electron., vol. 8, no.
5, pp. 998-1008, 2002.
I. Levisky, J. Liang, and J.M. Xu, "Highly ordered arrays of organic-inorganic nanophotonic composites," Appl. Phys. Lett., vol. 81, no. 9, pp. 1696-1698, 2002.
J. Liang, H. Chik, A. Yin, and J.M. Xu, "Two-dimensional superlattices of nanostructures: nonlithographic formation by anodic alumina membrane template," J. Appl. Phys., vol. 91, no. 4, pp.
2544-2546, 2002.
L. Chen, A. Yin, J.S. Im, A.V. Nurmikko, J.M. Xu, and J. Han, "Fabrication of 50-100nm patterned InGaN blue light emitting heterostructures," Phys. Stat. Sol. A, vol. 188, No.1, pp. 135-138, 2001.
G.P. Wiederrecht, D.M. Tiede, G.A. Wurtz, L.X. Chen, T. Liu, and J.M. Xu, "Unique Structures of Nanoconfined Liquid Crystals Doped with Photoinduced Charge Transfer Compound," Adv. Photon Source,
A.J. Yin, J. Li, W. Jian, A.J. Bennett, and J.M. Xu, "Fabrication of highly ordered metallic nanowire arrays by electrodeposition," Appl. Phys. Lett., vol. 79, no. 7, pp. 1039-1041, 2001.
J.M. Xu, (Invited) "Highly ordered carbon nanotube arrays and IR detection," Infrared Phys. Technol., vol. 42, pp. 485-491, 2001.
A. Rakitin, P. Aich, C. Papadopoulos, Y. Kobzar, A.S. Vedeneev, J.S. Lee, and J.M. Xu, "Metallic conduction through engineered DNA: DNA nanoelectronic building blocks," Phys. Rev. Lett., vol. 86, pp.
3670-3673, 2001.
P.H. Siegel, A. Fung, H. Manohara, J. Xu, and B. Chang, "Nanoklystron: a monolithic tube approach to THz power generation," 12th Int. Conf. Space THz Tech. Proc., 2001.
J.M. Xu, (Invited) "Plastic electronics and future trends in microelectronics," Synth. Met., vol. 115, pp. 1-3, 2000.
C. Papadopoulos, A. Rakitin, J. Li, A.S. Vedeneev, and J.M. Xu, "Electronic transport in Y-junction carbon nanotubes," Phys. Rev. Lett., vol. 85, pp. 3476-3479, 2000.
P.H. Siegel, T.H. Lee, and J. Xu, "The Nanoklystron: A new concept for THz power generation," JPL New Technology Report, NPO 21014, March 21, 2000.
J. Li, C. Papadopoulos, and J.M. Xu, "Growing Y-junction carbon nanotubes," Nature, vol. 402, pp. 253-254, 2000.
A. Rakitin, C. Papadopoulos, and J.M. Xu, "Electronic properties of amorphous carbon nanotubes," Phys. Rev. B, vol. 60, pp. 5793-5796, 2000.
R. Gordon and J. Xu, "Lateral mode dynamics of semiconductor lasers," IEEE J. Quantum Electron., vol. 35, no. 12, pp. 1904-1911, 1999.
J. Li, C. Papadopoulos, and J. M. Xu, "Highly-ordered carbon nanotube arrays for electronics applications," Appl. Phys. Lett., vol. 75, no. 3, pp. 367-369, 1999.
A.A. Tager, R. Gaska, I.A. Avrutsky, M. Fay, H. Chik, A.J. SpringThorpe, S. Eicher, J.M. Xu, and M. Shur, "Ion-implanted GaAs-InGaAs lateral current injection laser," IEEE J. Sel. Top. Quantum
Electron., vol. 5, no. 3, pp. 64-672, 1999.
I.A. Avrutsky, M. Fay, and J.M. Xu, "Multiwavelength diffraction and apodization using binary superimposed gratings," IEEE Photon. Tech. Lett., vol. 10, no. 6, pp. 839-841, 1998.
J. Haruyama, D.N. Davydov, D. Routkevitch, D. Ellis, B.W. Statt, M. Moskovits, and J.M. Xu, "Coulomb blockade in nano-junction array fabricated by nonlithographic method," Solid-State Electron., vol.
42, no. 7-8, pp. 1257-1266, 1998.
A. Othonos, J. Bismuth, M. Sweeny, A. Kevorkian, and J.M. Xu, "Superimposed grating wavelength division multiplexing in Ge-doped SiO2/Si planar waveguides," Opt. Eng., vol. 37, no. 2, pp. 717-720,
E.H. Sargent, G.L. Tan, and J.M. Xu, "Lateral current injection lasers: underlying mechanisms and design for improved high-power efficiency," IEEE J. Lightwave Tech., vol. 16, no. 10, pp. 1854-1864,
E.H. Sargent, D.A. Suda, A. Margittai, F.R. Shepherd, M. Cleroux, G. Knight, N. Puetz, T. Makino, A.J. SpringThorpe, G. Chik, and J.M. Xu, "Experimental study of LCI lasers fabricated by single MOCVD
overgrowth followed by selective dopant diffusion," IEEE Photon. Tech. Lett., vol. 10, no. 11, pp. 1536-1538, 1998.
G.L. Tan, and J.M. Xu, "Modeling of gain, differential gain, index change and linewidth enhancement factor for strain-compensated InGaAs-GaAsP-InGaP QWs," IEEE Photon. Tech. Lett., vol. 10, no. 10,
pp. 1386-1388, 1998.
D.N. Davydov, J. Haruyama, D. Routkevitch, B.W. Statt, D. Ellis, M. Moskovits, and J.M. Xu, "Nonlithographic nanowire-array tunnel device: fabrication, zero-bias anomalies and Coulomb blockade,"
Phys. Rev. B., vol. 57, no. 21, pp. 13550-13553, 1998.
E.H. Sargent, J.M. Xu, C. Caneau, and C.-E., Zah, "Experimental investigation of physical mechanisms underlying lateral current injection laser operation," Appl. Phys. Lett., vol. 73, no. 3, pp.
285-287, 1998.
X. Fu, R. Gordon, G.L. Tan, and J.M. Xu, "Third-order nonlinearity induced lateral-mode frequency locking and beam instability in the high-power operation of narrow-ridge semiconductor lasers," IEEE
J. Quantum Electron., vol. 34, no. 8, pp. 1447-1454, 1998.
E.H. Sargent, K. Oe, C. Caneau, and J.M. Xu, "OEIC-enabling LCI lasers with current guides: Combined theoretical-experimental investigation of internal operating mechanisms," IEEE J. Quantum
Electron., vol. 34, no. 7, pp. 1280-1287, 1998.
I.A. Avrutsky, D. Ellis, A. Tager, H. Anis, and J.M. Xu, "Design of widely tunable semiconductor lasers and the concept of binary superimposed gratings (BSG)," IEEE J. Quantum Electron., vol. 34, no.
4, pp. 729-741, 1998.
A.J. Bennett, R.D. Clayton, and J.M. Xu, "Above-threshold longitudinal profiling of carrier nonpinning and spatial modulation in asymmetric cavity lasers," J. Appl. Phys., vol. 83, no. 7, pp.
3784-3788, 1998.
G.L. Tan, E.H. Sargent, and J.M. Xu, "A tilted mirror for lateral mode discrimination and higher kink-free power in fiber pump lasers," J. Quantum Electron., vol. 34, no. 2, pp. 353-365, 1998.
X. Fu, M. Fay, and J.M. Xu, "1 x 8 supergrating wavelength-division demultiplexer in a silica planar waveguide," Optics Lett., vol. 22, no. 21, pp. 1627-1629, 1997.
I. Avrutsky, R. Gordon, R. Clayton, and J.M. Xu, "Investigations of the spectral characteristics of 980nm InGaAs/GaAs/AlGaAs lasers," J. Quantum Electron., vol. 33, no. 10, pp. 1801-1809, 1997.
E. Sargent, G.-L. Tan, and J.M. Xu, "Physical model of OEIC-compatible lateral current injection lasers," IEEE J. Sel. Top. Quantum Electron., vol. 3, no. 2, pp. 507-512, 1997.
D. Ellis and J.M. Xu, "Electro-opto-thermal modeling of threshold current dependence on temperature," IEEE J. Sel. Top. Quantum Electron., vol. 3, no.2, pp. 640-648, 1997.
X. Fu and J.M. Xu, "A novel grating-in-etalon WDM device and a proof-of-concept demonstration of a 1x40 polarization insensitive WDM," IEEE Photon. Tech. Lett., vol. 9, no. 6, pp. 779-781, 1997.
A.A. Tager, M. Moskovits, and J.M. Xu, "Spontaneous charge polarization in single-electron tunneling through coupled nanowires," Phys. Rev. B., vol. 55, no. 7, pp. 4530-4538, 1997.
G.L. Tan, R. Mand, and J.M. Xu, "Self-consistent modeling of beam instabilities in 980nm fiber pump lasers," J. Quantum Electron., vol. 33, no. 8, pp. 1384-1395, 1997.
Q.M. Zhang, J. Sitch, J. Hu, R. Surridge, and J.M. Xu, "A new large signal HBT model," IEEE Trans. Microwave Theory Tech., vol. 44, no. 11, pp. 2001-2009, 1997.
M. Xu, R. Clayton, G.L. Tan, and J.M. Xu, "Increased threshold for the first-order lateral mode lasing in low-ridge waveguide high power QW lasers," IEEE Photon. Tech. Lett., vol. 8, no. 11, pp.
1444-1446, 1996.
D. Routkevitch, T. Bigioni, M. Moskovits, and J.M. Xu, "Electrochemical fabrication of CdS nanowire arrays in porous anodic aluminum oxide templates," J. Phys. Chem., vol. 100, no. 33, pp.
14037-14047, 1996.
K. Lee, H. Anis, T. Makino, and J.M. Xu, "The effect of field-interference in modelling gain-coupled DFB lasers," Can. J. Phys., vol. 74, pp. S20-S40, 1996.
M.S. Shur, S.M. Sze, and J.M. Xu, "Present and future trends in device science and technologies - Preface," IEEE Trans. Electron Dev., vol. 43, no. 10, pp. 1618-1620, 1996.
D. Routkevitch, A.A. Tager, J. Haruyama, D. Almawlawi, M. Moskovits, and J.M. Xu, "Nonlithographic nano-wire arrays: fabrication, physics, and device applications," IEEE Trans. Electron Dev., vol.
43, no. 10, pp. 1646-1658, 1996.
H. Anis, T. Makino, and J.M. Xu, "Effect of gain coupling on the intermodulation distortion in DFB laser," IEEE Photon. Tech. Lett., vol. 8, no. 8, pp. 995-997, 1996.
E.H. Sargent, D. Pavlidis, H. Anis, N. Golinescu, J.M. Xu, R. Clayton, and H.B. Kim, "Longitudinal carrier density profiling in semiconductor lasers via spectral analysis of side spontaneous
emission," J. Appl. Phys., vol. 80, no. 3, pp. 1904-1906, 1996.
S. Mehdi Fakhraie, J.M. Xu, and K.C. Smith, "Design of CMOS quadratic neural networks," IEEE Pacific Rim Conf. Communications, Computers, and Signal Processing, pp. 493-496, 1995.
D.A. Suda, H. Lu, T. Makino, and J.M. Xu, "An investigation of lateral current injection laser internal operation mechanisms," IEEE Photon. Tech. Lett., vol. 7, no. 10, pp. 1122-1124, 1995.
M.L. Xu, G.L. Tan, J.M. Xu, M. Ohkubo, T. Fukushima, M. Irikawa, and R.S. Mand, "Ultra-high differential gain in GaInAs/AlGaInAs quantum wells: experiment and modeling," IEEE Photon. Tech. Lett.,
vol. 7, no. 9, pp. 947-949, 1995.
N. Bewtra, D.A. Suda, G.L. Tan, F. Chatenoud, and J.M. Xu, "Modeling of quantum well lasers with electro-opto-thermal interaction," IEEE J. Sel. Top. Quantum Electron., vol. 1, no. 2, pp. 331-340,
H. Anis, T. Makino, and J.M. Xu, "Mirror effects on dynamic response of surface emitting lasers," IEEE Photon. Tech. Lett., vol. 7, no. 3, pp. 232-234, 1995.
A.I. Akhtar and J.M. Xu, "Differential gain in coupled quantum well lasers," J. Appl. Phys., vol. 78, no. 5, pp. 2962-2969, 1995.
J. Guthrie, G.L. Tan, M. Irikawa, R.S. Mand, and J.M. Xu, "Beam instability of 980nm power laser: experiment and analysis," IEEE Photon. Tech. Lett., vol. 6, no. 12, pp. 1409-1411, 1994.
R.Q. Yang, J.M. Xu, and M. Sweeny, "Selection rules of intersubband transitions in conduction band quantum wells," Phys. Rev. B., vol. 50, no. 11, pp. 7474-7482, 1994.
R.Q. Yang, J.M. Xu, and H.C. Liu, "Effect of nonparabolicity on the lifetime limited frequency in resonant tunneling diodes," Superlatt. Microstruct., vol. 14, no. 2-3, pp. 195-198, 1994.
J. Hu, Q.M. Zhang, R.K. Surridge, J.M. Xu, and D. Pavlidis, "A new emitter design of InGaP/GaAs HBT's for high frequency applications," IEEE Electron Device Lett., vol. 14, no. 12, pp. 563-565, 1993.
V. Minier and J.M. Xu, "Coupled-mode analysis of superimposed phase grating guided-wave structures and intergrating coupling effects," Opt. Eng., vol. 32, no. 9, pp. 2054-2063, 1993.
G.L. Tan, J.M. Xu, and M.S. Shur, (Invited) "A GaAs/AlGaAs double-heterojunction lateral PIN ridge waveguide laser," Opt. Eng., vol. 32, no. 9, pp. 2042-2045, 1993.
V. Minier, A. Kévorkian, and J.M. Xu, "Superimposed phase gratings in planar optical waveguides for wavelength demultiplexing applications," IEEE Photon. Tech. Lett., vol. 5, no. 3, pp. 330-333,
E. Abbott, M. Lee, R. Mand, M. Sweeny, and J.M. Xu, "A quantum gate current model," IEEE Trans. Electron Dev., vol. 40, no. 5, pp. 1022-1024, 1993.
J. Lu, R. Surridge, G. Paulski, H. van Driel, and J.M. Xu, "Studies of high speed metal-semiconductor-metal photodetector with a GaAs/AlGaAs heterostructure," IEEE Trans. Electron Dev., vol. 40, no.
6, pp. 1087-1092, 1993.
C. Wu, C. Rolland, F. Shepherd, C. Laroque, N. Puetz, K.D. Chik, and J.M. Xu, "InGaAsP/InP vertical directional coupler filter with optimally designed wavelength tunability," IEEE Photon. Tech. Lett.,
vol. 5, no. 4, pp. 457-459, 1993.
A.I. Akhtar, C.-Z. Guo, and J.M. Xu, "Effect of well-coupling on the optical gain of multi-quantum-well lasers," J. Appl. Phys., vol. 73, no. 9, pp. 4579-4585, 1993.
D.J. Day, R.Q. Yang, J. Lu, and J.M. Xu, "Experimental demonstration of resonant interband tunnel diode with room temperature peak-to-valley ratio over 100," J. Appl. Phys., vol. 73, no. 3, pp.
1542-1544, 1993.
L.V. Iogansen, V. Malov, and J.M. Xu, "Time dependent theory of double quantum well resonant interband tunnel transistor," Semicond. Sci. Technol., vol. 8, no. 4, pp. 568-574, 1993.
J.M. Xu, V.V. Malov, and L.V. Iogansen, "Comparison of resonant tunneling in a double-quantum-well three-barrier system and a single-quantum-well double-barrier system," Phys. Rev. B, vol. 47, no.
12, pp. 7253-7259, 1993.
C.M. Tan, J.M. Xu, and S. Zukotynski, "Electronic properties of n-i-n-i doping superlattices," J. Appl. Phys., vol. 73, no. 6, pp. 2921-2933, 1993.
R.Q. Yang and J.M. Xu, "Analysis of transmission in polytype interband tunneling heterostructures," J. Appl. Phys., vol. 72, no. 10, pp. 4714-4726, 1992.
G.L. Tan, N. Bewtra, K. Lee, and J.M. Xu, "A two-dimensional non-isothermal finite-element simulation of laser diodes," IEEE J. Quantum Electron., vol. 29, no. 3, pp. 822-835, 1993.
G.L. Tan, K. Lee, and J.M. Xu, "Finite-element light-emitter simulator (FELES): a new 2D software design tool for optoelectronic devices," Jpn. J. Appl. Phys., vol. 32, no. 1B, part 1, pp. 583-589,
R.Q. Yang and J.M. Xu, "'Leaky' quantum wells: a basic theory & applications," Can. J. Phys., vol. 70, pp. 1153-1158, 1993.
D.J. Day, R.Q. Yang, J. Lu, and J.M. Xu, "Investigation of the influence of the barrier thickness in double-well resonant tunnel diodes," Can. J. Phys., vol. 70, pp. 1013-1016, 1993.
C.Y. Wu, C.Z. Guo, and J.M. Xu, "Microwave-assisted high speed modulation of semiconductor lasers," Can. J. of Phys., vol. 70, pp. 1057-1063, 1993.
R.Q. Yang and J.M. Xu, "A calculation of the center wavelength in waveguide directional couplers," Opt. Commun., vol. 94, pp. 71-75, 1992.
V. Minier, A. Kévorkian, and J.M. Xu, "Diffraction characteristics of superimposed holographic gratings in planar optical waveguides," IEEE Photon. Tech. Lett., vol. 4, no. 10, pp. 1115-1118, 1992.
Q.M. Zhang, G.L. Tan, W.T. Moore, and J.M. Xu, "Effects of displaced P-N junction on heterojunction bipolar transistors," IEEE Trans. Electron Dev., vol. 39, no. 11, pp. 2430-2437, 1992.
Q.M. Zhang, K. Lee, G.L. Tan, and J.M. Xu, "Analysis of the emitter-down configuration of double heterojunction bipolar transistors," IEEE Trans. Electron. Dev., vol. 39, no. 10, pp. 2220-2228, 1992.
R.Q. Yang and J.M. Xu, "Bound and quasi-bound states in 'leaky' quantum wells," Phys. Rev. B., vol. 46, no. 11, pp. 6969-6974, 1992.
Z.-M. Li, D. Landheer, M. Veilleux, D.R. Conn, R. Surridge, J.M. Xu, and R.I. McDonald, "Analysis of a resonant-cavity enhanced GaAs/AlGaAs MSM photodetector," IEEE Photon. Tech. Lett., vol. 4, no.
5, pp. 473-476, 1992.
J.M. Xu, A.G. MacDonald, L.V. Iogansen, D.J. Day, and M. Sweeny, "Study of peak and valley currents in double quantum well resonant interband tunnel diodes," J. Semicond. Sci. Technol., vol. 7, pp.
1097-1102, 1992.
J.M. Xu, R. Mand, A. SpringThorpe, and K.D. Chik, "Demonstration of novel quantum well gate-controlled photodetector switch," Electron. Lett., vol. 28, no. 5, pp. 501-503, 1992.
A.G. MacDonald, L.V. Iogansen, M. Sweeny, D.J. Day, and J.M. Xu, "Well width dependence of tunneling current in double-quantum-well resonant interband tunnel-diodes," IEEE Electron Dev. Lett., vol.
13, no. 3, pp. 155-157, 1992.
R.Q. Yang and J.M. Xu, "Self-focusing and coupling of electron waves in effective mass waveguides," Solid State Comm., vol. 81, no. 1, pp. 31-34, 1992.
R.Q. Yang, and J.M. Xu, "Energy dependent electron wave coupling between two asymmetric quantum well waveguides," Appl. Phys. Lett., vol. 59, no. 3, pp. 1-3, 1991.
R.Q. Yang and J.M. Xu, "Population inversion through resonant interband tunneling," Appl. Phys. Lett., vol. 59, no. 2, pp. 181-182, 1991.
G.L. Tan, Q.M. Zhang, and J.M. Xu, "Computation of field and charge transport in compound semiconductor devices - some new features and methods," IEEE Trans. Magn., vol. 27, no. 5, pp. 4158-4161,
S.G. Zhou, M. Sweeny, J.M. Xu, and O. Berolo, "Chaotic behavior of quantum resonant tunneling diodes," Physica D, vol. 52, pp. 544-550, 1991.
R.Q. Yang, M. Sweeny, D. Day, and J.M. Xu, "Interband tunneling in heterostructure tunnel diodes," IEEE Trans. Electron Dev., vol. 38, no. 3, pp. 442-446, 1991.
R.Q. Yang and J.M. Xu, "Analysis of guided electron waves in coupled quantum wells," Phys. Rev. B, vol. 43, no. 2, pp. 1699-1706, 1991.
J.M. Xu, T.J. Moore, and M. Sweeny, "Characteristics of displaced PN- and hetero-junctions," Solid State Electron., vol. 34, no. 4, pp. 423-425, 1991.
W.A. Hagley, D.J. Day, D.S. Malhi, and J.M. Xu, "High speed noise immune AlGaAs/GaAs HBT logic for static memory application," IEEE Trans. Electron Dev., vol. 38, no. 4, pp. 932-934, 1991.
C. Wu, C. Rolland, N. Puetz, R. Bruce, K.D. Chik, and J.M. Xu, "A vertically coupled InGaAsP/InP directional coupler filter of ultranarrow bandwidth," IEEE Photon. Tech. Lett., vol. 3, no. 6, pp.
519-521, 1991.
Z.Y. Zhao, Q.M. Zhang, G.L. Tan, and J.M. Xu, "A new preconditioner for CGS iteration in solving large sparse nonsymmetric linear equations in semiconductor device simulation," IEEE Trans. CAD, vol.
10, no. 11, pp. 1432-1440, 1991.
C.M. Tan and J.M. Xu, "Assessment of the performance potential of quantum resonant tunneling structure under current density constraint," Intl. J. Electron., vol. 70, no. 4, pp. 703-712, 1991.
D.J. Day, Y. Chung, C. Webb, J.N. Eckstein, J.M. Xu, and M. Sweeny, "Double quantum well resonant tunnel diodes," Appl. Phys. Lett., vol. 57, no. 12, pp. 1260-1261, 1990.
Q.M. Zhang, G.L. Tan, J.M. Xu, and D.J. Day, "Current gain and transit time effects in HBT's with graded emitter and base regions," IEEE Electron Dev. Lett., vol. 11, no. 11, pp. 508-510, 1990.
C.C. Sun, J.M. Xu, A. Hagley, R. Surridge, and A. SpringThorpe, "Electron mobility measurement in short-channel FET's using the cutoff frequency method," IEEE Electron Dev. Lett., vol. 11, no. 9, pp.
382-384, 1990.
C.C. Sun, J.M. Xu, A. Hagley, R. Surridge, and A. SpringThorpe, "Measurement of source and drain series resistances of HIGFETs using a bias-scan method," Solid State Electron., vol. 33, no. 10, pp.
1279-1282, 1990.
C.C. Sun, J.M. Xu, A. Hagley, R. Surridge, and A. SpringThorpe, "Observation of electron velocity overshoot in AlxGa1-xAs/GaAs heterostructure insulated gate field-effect transistors," Appl. Phys.
Lett., vol. 57, no. 6, pp. 566-568, 1990.
D.J. Day, Y. Chung, C. Webb, J.N. Eckstein, J.M. Xu, and M. Sweeny, "Heterostructure p-n junction tunnel diodes," Appl. Phys. Lett., vol. 57, no. 11, pp. 1140-1142, 1990.
C.C. Sun and J.M. Xu, "An experimental investigation of the transverse electric field in CCD," J. Opt. Quantum Electron., vol. 22, pp. 55-63, 1990.
C.M. Tan, J.M. Xu, and S. Zukotynski, "Study of resonant tunneling structures: a hybrid incremental airy function-plane wave approach," J. Appl. Phys., vol. 67, no. 6, pp. 3011-3017, 1990.
P.M. Smith, D.R. Conn, and J.M. Xu, "The limits of resonant tunneling diode- subharmonic mixer performance," J. Appl. Phys., vol. 66, no. 3, pp. 1454-1458, 1990.
M. Sweeny and J.M. Xu, "Hole energy levels in zero-dimensional quantum balls," Solid State Comm., vol. 72, no. 3, pp. 301-304, 1989.
Y. Okada, J.M. Xu, H.C. Liu, D. Landheer, M. Buchanan, and D.C. Houghton, "Noise characteristics of a Si/SiGe resonant tunneling diode," Solid State Electron., vol. 32, no. 9, pp. 797-800, 1989.
M. Sweeny and J.M. Xu, "On photon-assisted tunneling in quantum well structures," IEEE J. Quantum Electron., vol. 25, no. 5, pp. 885-888, 1989.
M. Sweeny and J.M. Xu, "Resonant interband tunnel diodes," Appl. Phys. Lett., vol. 54, no. 6, pp. 546-548, 1989.
C.C. Sun and J.M. Xu, "The stepped-gate-oxide structure - a new approach for speed enhancement in the gate-controlled photodetector," Appl. Phys. Lett., vol. 54, no. 14, pp. 1335-1337, 1989.
C.C. Sun and J.M. Xu, "Observation of capacitive modulation of bipolar current in crystalline silicon gated P-I-N structures," Appl. Phys. Lett., vol. 54, no. 19, pp. 1875-1877, 1989.
M. Sweeny, J.M. Xu, and M. Shur, "Hole subbands in one-dimensional quantum well wires," Superlatt. Microstruct., vol. 4, pp. 623-626, 1988.
J. Zou, J.M. Xu, and M. Sweeny, "Effects of asymmetric barriers on resonant tunneling current," J. Semicond. Sci. Tech., vol. 3, no. 8, pp. 819-822, 1988.
J.M. Xu and M. Shur, "Temperature dependence of electron mobility and peak velocity in compensated GaAs," Appl. Phys. Lett., vol. 52, no. 11, pp. 922-924, 1988.
J.M. Xu and M. Shur, "Double base hot electron transistors," Superlatt. Microstruct., vol. 4, no. 3, pp. 329-332, 1988.
J.M. Xu and M. Shur, "Double Ridley-Watkins-Hilsum-Gunn effect in compensated GaAs," Solid State Electron., vol. 31, no. 3-4, pp. 607-610, 1988.
J.M. Xu and M. Shur, "Velocity-field characteristics with two maxima in compensated GaAs," Phys. Rev. B, vol. 36, no. 2, pp. 1352-1354, 1987.
J.M. Xu and M. Shur, "Ballistic transport in hot electron transistors," J. Appl. Phys., vol. 62, no. 9, pp. 3816-3820, 1987.
J.M. Xu, M. Shur, and M. Hack, "Amplification of bipolar current flow by charge induced from an insulated gate electrode," J. Appl. Phys., vol. 62, no. 3, pp. 1108-1111, 1987.
J.M. Xu and M. Shur, "Velocity-field dependence in GaAs," IEEE Trans. Electron Dev., vol. 34, no. 8, pp. 1831-1832, 1987.
J.M. Xu and M. Shur, "A tunneling emitter bipolar transistor," IEEE Electron Dev. Lett., vol. EDL-7, no. 7, pp. 416-418, 1986.
J.M. Xu, B.A. Bernhart, M. Shur, C.H. Chen, and A. Peczalski, "Electron mobility and velocity in compensated GaAs," Appl. Phys. Lett., vol. 49, no. 6, pp. 342-344, 1986.
A. Van der Ziel, J.B. Anderson, A.M. Birbas, W.C. Chen, P. Fang, and J.M. Xu, "Shot noise in solid state diodes," Solid State Electron., vol. 29, no. 10, pp. 1069-1071, 1986.
D.K. Arch, J.K. Abrokwah, P.J. Vold, A.M. Fraasch, R.R. Daniels, M. Shur, and J.M. Xu, "Modulation doped field-effect transistors utilizing superlattice AlGaAs/n+-GaAs charge control layers," IEEE
Trans. Electron Dev., vol. 33, no. 11, 1986.
Y.S. Ling, M. Zhang, J.M. Xu, M. Vallierses, R. Gilmore, and D.H. Feng, "The multi-J nuclear dynamical supersymmetry U(6/20)," Phys. Lett., vol. 148b, no. 1-3, pp. 13-19, 1984.
Issues in Teaching Statistical Thinking with Spreadsheets
John C. Nash and Tony K. Quon
University of Ottawa
Journal of Statistics Education v.4, n.1 (1996)
Copyright (c) 1996 by John C. Nash and Tony K. Quon, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent
from the author and advance notification of the editor.
Key Words: Spreadsheet software; Computing paradigm; Statistical functions; Audit files.
Spreadsheet software is widely used and now includes statistical functionality. This paper discusses the issues raised in teaching statistics with spreadsheet software. The principal concerns relate
to aspects of the spreadsheet view of computation that make it difficult to keep track of what calculations have actually been carried out or to control the spreadsheet by means of a script. We also
discuss a number of other advantages and deficiencies of spreadsheets for teaching statistics.
1. Introduction
1 Recent spreadsheet software has begun to include quite extensive statistical functionality, thereby inviting its use for statistical analysis. Because spreadsheet software is particularly common in
business use, there has been a strong interest from students and some faculty members to use spreadsheets in teaching statistics in business schools and elsewhere. Countering this pull towards
spreadsheets are a number of conceptual, pedagogical, and technical issues that are the basis of this paper.
2 Our treatment of the support provided by spreadsheets for statistics and statistics education considers capabilities offered by the products, features or capabilities lacking, incorrect or
misleading features, and treatment of missing values. It is worth emphasizing that it is dangerous to have tools that conduct incorrect analyses of data. For the professor, there is the added danger
of having lesson preparation wasted, student confidence eroded, and the general confusion and loss of class time involved with attempting a correction.
2. Advantages of Spreadsheets for Teaching Statistics
3 Before we present the body of our argument, it is important to recognize that spreadsheet software offers a number of important conveniences to both teachers and students.
• The widespread use and knowledge of spreadsheet software saves the costs of acquiring, teaching, and learning the mechanics of a new software tool.
• Such software is often taught and supported by staff other than those who teach statistics.
• Teachers can prepare templates in advance for students to follow and carry out particular computations.
• With some exceptions, the spreadsheet calculation paradigm offers immediate updating of results when data are changed.
• Spreadsheets are a fairly general computational tool, so they can often be "programmed" to perform non-standard calculations.
• As mentioned, spreadsheet software now offers tools for many common statistical calculations.
• Spreadsheets are a handy tool for data entry, editing, and manipulation prior to input to a standard statistics package for analysis.
3. Are Spreadsheets Conducive to Teaching and Learning?
3.1. Convenience for Students
4 The claim for the convenience of the spreadsheet interface and its suitability for a large range of "real" applications must be tempered with the realization that
• We should not expect to find "everything" in a single tool (see Nash 1994 for a related discussion), and
• It may be difficult to discover the right syntax of the @ function or other construct to use. (@ functions are expressions involving spreadsheet cells that are updated when cell values change.)
5 These two points have parallels with regard to most types of software. Our point here is simply to underline the fact that spreadsheet processors are not free from these difficulties simply because
of their widespread usage.
6 The size of spreadsheet packages with statistical features is such that they require large amounts of memory (RAM) and disk space. (Excel 5.0 takes up over 13MB of disk space, considerably more
than Excel 4.0 or Quattro Pro 5.0.) These constraints, together with their high prices (with the exception of Quattro Pro which is relatively inexpensive), make these packages unattractive as
compulsory materials, particularly in environments where the available computers may be of varying capability. By contrast, several statistical packages exist in inexpensive student editions that do
not require hardware upgrades.
7 In our experience, the argument of convenience and expediency in favor of teaching statistics with spreadsheets is most strongly made in connection with MBA courses. The case is made that since MBA
students are likely to have spreadsheets on their office computers, they will be more inclined to perform statistical analyses later when the need arises. That is, learning to use spreadsheets for a
data analysis course may have side benefits in terms of their use in other courses and upon students' return to the workplace. We believe this is a hypothesis that remains to be tested.
8 An argument is also made that widespread knowledge of spreadsheets makes them more readily accessible to students. However, the differences in functionality and use among spreadsheet packages are
at least as great as between the spreadsheets and specialized statistical packages, especially since Minitab, Stata, Systat, and others now have menu-driven options and spreadsheet data entry and
edit facilities. Unless teaching institutions are prepared to say there is only one "true religion" of spreadsheets, the argument for a particular spreadsheet software as the interface that students
will meet in the real world is moot. In a world where software choices seem limitless, it is strange to argue that students should be introduced to only one, albeit general-purpose, software tool.
9 As one referee has noted, spreadsheets are particularly useful for data entry, editing, and manipulation prior to input to a standard statistics package for analysis. We agree with this observation
and have often used spreadsheets in this way. (Excel 5.0 is particularly good for importing ASCII data, whether qualitative or quantitative, that are not separated by blanks or commas.)
Unfortunately, the transfer of data from general spreadsheet software formats to statistical packages has, in our experience, been a task novice students find troublesome. A similar comment applies
generally to their understanding of the difference between binary and text form storage of data, even within a single software package. The use of spreadsheet data entry by statistical packages goes
some way toward easing this concern.
3.2. Convenience for Teachers
10 The presentation of material in an ordered and/or structured fashion is central to introductory lessons in any subject. That developers of statistical packages are aware of this need is
illustrated by features such as the "slideshow" tools in version 5 of Data Desk. Spreadsheets lack such facilities explicitly, but one could consider using presentation software such as Microsoft
PowerPoint along with the object linking and embedding (OLE) facilities of Microsoft Windows to arrange the appropriate sequencing of displays. We believe that this offers an answer to the needs for lecture or
overview presentations. However, there are important configuration and data security issues if one wants students to easily incorporate their own data or additional calculations. For more general
audiences, we have so far been unable to link spreadsheets to World Wide Web (HTML) pages. By contrast, this is relatively easy to do with some statistical packages, for example, Lisp-Stat.
11 Teachers also want to prepare calculations in advance so they can be rerun easily with controlled and verifiable inputs (e.g., Minitab EXEC and JOURNAL, Stata DO and ADO files, Systat CMD files).
Actions on data in a spreadsheet are recorded within the spreadsheet or as macro commands. Even the latter are commonly stored within the spreadsheet and are usually not accessible for external
storage or editing; Excel is an exception. We claim, however, that macro languages in current spreadsheets do not provide sufficient reward in functionality for the heavy effort involved in their
preparation to make them a serious option for preparing pedagogical material.
12 Third parties could offer to do the work of preparing the macros. There are at least two products available that offer spreadsheet templates and macros for teaching statistics (see Siegel 1990;
Berk and Carey 1995). These have the common failing that they cannot lead the student through the steps of an analysis, revealing the parts one by one; however, the tree structure of the menus does
reflect the logic required to select the appropriate test.
13 When macros are used, it is important to recognize how computations are ordered. The spreadsheet model of computations presumes immediate action on data throughout the spreadsheet. Calculations
are only sequential in the sense of the dependencies of cells one upon the other, unless special calculation ordering is specified (O'Leary 1989, p. 365). Changes in data are manifested "immediately"
throughout the spreadsheet as a result of the cell formulas. From a pedagogical perspective, this immediacy can be a great advantage. We can examine the effect of outliers or misrecorded numbers on
our statistical results, including graphs. Unfortunately, an important exception may be "analysis boxes," which do not get updated and are a potential source of misleading output. Immediate updating
is not, however, the preserve of spreadsheets. Notable among statistical packages are Data Desk and Lisp-Stat.
4. Auditing Calculations
14 When calculations are presented or the professor must review or mark student results, it is important to be able to quickly discern what calculations have been carried out. Professors have just as
much difficulty as students in realizing which spreadsheet cells have been altered and in what ways. The ability to log results (Minitab uses the OUTFILE or JOURNAL command and Stata uses the LOG
USING to keep a record of the commands and outputs) does not exist to our knowledge in any common spreadsheet, making an audit of activity next to impossible. Selections and operations on cells in
a spreadsheet manifest themselves on the screen anywhere in the array of cells; they do not lend themselves naturally to being logged sequentially. Moreover, unless the affected cells are somehow
highlighted and such highlighting maintained during and after actions, it can be quite difficult for passive viewers such as students to observe what has happened and where.
15 A particular example concerns the presentation of a spreadsheet showing results of calculations for exponential smoothing on a forecasting examination. For example, Winter's exponential smoothing
requires a column for data, one for the smoother, one for the smoothed slope, and one for the smoothed seasonal factors. In addition there will be residuals and possibly squares of residuals. The
alignment of these with respect to the time index (yet another column) can be critical to correct answers. The array of numbers in the output has proved very difficult for students to quickly
interpret effectively. We note that some statistical packages, including the Student Edition of Minitab for Windows, offer several such forecasting techniques in a form that is easy to understand,
although for Winter's seasonal exponential smoothing the components of the computations are not presented. That is, spreadsheets present too much detail for effective learning, while statistical
packages may present too little. A temporary compromise one of us (JN) has adopted is to have students learn how such methods work by having them build the appropriate spreadsheet in an assignment
project, whereby they are more directly and actively involved with the operations on the numbers, but then use a statistical package (currently Minitab) for subsequent analyses of data.
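The column layout described above can be sketched directly. The following is a generic additive Holt-Winters recursion, not the exact layout used on the examination; the start-up values, parameter defaults, and function name are my own.

```python
def holt_winters_additive(y, period, alpha=0.5, beta=0.3, gamma=0.2):
    """Produce the 'columns' described above for additive Holt-Winters
    smoothing: data, level (the smoother), smoothed slope, smoothed
    seasonal factor, and one-step-ahead residual."""
    level = sum(y[:period]) / period          # crude start-up level
    slope = 0.0
    seasonal = [y[i] - level for i in range(period)]
    rows = []
    for t in range(period, len(y)):
        s = seasonal[t % period]
        resid = y[t] - (level + slope + s)    # one-step-ahead residual
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + slope)
        slope = beta * (new_level - level) + (1 - beta) * slope
        seasonal[t % period] = gamma * (y[t] - new_level) + (1 - gamma) * s
        level = new_level
        rows.append((y[t], level, slope, seasonal[t % period], resid))
    return rows

y = [10, 20, 30, 40, 12, 22, 32, 42, 14, 24, 34, 44]
rows = holt_winters_additive(y, period=4)
```

Aligning these five columns against the time index is exactly the bookkeeping that students find error-prone in a spreadsheet.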
16 Of particular concern for tracking activity are the statistical analysis tools that use dialog boxes. From the student's point of view, the use of analysis tools (Quattro Pro and Excel) in the
form of dialog boxes is user-friendly. From the point of view of verification of results, we are convinced that dialog boxes are an invitation to disaster. Unlike @ functions which follow the
spreadsheet convention and update themselves whenever previously referenced data cells are altered, the dialog boxes perform an analysis only at the time of the dialog. It is therefore very easy (and
we ourselves have fallen into the trap) to perform an analysis, edit an observation, then fail to rerun the analysis. While this is true of non-spreadsheet statistics software too, one normally
expects to have to rerun analyses outside the spreadsheet world.
17 The inclusion of dialog boxes in spreadsheets is, we feel, a serious design failure since it mixes two very different ways of viewing computations -- the "all at once" spreadsheet view and the
sequential step-by-step procedural approach. From the professor's perspective, checking whether the student has properly carried out statistical computations is only possible by fully duplicating the
output because there is no complete trace of what was input into the procedure. We are not arguing that @ functions are easy to learn or teach, but that they provide a consistent approach to
computations for spreadsheets. A dialog box could be used to develop the appropriate syntax for such functions, thereby preserving such consistency without imposing the learning cost and syntactic
nuisance of conventional @ functions.
18 Beyond the immediate issue of tracing the computations that have been carried out, logging what has gone on during a computational session poses some difficult philosophical questions. As pointed
out by Paul Velleman (in Goldstein 1993), such matters are complicated by the increasing popularity of graphical interfaces, of which spreadsheets can be considered one form.
5. Spreadsheet Support of Statistics
19 Above we have considered some of the general features of spreadsheet software in relation to teaching statistics. We now turn to specifics. Clearly, comments about the features in software reflect
the particular versions of products considered. In the present investigation, we looked at the following spreadsheet packages, all of which run under Microsoft Windows:
Lotus 1-2-3 for Windows 4.0 (June 11, 1993)
Borland Quattro Pro for Windows 5.0 (August 11, 1993)
Microsoft Excel version 4.0 (April 1, 1992)
Microsoft Excel version 5.0 (December 15, 1993)
We have included the date of the major executable file in each package to avoid confusion over software releases in case there are multiple releases within a labelled "version."
5.1. Features Included
20 Overall, current spreadsheet statistical features correspond to topics covered in a more traditional introductory statistics course that does not emphasize exploratory data analysis. As such, they
seem out of step with more modern approaches in that they do not allow quick and easy explorations of the data. For example, there is a constant need to copy or move data to create multiple columns
or contiguous blocks of data and a continuing need to decide where to place the output. This is ironic, given that spreadsheets are frequently the tool of choice for "try it and see" explorations of
business and administrative data.
21 Nevertheless, many statistical tools are available:
• All the basic hypothesis tests are available, including all the common t-tests and chi-square tests.
• One-way and two-way analysis of variance methods are available in Excel and Quattro Pro but not in Lotus; however, there are no estimated means or confidence intervals and no boxplot comparisons.
• Simple and multiple regression analyses are possible, but with varying degrees of accessibility to residual analysis.
• There is quite good support for a variety of probability and related functions that are useful in some aspects of elementary statistics courses.
• Random numbers can be generated to allow for simulation calculations.
5.2. Deficiencies
22 Spreadsheet software is generally considered to provide strong support for graphical analysis of data. In fact, we find that spreadsheets can be very useful in creating presentation graphs after
the analysis has been completed. However, we need to distinguish presentation from analytical graphics. The latter need not be pretty, nor very high resolution, but they must allow the user to
extract useful information. While character plots have fallen from fashion with the rise of high-resolution printers, they can be output as part of the log file so they do not get detached from the
data and commands that created them. Despite fairly persistent haranguing, students continue to label files with such uninformative names as "graph1" or "plotb." We regard training in documenting
computer analysis as an informal part of a statistics course. Unfortunately, the ubiquitous labelling of a variable as X, or less commonly Y, in most textbooks does not encourage students to use
meaningful names.
23 From the point of view of statistics, some desirable graphs are
• Histograms, possibly with unequal class width,
• Stem and leaf diagrams,
• Boxplots, especially multiple boxplots on the same scale,
• Quality-control charts, and
• p-p and q-q plots for distributions, especially the Gaussian distribution.
Of the graphs above, only histograms with non-contiguous equal-width bars can be displayed easily, although one can specify unequal class widths in the frequency table. This can be misleading. To
prepare a histogram in Quattro Pro, one must first create a frequency table summary (with the option of specifying bins using the left endpoints) and then apply the graphing tools to the resulting
summary, a sequence of operations that is, in our opinion, too complicated for a commonly needed tool. Excel 5.0 is an improvement in that "Chart Output" is an option; however, the resulting
histogram bars are marked by the values of the right endpoints in the middle of the intervals where one would like to see midpoints. Again, this may lead to confusion and misinterpretation. (See Berk
and Carey 1995 for macros that correctly draw histograms and normal probability plots. Since the histograms there are dynamically linked to the frequency table, one can adjust the bins to see the
changes in the histogram. The only limitation is that no fewer than ten bins are allowed initially to facilitate the drawing of a normal frequency curve, which, incidentally, is not redrawn well when
the bins are changed.)
24 A graphic that statisticians and their clients often desire is a line or curve with error bars (the mean plus and minus the standard deviation). This does not appear to be offered in any current spreadsheet package.
25 Computational deficiencies of spreadsheets also exist; for example, there is only a limited selection of time-series smoothing methods. However, the professor, student, or other user can redress
such deficiencies by means of templates (example spreadsheets) or macros. The spreadsheet developer could add computational features without large-scale interaction with existing design features of
the software. Thus, to add a computational function (an @ function) adds program code that is more or less independent of the rest of the package. For example, a useful additional function
well-suited to the spreadsheet environment would be one that calculated weighted means and variances. In contrast, adding graphics along with computations, such as we mention in the next paragraph,
requires the menu structure, and hence the user interface, to be altered.
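Such a function is easy to specify. Here is a sketch using frequency weights; the divisor convention is one of several in use, and the function name is hypothetical:

```python
def weighted_mean_var(x, w):
    """Weighted mean and variance with frequency weights w -- the kind
    of single-cell @ function suggested above."""
    W = sum(w)
    m = sum(wi * xi for xi, wi in zip(x, w)) / W
    v = sum(wi * (xi - m) ** 2 for xi, wi in zip(x, w)) / (W - 1)
    return m, v

m, v = weighted_mean_var([1, 2, 3], [1, 1, 2])  # mean is 2.25
```

Because it is a pure cell function, it would recalculate automatically when the data change, unlike dialog-box output.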
26 Some statistical computations have come to be almost automatically followed by graphical output. The most obvious case is regression analysis where residual plots of various types are a natural
part of a proper treatment of data. As far as residual plots go, only Excel gives residual plots against each independent variable as an option in the regression module. (Excel also provides for
scatterplots of the dependent variable and the predicted values against each independent variable and calculates the residuals and predicted values.) Quattro Pro only calculates the residuals and
predicted values as an option, and Lotus calculates only the summary statistics and does not have automated residual calculations.
5.3. Errors or Misleading Output
27 Sometimes software is described as doing one thing but is actually doing something else, or it may use a different convention than that presumed by the user. This can be especially confusing for
novice users of the software or for users who are unfamiliar with the application area. We have already mentioned problems with histogram displays.
28 An example of such a problem occurs in the regression dialog box in Quattro Pro and in both versions of Excel. One of the Excel options is a normal probability plot; however, whereas one might
expect a diagnostic tool to check the normality of the residuals, instead one gets a cumulative distribution plot of the dependent variable. Similarly, Quattro Pro produces a cumulative frequency
table if one specifies "Probability Output."
29 Another example is Quattro Pro's two-sample t-test assuming unequal variances which ends up calculating a pooled variance (as if one assumed equal variances); the same is true of Excel 4.0. Both
software packages also have an entry for a "Pearson correlation" for the two-sample t-test assuming unequal variances; Quattro Pro will calculate it if the sample sizes are equal, whereas Excel 4.0
always returns a value of "#N/A." Excel 5.0 has gotten rid of both of these superfluous entries.
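The distinction the packages blur is between the pooled and the Welch standard error. A minimal sketch, with sample data invented for illustration:

```python
from statistics import mean, variance

def two_sample_t(x, y, pooled):
    """t statistic for two independent samples, using either the pooled
    (equal-variance) denominator or the Welch (unequal-variance) one."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)
    if pooled:
        sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
        se = (sp2 * (1 / nx + 1 / ny)) ** 0.5
    else:
        se = (vx / nx + vy / ny) ** 0.5
    return (mean(x) - mean(y)) / se

x = [5.1, 4.9, 5.3, 5.0, 5.2]
y = [4.0, 6.2, 3.5, 6.8, 4.6, 5.9]
t_pooled = two_sample_t(x, y, pooled=True)
t_welch = two_sample_t(x, y, pooled=False)
```

When the variances differ, the two statistics differ, so labelling pooled output "unequal variances" is not a cosmetic slip.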
30 The expression -2^4 is evaluated by Excel as 16, implying that the sign has precedence over exponentiation in contrast to the traditional order of calculation. While there are other occasions when
spreadsheet software gives incorrect computational results in the form of the wrong output numbers for specified inputs (see, for example, Nash 1989), we believe that errors are more likely to arise
when unreasonable inputs are provided.
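Python follows the traditional convention here, which makes the contrast easy to demonstrate:

```python
# In Python, exponentiation binds tighter than unary minus, so the
# traditional order of calculation applies; Excel's =-2^4 behaves
# like the second line instead.
assert -2 ** 4 == -16     # parsed as -(2 ** 4)
assert (-2) ** 4 == 16    # what Excel computes for =-2^4
```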
31 While the utility of the F-test for equality of variances may be questioned, it remains a part of many introductory textbooks and courses. Using the speedbar selection to open up a dialog box,
Quattro Pro performs adequately if the first array has a larger sample variance than the second. However, if the arrays are exchanged so the first has the smaller sample variance, the F-statistic is
still greater than one, but the degrees of freedom are reversed, and the corresponding one-sided critical value given is for double the alpha level of .05. Excel 4.0 is similar, except that the
degrees of freedom are correct, but the corresponding critical value given is also for double the alpha-value. Excel 5.0 always calculates the F-statistic using the first sample variance over the
second; however, when the first array has the smaller variance, the critical value is based on the reversed degrees of freedom. The use of the @FTEST function in Quattro Pro and Excel, however,
results in correct (two-sided) p-values. Lotus only has the @FTEST option and this returns the two-sided p-value. The advantage of the @ function is that it is recalculated when the data change,
whereas the more detailed output from Quattro Pro or Excel obtained through the dialog box results in numbers that are not subject to recalculation.
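A consistent convention — larger sample variance in the numerator, degrees of freedom reported in the matching order — avoids the reversal described above. A sketch with invented data:

```python
from statistics import variance

def f_stat(x, y):
    """F statistic with the larger sample variance in the numerator,
    returning the degrees of freedom in the matching order."""
    vx, vy = variance(x), variance(y)
    if vx >= vy:
        return vx / vy, len(x) - 1, len(y) - 1
    return vy / vx, len(y) - 1, len(x) - 1

a = [3.1, 2.9, 3.4, 3.0]
b = [1.0, 4.0, 2.2, 6.1, 0.5]
# Exchanging the arrays changes neither the statistic nor the
# pairing of degrees of freedom:
same = f_stat(a, b) == f_stat(b, a)
```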
5.4. Treatment of Missing Values
32 The issue of missing values has been a perennial concern in conventional statistical software. A simple search of the 1993 Current Index of Statistics using the search BATch file provided therein
by Gerry Dallal for "missing value" yielded 104 references. A typical example is Donner (1982). Missing values are particularly dangerous in the context of spreadsheets, where a missing cell and one
containing blank text may be presented identically on the screen or printouts. Worse, the software may take blank OR empty cells as zeros, when in fact they represent data that are truly missing.
Students often have very little understanding of the critical difference between zero and missing; textbooks could be more helpful in this regard.
33 Quattro Pro illustrates the difficulty in the analysis of variance calculation (and correspondingly in the two-sample t-tests using either the dialog box or the @TTEST function). For example,
consider the case of a one-way analysis of variance with three groups where one sample has only five elements, and where the other two have six. The means and standard deviations are computed
correctly for the three samples, but the degrees of freedom for the analysis of variance summary seem to treat the missing value as a "zero." The Quattro Pro "Help" facility does state that the
one-way analysis of variance should only be done with equal sample sizes, but this is an unnecessarily restrictive condition. Furthermore, the fact that the software does not trap the empty cell is
an open invitation to serious errors. Excel 4.0 does not allow any empty cell to occur in the input block of equal size columns when performing the analysis of variance, but allows and treats
properly empty cells in the two-sample t-tests (using either the dialog boxes or the @TTEST function). Excel 5.0 has been adjusted to allow for empty cells in a block of equal size columns. Lotus
does not handle the analysis of variance per se, but treats empty cells properly in the two-sample t-tests. None of the three packages will allow a regression analysis to proceed if there are any
missing values. This is obviously an inconvenience that forces the user to filter the data before proceeding.
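The zero-versus-missing trap is easy to reproduce; in this sketch None stands for an empty cell:

```python
def mean_skipping_missing(cells):
    """Average a column in which None marks a truly missing value."""
    present = [c for c in cells if c is not None]
    return sum(present) / len(present)

col = [4.0, None, 6.0]
good = mean_skipping_missing(col)    # 5.0
# The 'blank = zero' trap the text warns about:
bad = sum(c if c is not None else 0 for c in col) / len(col)
```

The two answers differ, which is precisely the critical distinction between zero and missing that students often fail to appreciate.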
6. Assessment
34 Spreadsheet vendors must be encouraged to do better. Closer attention to statistical issues would result in tools better suited to data exploration and analysis, and cleaner software design would
avoid some obvious sources of errors. For example, because spreadsheet users are accustomed to instant recalculation, the use of dialog boxes resulting in output that is not dynamically linked to the
data can lead to problems.
35 While we fully expect a continuing debate over the use of spreadsheet software for statistics (and other) teaching along with the evolution of the software itself, it is our opinion that
professors are currently better served by traditional statistics packages for computational support of their courses.
The referees of this paper made a number of sensible and constructive comments that have been incorporated here.
References
Berk, K. N., and Carey, P. (1995), Data Analysis with Microsoft Excel 5.0 for Windows, Cambridge: Course Technology Inc.
Donner, A. (1982), "The Relative Effectiveness of Procedures Commonly Used in Multiple Regression Analysis for Dealing With Missing Values," The American Statistician, 36, 378-381.
Goldstein, R. (1993), "Statistical Computing: Editor's Notes," The American Statistician, 47, 46-47.
Nash, J. C. (1989), "Letter to the Editor," SIGNUM Newsletter, 24(4), October 1989, p. 16.
----- (1994), "Obstacles to Having Software Packages Cooperate on Problem Solving," in Computer Science and Statistics - Volume 25, Proceedings of the 25th Symposium on the Interface, eds. M. E.
Tarter and M. D. Lock, Interface Foundation of North America, pp. 80-85.
O'Leary, Timothy J. (1989), The Student Edition of Lotus 1-2-3, Reading, MA: Addison-Wesley.
Siegel, A. F. (1990), Practical Business Statistics with Statpad, Boston: Irwin.
John C. Nash jcnash@aix1.uottawa.ca Tony K. Quon quon@profs.admin.uottawa.ca Faculty of Administration
136 Jean-Jacques Lussier Private
University of Ottawa
Ottawa, Ontario K1N 6N5
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 812.05032
Autor: Bertram, E.; Erdös, Paul; Horák, P.; Sirán, J.; Tuza, Zs.
Title: Local and global average degree in graphs and multigraphs. (In English)
Source: J. Graph Theory 18, No.7, 647-661 (1994).
Review: The global average degree of a graph G, denoted by t(G), is defined as the arithmetic mean of the degrees of all vertices of G. For a vertex v in G, the local average degree of v, denoted by
t(v), is defined as the arithmetic mean of the degrees of its neighbors. A vertex v of a graph G is called a groupie if t(v) \geq t(G). It was conjectured by the authors that every simple graph with
at least two vertices contains at least two groupie vertices. In this paper, the authors show that this conjecture holds for several special families of graphs, such as biregular graphs and P_4-free
graphs. They also study the function f(n) = max_G max_{v in G} t(v)/t(G), where the outer maximum is taken over all graphs G on n vertices, and prove that f(n) = (1/4)\sqrt{2n} + O(1). The corresponding
result for multigraphs is discussed. The authors also characterize the trees in which the local average degree t(v) is constant.
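The definitions translate directly into code. This sketch uses an adjacency-list dictionary; skipping isolated vertices is a convention of my own, and the function name is hypothetical.

```python
from statistics import mean

def groupies(adj):
    """Vertices v whose local average degree t(v) -- the mean degree of
    v's neighbors -- is at least the global average degree t(G)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    t_G = mean(deg.values())
    return [v for v, ns in adj.items()
            if ns and mean(deg[u] for u in ns) >= t_G]

# Star K_{1,3}: centre 0 joined to leaves 1, 2, 3.  Here t(G) = 1.5;
# each leaf has t(v) = 3 and the centre has t(v) = 1, so the three
# leaves are groupies, consistent with the conjecture.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
found = groupies(star)
```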
Reviewer: Z.Chen (Indianapolis)
Classif.: * 05C35 Extremal problems (graph theory)
Keywords: global average degree; groupie; groupie vertices; trees; local average degree
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
Peg Solitaire with Diagonal Jumps
Last modified March 1st, 2014
In "ordinary" peg solitaire diagonal jumps are not allowed, and the game seems to lose much of its elegance if they are allowed. It should be noted that triangular solitaire is equivalent to
solitaire on a square grid with the addition of diagonal jumps along one diagonal only. Beasley [B1] calls ordinary peg solitaire on a square grid "4-move solitaire", while triangular solitaire is
"6-move solitaire". Solitaire on a square grid where moves are allowed along both diagonals is called "8-move solitaire". We will use this notation (unfortunately, our distinction between moves and
jumps confuses the issue here, and perhaps we should use 4-jump, 6-jump, and 8-jump solitaire).
It is important to realize that the addition of diagonal jumps changes the position classes, and therefore the possible single vacancy to single survivor problems that can be solved. In 8-move
solitaire, there is only one position class! All boards are null-class and in general it is possible to play from an initial vacancy to a single peg anywhere on the board.
The 33-Hole English Board
We note first a few interesting facts. Although this board is still null-class in 8-move or 6-move solitaire, it is no longer gapless. In this respect the diamond boards below seem better suited for
8-move solitaire.
In 8-move solitaire it is possible to start with a single vacancy anywhere on the board and play to finish anywhere else on the board.
8-Move Solitaire
In 8-move solitaire on the 33-hole English board, Beasley [B1] gives the following 16-move solution to the central game (p. 241, note that there is a typo in his solution as stated):
Beasley states with this solution "I suspect this can be improved." This solution, with 6 diagonal and 10 "normal" moves, gives an idea of the bizarre and rather non-intuitive moves that can prove useful.
In 2007, I took on the challenge of finding the shortest possible solution with diagonal jumps allowed. With a computer, it seemed it should be easy to beat this solution. However this problem is not
as easy as I initially thought. When searching by levels, the number of boards explodes with alarming speed. Also, I came to realize that the above solution is quite good.
However after running my program over night, I was surprised when my program found a 15-move solution that looked almost the same as the one above. In fact, note that the first eight moves and the
final move are the same in the solution below:
My program also completed an exhaustive search for a 14-move solution, and found none. Hence the 15-move solution is the shortest possible, and in fact up to symmetry and move order, there are only
four 15-move solutions. One of these four is easily obtained by substituting for moves 12 & 13 above the moves c7-c5-c3 and e7-c5. The other two are not simple modifications of the above solution,
and do not finish with the move shown above.
My program also found a 13-move solution to the c3-complement problem (or equivalently the (1,1) complement problem):
And it was also found that this is the shortest possible. It is interesting that all of these solutions make use of the diagonal loop move: d6-b4-d2-f4-d6.
There does not exist any solution on the 33-hole board in under 13 moves. This is a conclusion reached from exhaustive computer search.
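The combinatorial explosion comes from the move generator itself. Here is a sketch of legal jump generation for 8-move solitaire on the English board; the coordinate scheme and representation are my own, not those of the program actually used.

```python
# English board: 7x7 grid minus the four 2x2 corner blocks.
HOLES = {(x, y) for x in range(7) for y in range(7)
         if not ((x < 2 or x > 4) and (y < 2 or y > 4))}
DIRS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)]

def jumps(pegs):
    """All legal single jumps in 8-move solitaire: a peg leaps over an
    adjacent peg, in any of the 8 directions, into an empty hole."""
    for (x, y) in pegs:
        for dx, dy in DIRS:
            over = (x + dx, y + dy)
            dest = (x + 2 * dx, y + 2 * dy)
            if over in pegs and dest in HOLES and dest not in pegs:
                yield (x, y), dest

start = HOLES - {(3, 3)}        # central vacancy
openings = list(jumps(start))   # only 4 openings: the diagonal sources
                                # (1,1), (1,5), (5,1), (5,5) fall in
                                # the cut corners
```

Each jump roughly preserves the branching factor, so the number of distinct positions per level grows very quickly, which is why the level-by-level search is expensive.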
6-Move Solitaire
In 6-move solitaire, there are 4 position classes. Let us suppose the diagonal moves that are added are [up and right] or [down and left]. On a null-class board with an initial vacancy at
(x[0],y[0]), we can finish with a single peg at board positions (x[1],y[1]) where the quantity (x[1]-x[0])+(y[1]-y[0]) is a multiple of 3. Besides the usual locations, the central vacancy problem
can now finish at (1,2), (2,1), (-1,-2), (-2,-1), (-1,1) or (1,-1).
What is the shortest solution to the central game when moves are allowed in one diagonal direction only? Based on the results above, the answer is somewhere between 15 and 18 moves. In this case the
square symmetry of the board is lost; the problem is now symmetric only about the two diagonals. Note that we can also view this problem as a problem in triangular solitaire. If we put the 33-hole
board on a triangular (staggered) grid it has a strange appearance:
Nonetheless one can see it is geometrically equivalent, where the original up/down and left/right moves are diagonal moves on the above board, and the diagonal move that has been added is a
horizontal move.
My program found that the central game under 6-move solitaire can be solved in a minimum of 17 moves, only one less than in normal (4-move) solitaire. There are a large number of 17-move solutions:
413 different sequences of moves (not counting solutions with the same moves in a different order or equivalent by symmetry). The longest possible sweep in a solution is an 8-sweep, and the diagram
below shows one such solution on the normal board:
Alternately, the same solution may be presented on a triangular grid, and it looks like:
The 37-Hole French Board
The central game on the French Board cannot be solved in ordinary (4-move) solitaire, nor can it be solved in 6-move solitaire. However the central game is solvable on this board in 8-move solitaire
(both diagonals).
Although this board is larger than the 33-hole board, it is possible to solve it in fewer moves. The figure below shows a solution in only 13 moves. Note that 17 pegs are captured in the first 11
moves, and then 18 pegs are captured in the final 2 moves!
An exhaustive search for a 12-move solution was completed, so 13 moves is the shortest possible. There are 1,778 different 13-move solutions, not counting solutions equivalent by symmetry or move order.
A resource count which is useful computationally is the one shown below. Unlike most 4-move resource counts, the one below is still valid even with diagonal moves. For the central game in the English
or French boards, this resource count starts at 8 and finishes at 0. On the English game, the first and last jumps lose 4, leaving only 4 to be lost by the rest of the solution. On the French board,
the solution above stays at 8 until the very last move, which loses all 8!
-1 0 -1
-1 1 0 1 0 1 -1
-1 1 0 1 0 1 -1
-1 0 -1
It is possible to solve some problems on the 37-hole French Board in only 12 moves, but none in fewer. The c1 and c3 complement problems can be solved in 12 moves:
Let C9 be the board position with only the center 9 holes filled. It is possible to play from this position to any finishing hole on the board. It is also possible to play from the complement of C9
to C9 (this is not possible on the standard 33-hole board), thus proving elegantly that one can begin from any vacancy and end at any other board location. The shortest solution from Complement(C9)
to C9 has 13 moves, and an example of a minimal solution is given below.
The C9 complement problem is greatly restricted by the resource count above, because we cannot make any jump that loses anything by this resource count. It is quite easy to prove that 13 moves is
minimal: for we must clear the 8 corners, and this can only fill the holes at {c3, e3, c5, e5}. This leaves us with 5 more vacancies, each of which requires a move, for 13 moves total. In fact, the
only way to clear a corner is to end at one of {c3, e3, c5, e5} so for any 13-move solution all the moves must end in the central 9 holes.
The 41-Hole Diamond Board
What is the shortest solution to the central game on the 41-hole diamond board under 8-move solitaire? The resource count shown above isn't valid on this board, and the last move can be a long sweep
which removes as many as 25 of the pegs in a single move.
Shown below is an 11-move solution to the central game. Note the preference for diagonal jumps, especially at the start: the first 8 moves in fact consist only of diagonal jumps. A preference for
diagonal jumps at the start seems to be true of all 11-move solutions; the reason for this is unknown.
An exhaustive search for a 10-move solution came up empty, so 11 moves is the shortest possible.
If C9 is again the board position with only the 9 center pegs occupied, then it is possible to play from the complement of C9 to C9. An example of such a solution is shown below. The solution shown
below shows that it is even possible to reach the corners from C9. The first row of the solution goes from the top corner vacant to the complement of C9, the middle row goes from the complement of C9
to C9, and the final row goes from C9 to the top corner.
What is the shortest possible solution to the C9 complement? We can prove that at least 13 moves are required: to clear the 4 corners requires 4 moves, and these four moves can at best fill only one
of the central 9 holes. In fact, even to fill the middle hole from a corner will require we supply another peg to the central 9, and we must also fill the remaining 8 empty spots, for 13 moves total.
However it seems we cannot even get close to 13, because the shortest solution has 17 moves:
It is somewhat surprising that the central game itself takes only 11 moves, yet the C9 complement, which captures 23 pegs instead of 39, takes so many more moves.
The following resource count is useful in considering the C9 complement:
-1 1 0 1 1 1 0 1 -1
Note that for the C9 complement problem, the resource count begins at 12 and finishes at 9, so we can lose only 3. We can also rotate this resource count 90 degrees and obtain a tighter bound. The
moves that lose resource count are shown in the above diagram in red (using either the resource count or its 90-degree rotation).
The resource count isn't useful for single vacancy to single survivor problems because the value of the full board is 21, so there is a lot of slack. However, a "Merson Regions" constraint is quite
useful in finding minimal length solutions to such problems. It is not hard to demonstrate via exhaustive search with such a constraint that there is no single vacancy to single survivor problem on
this board solvable in less than 11 moves.
The 13-Hole Diamond Board (Hoppers Board)
This board is used in the puzzle "Hoppers", produced by ThinkFun. If you rotate this board 45 degrees, it is easy to see that it is a 13-hole Diamond Board, or draughtsboard created from a 5x5
square board, with the addition of diagonal moves along both diagonals. Without the addition of diagonal moves, no single vacancy to single survivor problem on this board is solvable. However, with
the diagonal jumps many problems become solvable, and its small size makes solutions easier to work out. An interesting advanced problem is to try to solve the central game on this board in as few
as 7 moves (recall that a move is one or more consecutive jumps by the same frog!).
In the diagrams below the board is rotated by 45 degrees; this may be confusing if you have played the ThinkFun puzzles. I find it easier to play the puzzle in this way, because all jumps move a frog
by two columns, two rows, or both. In the usual orientation it seems a little odd that some jumps move a frog by two "columns", others by four (which is why the ThinkFun puzzles have lines on the
board to help denote legal jumps).
Standard 5x5 notation       "Hoppers" notation

        c1                          A
     b2 c2 d2                    F  D  B
  a3 b3 c3 d3 e3              K  I  G  E  C
     b4 c4 d4                    L  J  H
        c5                          M

If we use standard 5x5 notation on this board (central hole is "c3"), then the following single vacancy to single survivor problems turn out to be solvable:

1. Vacate c1 (A), finish at any hole except {b3, d3} ({I, E})
2. Vacate b2 (F), finish at any hole except {c2, b3, c3, d3, c4} ({D, E, G, I, J})
3. Vacate c2 (D), finish at {c1, c5} ({A, M})
4. Vacate c3 (G), finish at {c1, a3, e3, c5, c3} ({A, C, K, G, M})
      -1
    0  1  0
-1  1  0  1  -1
    0  1  0
      -1

The resource count above is very useful when playing the Hoppers game. Conceptually, we can think of it as specifying that the removal of a corner frog at {c1, a3, e3, c5} ({A, C, K, M}) requires
removal of a frog at {c2, b3, d3, c4} ({D, E, I, J}), and the absence of a frog at the center c3 (G). This is a very useful observation and makes the puzzle much easier. We can prove that certain
single vacancy to single survivor problems are not solvable using this argument. For example, if we begin with c2 (D) vacant, the starting value of the resource count is -1, so we can only finish
at a hole labeled -1, a corner. Remember, by definition, no jump can increase the resource count.
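The defining property of the resource count — no jump can increase it — can be verified exhaustively, since the board has only a handful of legal jumps. This is a sketch assuming 5x5 coordinates with the center c3 at (2, 2) and holes where |x-2| + |y-2| <= 2:

```python
from itertools import product

# 13-hole diamond board in 5x5 coordinates: hole (x, y) exists when
# |x-2| + |y-2| <= 2 (so c3, the center, is (2, 2)).
HOLES = {(x, y) for x, y in product(range(5), repeat=2)
         if abs(x - 2) + abs(y - 2) <= 2}

# The resource count above, transcribed hole by hole (corners -1,
# mid-edges +1, the rest 0).
RC = {(2, 0): -1,
      (1, 1): 0, (2, 1): 1, (3, 1): 0,
      (0, 2): -1, (1, 2): 1, (2, 2): 0, (3, 2): 1, (4, 2): -1,
      (1, 3): 0, (2, 3): 1, (3, 3): 0,
      (2, 4): -1}

def all_jumps():
    """Every geometrically legal jump (from, over, to) in 8-move solitaire."""
    for (x, y), (dx, dy) in product(HOLES, product((-1, 0, 1), repeat=2)):
        if (dx, dy) == (0, 0):
            continue
        over, land = (x + dx, y + dy), (x + 2 * dx, y + 2 * dy)
        if over in HOLES and land in HOLES:
            yield (x, y), over, land

# A jump removes the pegs at `frm` and `over` and adds one at `to`, so it
# changes the count by RC[to] - RC[frm] - RC[over]; this is never positive.
assert all(RC[to] - RC[frm] - RC[over] <= 0 for frm, over, to in all_jumps())
```

The check also confirms the worked example in the text: the full board sums to 0, so vacating c2 (value +1) starts the count at -1.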
The photo below shows one of the "expert" cards from the game. Again the starting value of the resource count is -1, which tells us that
1. If we are to finish with one frog, it must end at a corner {c1, a3, e3, c5} ({A, C, K, M}).
2. If we make a jump that reduces the resource count, we will be unable to finish with one frog (all jumps must preserve the resource count).
A good strategy in general is to first concentrate on removing the corner frogs. On the board above, let the starting red corner frog be at c1 (A). Then we play d2-b4 (B-L) (to clear the center),
then c5-c3, c2-c4 (M-G, D-J) (to remove the corner frog at c5 (M)). We then must move the frog at a3 (K) to the center in order to remove him, so we play a3-c5-c3 (K-M-G). The final moves should
now be clear: c1-a3, d4-b2, a3-c1 (A-K, H-F, K-A). These last three jumps are the simplest of "block removal moves", the 3-removal or 3-purge.
Over the years, ThinkFun (formerly Binary Arts) has produced several versions of this puzzle. In 2004, they made a version with executives jumping one another called "Downsize". Around 2007,
penguins were the game pieces in "Cool Moves". As of 2012 the "Hoppers" version of the puzzle is the only one that is readily available. "Cool Moves" was sold to Discovery Toys, but they no
longer seem to make it.
Surprisingly, the central game (start and finish at the center) does not appear in the Hoppers card set (however it does appear on the final card in "Cool Moves"). It is easy to prove that the
central game cannot be solved in fewer than 7 moves. At the start, we have 4 pegs at c1, a3, e3 and c5. To remove them, we must move them to the center c3 and then jump over the center. All four
must be moved to the center, which takes 4 separate moves, and three of them must be removed, which takes 3 additional moves. This reasoning can also be used to find a solution in 7 moves. We
must have 4 moves starting at each of the 4 corners and ending at the center (two of which are the first and last moves), and 3 moves which jump over the center. An example solution is shown below.
In fact the same reasoning shows that no single vacancy to single survivor problem on this board can be solved in less than 7 moves.
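Because the board has at most 2^13 positions, the 7-move minimum for the central game can also be checked by brute force. This is a sketch under the same assumed coordinates (holes where |x-2| + |y-2| <= 2, jumps in all 8 directions), where a move is any chain of one or more jumps by the same frog:

```python
from collections import deque
from itertools import product

HOLES = frozenset((x, y) for x, y in product(range(5), repeat=2)
                  if abs(x - 2) + abs(y - 2) <= 2)
CENTER = (2, 2)
DIRS = [(dx, dy) for dx, dy in product((-1, 0, 1), repeat=2) if (dx, dy) != (0, 0)]

def moves(state):
    """All positions reachable in one move: a chain of >= 1 jumps by one frog."""
    found = set()
    def chain(pegs, pos):
        for dx, dy in DIRS:
            over = (pos[0] + dx, pos[1] + dy)
            land = (pos[0] + 2 * dx, pos[1] + 2 * dy)
            if land in HOLES and over in pegs and land not in pegs:
                after = frozenset((pegs - {pos, over}) | {land})
                found.add(after)
                chain(after, land)  # the same frog may keep jumping
    for peg in state:
        chain(state, peg)
    return found

def min_moves(start, goal):
    """Breadth-first search over board positions, counting moves."""
    start, goal = frozenset(start), frozenset(goal)
    dist, queue = {start: 0}, deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            return dist[s]
        for t in moves(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return None

# Central game: vacate c3, finish with one frog at c3 -- 7 moves minimum.
assert min_moves(HOLES - {CENTER}, {CENTER}) == 7
```

Chains always shorten (each jump removes a peg), so the enumeration terminates, and the whole state space fits comfortably in memory.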
The 25-Hole Diamond Board
This board can be obtained from the standard 33-hole board by removing the 8 corner holes. The central game can be solved in a minimum of 10 moves. A 10-move solution with the same finishing move
as on the 33-hole board is shown below:
Note that from the board position before the penultimate move, we can do a single move that finishes at the hole immediately left of the center. This shows that some problems on this board can be
solved in 9 moves.
A "Merson region" analysis can be used to prove that no solution to the central game can contain fewer than 10 moves. For this we use 8 regions: the four corners plus four edge pairs. There must
be one move originating from each region, and the first and last moves cannot be among these 8.
The 61-Hole Diamond Board
Here is a 15-move solution to the g6-complement problem. Can any problem on this board be solved in 14 moves? Can the central game be solved in 15 moves? Both these questions are unresolved.
┃ Central Game ┃
┃ Shortest Possible Solutions ┃
┃ Board │ Game │ Moves │ Number of │ Longest │ Comments ┃
┃ │ │ │ Solutions │ Sweep │ ┃
┃ │ 33-Hole │ 4-Move │ │ │ │ An 18-move solution was discovered by E. Bergholt in 1912. ┃
┃ │ Board │ (Normal) │ 18 │ 2 │ 5 │ Proved analytically to be the shortest possible by J. Beasley in 1962. Verified computationally. ┃
┃ │ (Standard) │ │ │ │ │ ┃
┃ ├─────────────┼──────────┼───────┼───────────┼─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨
┃ │ 33-Hole │ │ │ │ │ 33-hole board with moves along one diagonal only, ┃
┃ │ Board │ 6-Move │ 17 │ 413 │ 8 │ or on a triangular grid. Computational result. ┃
┃ │ (Standard) │ │ │ │ │ ┃
┃ ├─────────────┼──────────┼───────┼───────────┼─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┨
┃ │ 33-Hole │ │ │ │ │ 33-hole board with moves along both diagonals. ┃
┃ │ Board │ 8-Move │ 15 │ 4 │ 5 │ Computational result. I have been able to prove that this problem cannot be solved in less than 14 moves. ┃
┃ │ (Standard) │ │ │ │ │ ┃
┃ │ 37-Hole │ │ │ │ │ The central game is not solvable in 4 or 6-move solitaire, because this board is not null-class. It is not hard to prove that the ┃
┃ │ Board │ 8-Move │ 13 │ 1,778 │ 12 │ central game in 8-move solitaire cannot be solved in less than 13 moves, and no single vacancy to single survivor problem can be ┃
┃ │ │ │ │ │ │ solved in fewer than 12 moves. ┃
┃ │ 13-Hole │ │ │ │ │ This board is null-class in 4-move solitaire, however no complement problem is solvable. The central game is not solvable in 6-move ┃
┃ │ Diamond │ 8-Move │ 7 │ 7 │ 4 │ solitaire. For 8-move solitaire, it is not hard to prove that no single vacancy to single survivor problem can be solved in fewer ┃
┃ │ Board │ │ │ │ │ than 7 moves. ┃
┃ │ 25-Hole │ │ │ │ │ This board is null-class in 4-move solitaire, however no complement problem is solvable. The central game is solvable in 6-move ┃
┃ │ Diamond │ 8-Move │ 10 │ 71 │ 10 │ solitaire. ┃
┃ │ Board │ │ │ │ │ ┃
┃ │ 41-Hole │ │ │ │ │ ┃
┃ │ Diamond │ 8-Move │ 11 │ Lots │ ≥18 │ The central game is not solvable in 4 or 6-move solitaire, because this board is not null-class. ┃
┃ │ Board │ │ │ │ │ ┃
┃ │ 61-Hole │ │ │ │ │ ┃
┃ │ Diamond │ 8-Move │ ≥15 │ │ │ No solution on this board can be shorter than 14 moves. The shortest solution I have found has 15 moves (g6-complement). ┃
┃ │ Board │ │ │ │ │ ┃
"Number of Solutions" is the number of distinct move sequences possible, not counting the order of moves or solutions equivalent by symmetry (rotations or reflections). "Longest Sweep" is the
longest sweep possible in any solution of minimum length.
An analytical proof that a solution has minimum length is ideal (if a length is proven minimal analytically, it is listed in bold).
Copyright © 2007 by George I. Bell
Major Requirements
Physics B.S. majors are required to complete coursework in physics as well as co-requisite coursework in mathematics and an additional approved science.
Among these courses, the following ten are required:
Required Courses:
PHYS2200 (or PHYS2100*) & PHYS2050 Introductory Physics I with Lab
PHYS2201 (or PHYS2101*) & PHYS2051 Introductory Physics II with Lab
PHYS3100 Wave and Vibrations with Lab
PHYS3300 Intro to Modern Physics
PHYS4100 Mechanics
PHYS4200 Electricity and Magnetism
PHYS4400 Quantum Physics I
PHYS4401 Quantum Physics II
PHYS3510 Contemporary Electronics Lab
PHYS4600 Statistical Mechanics & Thermodynamics
*Physics majors are strongly recommended to take the PHYS2200-2201 sequence. PHYS2100-2101 is typically for Biology, Pre-med, and students fulfilling science requirements.
Choose one of the following:
PHYS4350 Experiments in Physics
PHYS4300 Computing in Physics*
PHYS4951 Senior Thesis**
Honors Program Thesis***
*PHYS4300 requires the prerequisite, Introduction to Scientific Computation.
**Senior Thesis is recommended for students planning graduate work in Physics.
*** For students in A&S Honors program doing a Physics Thesis.
Choose at least two elective courses (course offerings vary from year to year):
Elective Courses
PHYS4505 Nuclei and Particles
PHYS4545 Condensed Matter Physics
PH440 Applied Fluid Mechanics
PHYS4555 Optics
PH480 Intro. to Mathematical Physics
PH510 Stellar Astrophysics
PH515 Semiconductor Devices and Lasers
PH525 Plasma Physics
PHYS4565 Cosmology and Astrophysics
PH545 Intro. Chaos and Nonlinear Dynamics
PHYS4575 Physics of Nanomaterials
The following mathematics courses are required:
Mathematics Courses
MATH1102 Calculus I
MATH1103 Calculus II or MATH1105 Calculus II AP
MATH2202 Multivariable Calculus
MATH3305 Advanced Calculus*
*Students entering with Math AP Placement are advised to substitute MATH3305 with both Linear Algebra (MATH2210) and Differential Equations (MATH4410)
The final requirement is two approved courses in a science other than physics, normally CHEM1109-1110 General Chemistry along with the associated laboratories.
Choose two additional science courses with lab:
Science Courses with Lab
CHEM1109 & 1111 General Chemistry I with Lab
CHEM1110 & 1112 General Chemistry II with Lab
Other Approved Science I with Lab
Other Approved Science II with Lab
Writing “Either Or” formula in Excel [Formula Howtos]
Posted on March 2nd, 2010 in Excel Howtos, Learn Excel (11 comments)
We all know the AND, OR & NOT formulas in Excel, with which you can perform the simple logical operations And, Or & Negate. But what if you are the chief of HR at ACME Company, where they have a
strange rule on extra allowance like this:
Now, to calculate the dates in a month that meet this clause, we need an "exclusive OR" formula, or what geeks call an "XOR" operation.
The logical operation … exclusive or (symbolized XOR, EOR), … results in a value of true if exactly one of the operands has a value of true. A simple way to state this is “one or the other but
not both.”
Now, XOR or exclusive Or is a fairly common logical test, but there is no straightforward formula to test this. Instead we have to use a lengthy combination of AND, OR and NOT formulas to arrive at
the result.
For example, assuming you want TRUE only when exactly one of the two logical conditions A or B is TRUE,
you have to write,
=OR(AND(NOT(A),B),AND(A,NOT(B))) [After all, that is how the XOR operation is defined to begin with]
Now, that seems like an awful formula. Maybe there is a better formula after all?!? One that is less crazy than the HR clause of ACME Co.
Well, there is.
If you observe closely, XOR is nothing but <> (not equal to sign). So, instead of going nuts writing the lengthy ANDORNOT combination, you can simplify the formula to,
=A<>B and it gives the same outcome.
So, the formula to find whether a given date (in cell A1) qualifies for bonus allowance,
=IF((WEEKDAY(A1)=6)<>(MOD(DAY(A1),5)=0),"Pay Bonus","Pay Regular")
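As a cross-check, the same rule can be sketched outside Excel. This is a minimal Python sketch, assuming Excel's default WEEKDAY convention where 6 means Friday (Python's weekday() uses Monday = 0, so Friday is 4):

```python
from datetime import date, timedelta

def pays_bonus(d):
    """Mirror of =IF((WEEKDAY(A1)=6)<>(MOD(DAY(A1),5)=0), ...)."""
    is_friday = d.weekday() == 4          # Python: Friday = 4
    day_multiple_of_5 = d.day % 5 == 0    # MOD(DAY(A1),5)=0
    return is_friday != day_multiple_of_5 # XOR written as "not equal"

def pays_bonus_long(d):
    """Same test via the long form =OR(AND(NOT(A),B),AND(A,NOT(B)))."""
    a, b = d.weekday() == 4, d.day % 5 == 0
    return (not a and b) or (a and not b)

# Both forms agree on every day of March 2010 (the month this was posted).
march_2010 = [date(2010, 3, 1) + timedelta(days=i) for i in range(31)]
assert all(pays_bonus(d) == pays_bonus_long(d) for d in march_2010)
```

The agreement of the two functions over a whole month is exactly the point of the post: "not equal" and the AND/OR/NOT combination are the same test.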
More about logical formulas in Excel
AND Formula | OR Formula | NOT Formula | 51 common excel formulas
Do you XOR in real life?
There have been few occasions when I had to XOR in my worksheets. I found that writing the correct formula can be a bit tricky depending on how crazy the rule is. But almost always a combination of
<>, NOT, AND and OR worked for me well.
What about you? Do you write formulas that involve complex IF clauses?
Written by Chandoo
11 Responses to “Writing “Either Or” formula in Excel [Formula Howtos]”
1. What if I have to write a formula for XOR with 3 entries? By that I mean that I need a formula that receives 3 boolean inputs and returns TRUE only when only one of the inputs is also TRUE.
□ @Pedro.. You can use the SUM formula then, like this: =SUM(A,B,C)=1 to test whether only one of the 3 is true.
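The SUM trick above generalizes to any "exactly one of N" test; a quick sketch checking it against the definition:

```python
from itertools import product

def exactly_one(*flags):
    """Mirror of the suggested =SUM(A,B,C)=1: True/False sum as 1/0,
    so the sum counts how many inputs are TRUE."""
    return sum(flags) == 1

# Agrees with "exactly one input is True" over the whole truth table.
assert all(exactly_one(*row) == (row.count(True) == 1)
           for row in product([True, False], repeat=3))
```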
2. Been there… done that
Create a long complex IF incorporating an AND or OR and then after re-looking at it, change it to incorporate a <> and suddenly the function is more legible, more understandable and more elegant.
The poor <> is often overlooked.
3. sorry… my NOT EQUAL sign “”s did not display on my comment.
4. [...] Check for Either Or conditions in Excel [...]
5. doesn’t work in a SUMPRODUCT formula. However, an XOR calculation seems to be easy enough by using ‘=1′…
Assuming A2:A6={“France”;”France”;”France”;”Other”;”Other”} and B2:B6={“France”;”Other”;”France”;”France”;”Other”} …
Not quite OR (counts all occurrences of "France"):
‘Proper’ OR:
6. Dear All,
I,ve got some difficulties to make formula in excel chart as below
Pls see below table
1 a
2 b
3 c
4 d
5 e
6 f
. .
. .
. .
18 r
I mean, I want if I write “1″ in cell A1 then in cell C1 refer to “a”.
Do you have an idea for this?
Thanks for your big help
7. Everyone seems so knowledgable here! I have an issue that I’ve been stuck on for the past 3 hrs trying to figure out: a formula for this:
I want cell I38 to reflect one of these two values:
If H38 reads “n” or left blank, then perform this function:multiply F38x0.1, then add F38 to that product.
If F38 is not “n” or left blank, But reads “y”, then perform this function: I38=F38
Can I even assign two functions like this (an either/or) to one cell? I’m going cross-eyed and perhaps, a little insane trying to work this. Thanks for your help!
□ @Bets
To solve your problem use:
=IF(OR(H38="", H38="n"),1.1,1)*F38
It is quite possible using the If() function to have lots of nested and totally different formulas in one cell which will do different things when the circumstances are right
8. Hi!
I want to know if it’s posssible for me to combine below formulas into one formula using OR.
=IF(P42<="00:45", "A", IF(P42<="01:00", "B", "C"))
=IF(ISNUMBER(SEARCH("|", K42, 5)), "C", "A")
□ @Adhita
But it depends on the order of the if’s and or’s
The general format is
=If( Or(Condition 1 = Value 1, Condition 2 = Value 2, Condition 3 = Value 3), Value if True, Value if False)
The Or means that it will be true of any of the 3 conditions are met
If you need them all to be True to trigger the If, use And()
=If( And(Condition 1 = Value 1, Condition 2 = Value 2, Condition 3 = Value 3), Value if True, Value if False)
You can nest results as you have done above as well
Can you post a table of what should happen if various combinations of results are met?
A circle is the set of all points equal in distance from a center point. If you pinned a piece of yarn down, then tied a pencil to the other end of the yarn and pulled the pencil around, you'd
roughly draw a circle.
Here are some more definitions that you need to know:
• Chord: a line segment connecting two points on a circle.
• Diameter: the distance across the center of a circle.
• Radius: the distance from the center of a circle to a point on the circle.
• Secant: a line intersecting a circle at two points.
• Tangent: a line intersecting a circle at exactly one point.
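The defining property — all points at the same distance from a center — can be sketched as a small distance check; the function name here is just illustrative:

```python
from math import hypot, isclose

def classify(point, center, radius):
    """Classify a point against a circle using the definition: the circle
    is exactly the set of points at distance `radius` from `center`."""
    d = hypot(point[0] - center[0], point[1] - center[1])
    if isclose(d, radius):
        return "on the circle"
    return "inside" if d < radius else "outside"

# A circle of radius 5 centered at the origin:
assert classify((3, 4), (0, 0), 5) == "on the circle"  # 3-4-5 triangle
assert classify((1, 2), (0, 0), 5) == "inside"
assert classify((6, 0), (0, 0), 5) == "outside"
```

The same distance is the radius from the definitions above, and twice it is the diameter.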
All circles are similar, meaning that they have the same shape but may be different sizes.
We are not going to do any exercises with circles right now, but we will be working with them when we get into area and perimeter.
Descriptive analysis of data
The degree of data processing and analysis varies by type of statistical products prepared by the national statistical offices. (See Box 4.1 for types of statistical products that may include gender
statistics.) Typically, tables constructed to disseminate data collected in censuses or surveys involve minimum data processing and analysis. A large amount of data is provided, often as absolute
frequencies or counts of observations, making it difficult to discern the main differences between women and men. Additional processing and analysis are developed when more analytical reports or
articles focused on specific topics are prepared. In this case, the differences between women and men have a chance of becoming more visible.
Gender statistics require at least two statistical variables cross-tabulated: sex and the main characteristic that is studied, such as educational attainment or labour force participation. Ideally,
additional variables are used in further crosstabulation of data (for example, by age group or geographic areas) in three- or multiple-way tables. Although statistics on individuals have been
traditionally disseminated as totals with no further information on women and men, data are increasingly disaggregated by sex in dissemination materials. Still, one limitation in producing gender
statistics persists. Sex is often used as only one of the breakdown variables for the data presented. As explained in chapter 1 and shown in chapter 2, gender statistics and a meaningful gender
analysis commonly require disaggregation by sex and other characteristics at the same time. For example, gender segregation in the labour market is partially determined by the gender gap in
education, therefore data on occupations should be further disaggregated by level of educational attainment.
Basic descriptive analysis of data involves calculation of simple measures of composition and distribution of variables by sex and for each sex that facilitate straightforward gender-focused
comparisons between different groups of population. Depending on the type of data, these measures may be proportions, rates, ratios or averages, for example. Furthermore, when necessary, such as in
the case of sample surveys, measures of association between variables can be used to decide whether the differences observed for women and men are statistically significant or not.
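The kind of simple compositional measure described here — for example, the percentage distribution of a characteristic within each sex — can be sketched from micro-data; the records below are hypothetical:

```python
from collections import Counter

# Hypothetical micro-data: (sex, highest level of educational attainment).
records = [
    ("F", "primary"), ("F", "secondary"), ("F", "secondary"), ("F", "tertiary"),
    ("M", "primary"), ("M", "primary"), ("M", "secondary"), ("M", "tertiary"),
]

def distribution_within_sex(rows):
    """Percentage distribution of the characteristic within each sex
    (each sex's percentages sum to 100), a basic descriptive measure
    that makes comparisons between women and men straightforward."""
    cell = Counter(rows)
    total = Counter(sex for sex, _ in rows)
    return {(sex, level): 100 * n / total[sex]
            for (sex, level), n in cell.items()}

dist = distribution_within_sex(records)
assert dist[("F", "secondary")] == 50.0  # 2 of the 4 women
assert dist[("M", "primary")] == 50.0    # 2 of the 4 men
```

The same structure underlies a two-way cross-tabulation of sex by any categorical variable.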
Percentages, ratios, rates or averages are the basis for calculation of gender indicators. Indicators, in general, are used to “indicate” how differently one group performs by comparison to a norm or
a reference group. Gender indicators should show how women perform by comparison to men and what their status is relative to men's, in areas such as education, formal work, access to resources,
health or decision-making. In this regard, gender indicators are important tools for planners and policy makers in monitoring progress toward gender equality.
The sections following present the type of data involved in gender statistics, measures of composition and distribution used in gender statistics, and the types of gender indicators that can be
constructed using those measures.
Box 4.1 Types of statistical products disseminating gender statistics
Gender statistics are made available by national statistical offices through various types of dissemination products. Some of the dissemination products are part of the regular production of
a statistical office and aimed at making available data collected in censuses, sample surveys or compiled from administrative sources. They usually concern one type of data source or one
statistical field and are intended to specialists who wish to further analyse the results of censuses or surveys or carry out research on specific topics. The data disseminated in this type
of products can be detailed, organized in large tables, and often presented as absolute values or raw data that would give specialists more flexibility in doing their own analysis. A gender
perspective can be integrated in these products by systematic sex-disaggregation of data and systematic coverage of data needed to address gender issues.
Other dissemination products that may include gender statistics are analytical reports or articles focused on specific topics. Data and other information may be compiled from more than one
source and different statistical fields may be covered. Policy concerns are usually taken into account. These publications are intended for larger audience, not only statisticians, but also
research and policy specialists in the topic or topics covered. Data disseminated in this type of product is presented in small summary tables and charts and discussed in the accompanying
text. Large tables with more detailed data may be provided in annexes. A gender perspective can be integrated in these products through three elements: data-based analysis of gender issues
specific to the selected topic; illustrations with gender-sensitive tables and charts; and systematic sex-disaggregation of data presented in annexes of the publication.
Statistical publications focused on gender issues are one type of analytical reports. The typical example is the “Women and men” publications produced by many national statistical offices.
These publications contain data from different statistical fields and from different sources; cover multiple policy areas and gender issues; and are addressed to a large audience, including
persons with limited or no experience in statistics. They are an important tool for non-statisticians, gender specialists, gender advocates and policy makers. Instead of presenting data and
let the reader analyze them and draw their own conclusions, these publications are focused on presenting the main results of data analysis and their interpretation, including implications for
policymaking. They are usually designed to be user friendly, based on easily comprehended language, with simple tables and charts, and attractive presentation.
Finally, gender statistics are disseminated through dedicated databases or through more comprehensive databases such as those focused on social indicators, development indicators or human
development indicators. Data disseminated in this format usually cover several areas of concern and several points in time or time periods. Data are usually presented ready processed into
indicators that facilitate comparisons over time or between various groups of population. Information on calculation of indicators included in the database, underlying definitions or concepts
used, and sources of data used, are sometimes made available along with the database. This type of dissemination product is usually targeted to specialists interested in analyzing statistical
information themselves, including for monitoring purposes.
Hedman, Birgitta, Francesca Perucci and Pehr Sundström, 1996. Engendering Statistics. A Tool for Change. Statistics Sweden.
United Nations, 1997. Handbook for Producing National Statistical Reports on Women and Men. DESA, United Nations Statistics Division, New York.
United Nations Economic Commission for Europe and World Bank Institute, 2010. Developing Gender Statistics: A Practical Tool. Geneva.
Type of data involved in gender statistics: qualitative and quantitative variables
Statistical variables are classified into two broad classes based on their measurement level: qualitative variables, also called categorical variables (for example, sex, marital status,
ethnicity, educational attainment); and quantitative variables (for example, age, income, and time spent on paid or unpaid activities). Categorical variables are of two major types: nominal
variables (such as sex and marital status) and ordinal variables (such as educational attainment). Nominal variables do not imply any continuum or sequence of their categories. Typical
examples include sex or ethnicity. The categories can be arranged in any order without affecting the analysis. However, for convenience of presentation, they can be arranged
alphabetically, in order of their relative size in the population, or in order of the relative focus of the publication (for example, first women, followed by men). Ordinal variables imply an
underlying continuum. When dealing with ordinal variables, the categories must be arranged in the order implied by the continuum to facilitate analysis of the data. A typical example is
“level of educational attainment”. The categories can be ordered in ascending or descending order of level of education. For example: no education, primary education, secondary education,
post-secondary non-tertiary education, and tertiary education. Some continuous variables tend to be coded into a few categories and treated as ordinal variables. For example, age in single
years can be recoded into 5-year age groups and displayed from the youngest to the oldest ages. The distinction between types of variables is important because specific statistical measures
can be applied to each type, as shown in the paragraphs following.
Measures of composition or distribution for qualitative variables
Computation of proportions, percentages, ratios and rates is a basic statistical procedure in describing the categorical composition or distribution of qualitative variables, and a useful
tool for standardization of the statistics compared. It is important to keep in mind that measures of composition or distribution should not be calculated for a small number of observations.
In that case, actual numbers (absolute frequencies) should be preferred.
Proportions and percentages
A proportion is defined as the number of observations in a given category of a variable relative to the total number of observations for that variable. It is calculated as the number
of observations in the given category divided by the total number of observations. The sum of the proportions of observations in each category of a variable should equal unity, unless the
categories of the variable are not mutually exclusive. Most often, proportions are expressed as percentages. Percentages are obtained by multiplying proportions by 100. Percentages will add
up to 100 unless the categories are not mutually exclusive.
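As a small illustration (not part of the original handbook; the category counts below are invented), proportions and percentages can be computed directly from absolute frequencies:

```python
def proportions(counts):
    """Proportion of observations in each category of a qualitative variable."""
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

# invented marital-status counts for a population of 400
marital_status = {"single": 120, "married": 260, "widowed": 20}

props = proportions(marital_status)
percentages = {c: 100 * p for c, p in props.items()}

print(round(percentages["single"], 1))   # 30.0
print(round(sum(props.values()), 10))    # 1.0 -- proportions sum to unity
```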
In gender statistics, proportions can be calculated as relative measures of (a) distributions of each sex by selected characteristics; and (b) sex distributions within the categories of a
characteristic. These two types of proportions are presented in Table 4.1. In the first case, the proportions are calculated as relative frequencies of the categories of a
characteristic for each sex, with women’s and men’s respective totals used as the denominators. For example, in the third column of data in Table 4.1 it can be observed that the employed
represent 39 per cent of all women. This is calculated as the number of women employed divided by women’s total population in the corresponding age group and multiplied by 100. In
comparison, the employed represent 73 per cent of all men, as shown in the fourth column of data. This is calculated as the number of men employed divided by men’s total population in the
corresponding age group and multiplied by 100.
In gender-related analysis, proportions calculated as percentage distributions can be used to compare women and men with regard to various social or economic characteristics. A simple
measure of the gender gap is the differential prevalence, where percentages in the distribution of a characteristic within the female population are subtracted from the corresponding
percentages in the distribution of the characteristic within the male population. The resulting percentage-point difference indicates the gender gap in the characteristic considered. In our
case, the proportion of women employed is lower than the proportion of men employed by 34 percentage points.
The percentage distribution of the categories of a characteristic for each sex is the basis of most gender indicators. A few examples are the labour force participation rate, the literacy
rate, the school attendance rate, and contraceptive use. Based on the proportions calculated in data columns 3 and 4 of Table 4.1, two indicators of the status of women and men in the labour
market can be derived directly. For example, the proportion of women who are employed (39 per cent in our case) is actually the indicator employment-to-population ratio, one of the
indicators for the first Millennium Development Goal, on eradication of poverty and hunger. Furthermore, the proportion of women who are employed or unemployed gives the labour force
participation rate (in our case, the labour force participation rate for women is 39 + 2 = 41 per cent). Based on the data presented in the table, two other indicators can be calculated: the
unemployment rate (the proportion of unemployed in the total of employed and unemployed); and the employment rate (the proportion of employed in the total of employed and unemployed).
Table 4.1  Economic activity status for population 15-64 years old, Peru, 2007

                                          Number                Percentage         Sex distribution
                                                               distribution          (per cent)
                                     Women        Men         Women    Men      Women   Men   Total
Employed                           3460389    6186103           39      73        36     64     100
Unemployed                          154781     301469            2       4        34     66     100
Not economically active population 5156664    2030531           59      24        72     28     100
Total population                   8771834    8518103          100     100
Source: United Nations Statistics Division, DYB, Census data sets (accessed January 2012).
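The indicators discussed above can be reproduced from the absolute numbers in Table 4.1. A minimal sketch (the function and variable names are mine, not from the handbook; rounding to whole percentages matches the table):

```python
# Absolute numbers from Table 4.1 (Peru, population 15-64 years old, 2007)
women = {"employed": 3460389, "unemployed": 154781, "inactive": 5156664}
men   = {"employed": 6186103, "unemployed": 301469, "inactive": 2030531}

def pct_distribution(counts):
    """Percentage distribution of one sex across activity statuses (by column)."""
    total = sum(counts.values())
    return {k: 100 * v / total for k, v in counts.items()}

w, m = pct_distribution(women), pct_distribution(men)
print(round(w["employed"]), round(m["employed"]))    # 39 73

# gender gap in employment, in percentage points
print(round(m["employed"]) - round(w["employed"]))   # 34

# labour force participation rate = employed + unemployed
print(round(w["employed"] + w["unemployed"]))        # 41

# sex distribution (by row): share of women among the employed
share_women = 100 * women["employed"] / (women["employed"] + men["employed"])
print(round(share_women))                            # 36
```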
Sex distributions within the categories of a characteristic are shown in data columns 5 and 6 of Table 4.1. In this case the proportions are calculated by row, as opposed to the previous
type of proportions, which were calculated by column. For example, 36 per cent of the employed are women and the remaining 64 per cent are men. The share of women among the employed is
calculated as the number of women employed divided by the total number of women and men employed and multiplied by 100.
Among the gender indicators constructed based on the sex distribution within a category of population are the proportion of seats in parliament held by women, the share of girls among
out-of-school children, the share of women among agricultural workers, and the share of women among the older population living alone.
This type of indicator is often used for population groups known to have an overrepresentation of women or men. The selected groups are often linked to a policy concern. For example, in many
countries women represent a minority of parliament members, ministers, chief executives of corporations, mayors, and researchers. Policies based on gender quotas are used by some
countries to increase the participation of women in those groups.
The percentage of women and the percentage of men in a group always add up to one hundred per cent. Because of that, often only one of the indicators (usually the share of women) is
presented in tables or graphs.
Ratios
Particular compositional aspects of a population can be made explicit by the use of ratios. A ratio is a single number that expresses the relative size of two numbers. The ratio of one
number A to another number B is defined as A divided by B. Ratios can take values greater than unity. Because of the way they are calculated, proportions can be considered a special type of
ratio in which the denominator includes the numerator. However, ordinarily, the term ratio is used to refer to instances in which the numerator (A) and the denominator (B) represent separate
and distinct categories. Ratios can be expressed on any base that happens to be convenient; the base of 100 is often used.
A well-known example of a ratio based on qualitative variables is the sex ratio – the number of males per 100 females – used to state the degree to which members of one sex outnumber those
of the other sex in a population or subgroup of a population. A variation of this indicator is the sex ratio at birth, defined as the number of male live births per 100 female live births.
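A one-line sketch of the sex ratio, applied to the population totals from Table 4.1 (the function itself is illustrative, not from the text):

```python
def sex_ratio(males, females):
    """Number of males per 100 females."""
    return 100 * males / females

# Total population 15-64 from Table 4.1: 8 518 103 men, 8 771 834 women
print(round(sex_ratio(8518103, 8771834), 1))   # 97.1
```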
Other gender indicators based on sex ratios may involve the standardization of the variables used. For example, the gender parity index calculated for participation at various levels of
education is intended to reflect the surplus of girls or boys enrolled in school. The indicator can be calculated simply by dividing the number of girls enrolled by the number of boys
enrolled. This gives a good estimation of the distribution by sex in enrolment. However, it gives a poor measure of gender differences in access to education, because the differences in the
number of girls and the number of boys that should be in school (the school-age population) are not taken into account. An alternative calculation of the indicator that controls for the sex
composition of the school-age population uses the ratio of net enrolment rates (or gross enrolment ratios) for girls to net enrolment rates (or gross enrolment ratios) for boys.
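To make the difference between the two calculations concrete, here is a hedged sketch with invented enrolment figures; the raw ratio and the enrolment-rate ratio can tell different stories:

```python
def gpi_raw(girls_enrolled, boys_enrolled):
    """Simple ratio of girls to boys enrolled."""
    return girls_enrolled / boys_enrolled

def gpi_rates(girls_enrolled, girls_school_age, boys_enrolled, boys_school_age):
    """Ratio of girls' to boys' net enrolment rates (controls for the
    sex composition of the school-age population)."""
    return (girls_enrolled / girls_school_age) / (boys_enrolled / boys_school_age)

# invented figures: fewer girls enrolled, but also fewer girls of school age
print(round(gpi_raw(450, 500), 3))               # 0.9
print(round(gpi_rates(450, 470, 500, 540), 3))   # 1.034 -- near parity in access
```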
Rates
In general, proportions and ratios are useful for analysis of the composition of a population or of a set of events. Rates, in contrast, are used to study the dynamics of change. Most often
used in gender statistics are rates of incidence. A rate of incidence is usually defined as the number of events that occur within a given time interval (usually a year) divided by the
number of members of the population who were exposed to the risk of the event during the same time interval. Rates can be considered a special type of ratio, in the sense that they are
obtained by dividing one number (of events) by another number (of population exposed to the event). In calculating rates, it is usually assumed that the events are evenly distributed
throughout the year, while the population at risk is approximated by the midyear population. Demographic rates such as fertility rates and mortality rates are typical examples of rates
calculated in gender statistics. By convention, some ordinary percentage figures showing the composition of a population group are called rates. For example, what is called the literacy rate
is actually a simple percentage of the population that is literate.
When data on the population exposed to risk are not easily available, a close approximation of that population is used as the denominator to summarize the incidence of the events considered.
The indicator obtained is no longer considered a rate, but a ratio. For example, in the case of maternal mortality, when the originating population – that is, the number of pregnant women –
is not available, the indicator is calculated on the number of live births, and is more accurately called the maternal mortality ratio.
Data used for the numerator and data used for the denominator in calculating rates sometimes come from different sources. For example, in the case of mortality rates, data on deaths used for
the numerator may come from the civil registration system, while data on population used for the denominator may come from population censuses. When data from different sources are to be
combined, it is essential to ascertain whether they are comparable in terms of the coverage of all population groups and geographic areas, and the time period (see Box 4.2).
A probability is similar to a rate, with one important difference: the denominator is composed of all those persons in a given population at the beginning of the period of observation.
Typical examples are the infant mortality rate and the under-five mortality rate. The numerators are infant and child deaths, respectively. The denominator used is the number of births,
which represents the population at risk of dying at the beginning of the period of observation.
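As a small numeric illustration (the figures are invented), the infant mortality rate divides infant deaths by the births that constitute the population at risk at the start of the period:

```python
def infant_mortality_rate(infant_deaths, live_births):
    """Infant deaths per 1,000 live births in the same period."""
    return 1000 * infant_deaths / live_births

print(infant_mortality_rate(1500, 60000))   # 25.0
```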
Measures of composition or distribution for quantitative variables
In gender statistics, the measures of central tendency and dispersion commonly used to analyse continuous variables are the median and quantiles, and the arithmetic mean and the standard deviation.
Medians and quantiles
The median is the value that divides a set of ranked observations into two groups of observations of equal size. Examples of indicators based on the median are the median age of the
population and the median income of the population. The concept of the median can be generalized, obtaining quantiles, which divide a ranked distribution into groups with equal numbers of
observations. Examples of quantiles are quartiles, quintiles, deciles and percentiles. Quartiles divide the ranked distribution into four equal groups, quintiles into five groups, deciles
into ten groups, and percentiles into one hundred groups. These measures are often used in presenting the distribution of income or wealth scores.
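Python's standard library computes these directly; a sketch with invented income values (note that `statistics.quantiles` returns the n-1 cut points, here using its default method):

```python
import statistics

incomes = [12, 15, 18, 22, 25, 31, 40, 55, 70, 120]   # invented, already ranked

print(statistics.median(incomes))            # 28.0 -- half the values on each side
print(statistics.quantiles(incomes, n=4))    # quartiles: [17.25, 28.0, 58.75]
print(statistics.quantiles(incomes, n=10))   # deciles: nine cut points
```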
Means and standard deviation
The arithmetic mean (or average) is defined as the sum of the values recorded for a quantitative variable divided by the total number of observations. Examples of indicators based on the
arithmetic mean are average time spent on unpaid work by sex, the average size of land owned by sex of the owner, mean age at first marriage by sex and mean age of mother at first birth.
Some gender indicators are calculated as ratios between the averages calculated for women and for men. For example, one of the indicators commonly used to show the gender pay gap is the
ratio of female to male earnings in manufacturing. It is calculated by dividing the average earnings of women employed in manufacturing by the average earnings of men employed in
manufacturing.
Deviations from the mean are the differences between the values of each observation for a particular variable and the mean of all values observed for that variable. Values of some
observations are greater than the mean, so their deviations from the mean are positive, while values of other observations are smaller than the mean, so their deviations from the mean are
negative. When the deviations from the mean are squared, all the negative deviations become positive. The sum of all squared deviations divided by the number of observations (or by the
number of observations minus 1 in the case of data from sample-based surveys) is called the variance. The variance is a measure of variability in the distribution of a variable. It
represents the degree to which individuals differ from the mean value of a variable. The greater the spread of observations, the greater the variance. Because the variance is measured in
squared units of the variable, it is difficult to interpret its values. Taking the square root of the variance returns the measure to the original unit of the variable. This measure is
called the standard deviation. The size of the standard deviation relative to that of the mean is called the coefficient of variation.
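The chain from deviations to the coefficient of variation can be followed step by step; a minimal sketch with invented observations:

```python
import statistics

values = [4, 7, 6, 9, 4]                      # invented observations
mean = statistics.fmean(values)               # 6.0

deviations = [x - mean for x in values]       # [-2, 1, 0, 3, -2]
variance = sum(d ** 2 for d in deviations) / len(values)   # population variance
std_dev = variance ** 0.5
cv = std_dev / mean                           # coefficient of variation

print(variance)                               # 3.6
print(round(std_dev, 3), round(cv, 3))        # 1.897 0.316

# the library versions: pvariance divides by n, variance by n - 1 (samples)
assert abs(variance - statistics.pvariance(values)) < 1e-12
```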
Although measures of dispersion such as the standard deviation and the coefficient of variation are not often presented in gender statistics, they have an important role in measuring the
degree of association between variables and in making inferences about a population based on data collected from a sample of that population.
Box 4.2 Using data from different sources
When data from different sources are to be combined, it is essential to ascertain whether they are comparable in terms of coverage, time period, definitions and concepts. Statistics from
different government sources may differ in arrangement, detail and choice of derived figures. Moreover, what appear to be comparable figures may not be, due to errors or variations in
classification or data-processing procedures. Lack of comparability can also be a problem with time-series data, if concepts or methods have changed from one period to another.
Checks for consistency and comparability between different sources should be made any time different sources are to be combined. Obtaining comparable data for the period covered by a study or
for completing time series should be a paramount concern. It is most problematic when different sources are used for the same indicator (say, if missing years require supplementary data). Any
variations in concepts from different sources and even different years within the same source should be thoroughly checked.
In most cases these checks can be made by reviewing the source’s documentation. It is also a good idea to consult specialists in different fields who may themselves supply or use the data.
These specialists often have additional information on availability of data (which may not be well publicized). They often understand special considerations of specific types of data, and
know of existing evaluations.
Source: Excerpt from United Nations 1997, Handbook for Producing National Statistical Reports on Women and Men.
Ninth Grade (Grade 9) Geometry and Measurement Tests and Worksheets
Ninth Grade (Grade 9) Geometry and Measurement Questions
Create printable tests and worksheets from Grade 9 Geometry and Measurement questions. Select questions to add to a test using the checkbox above each question. Remember to click the add selected
questions to a test button before moving to another page.
SICStus Prolog 4.1 released and My SICStus Prolog page

SICStus Prolog version 4.1 was released yesterday (2009-12-11). Congratulations to the SICStus team! See the release announcement for a summary of the news in this version.
For the past few weeks I have tested the beta version of this release. In these tests I focused mainly on the following:
• the do-loops (for loops)
• support for MiniZinc version 1.0
• and somewhat SPIDER (the new IDE)
I did report some findings in the beta version, but most of the time I have been creating SICStus models and just learning the system, especially the constraint programming part. These
models can be downloaded at My SICStus Prolog page. See below for more about these models.
SICStus Prolog's support for do-loops is based on Joachim Schimpf's paper Logical Loops (link to CiteSeer). These constructs were first implemented in the ECLiPSe Constraint Programming
System (which I wrote about in Constraint programming in ECLiPSe Constraint Programming System).
The following constructs are implemented in SICStus version 4.1:
IterSpec1, IterSpec2
An example: here is a simple model that demonstrates the foreach loop; the constraints make it a Latin square, given the hints in the Problem matrix:
solve(ProblemNum) :-
        problem(ProblemNum, Problem),
        format('\nProblem ~d\n',[ProblemNum]),
        length(Problem, N),
        append(Problem, Vars),
        domain(Vars, 1, N),
        ( foreach(Row, Problem)
        do all_distinct(Row)      % loop body dropped in extraction; all_distinct/1 assumed
        ),
        transpose(Problem, ProblemTransposed),
        ( foreach(Column, ProblemTransposed)
        do all_distinct(Column)   % assumed, as above
        ),
        labeling([], Vars).

problem(1, [[1, _, _, _, 4],
            [_, 5, _, _, _],
            [4, _, _, 2, _],
            [_, 4, _, _, _],
            [_, _, 5, _, 1]]).
Some other examples are from the Bin Loading model. First is an example of a constraint that requires a (reverse) ordered list:

% load the bins in order:
% first bin must be loaded, and the list must be ordered
element(1, BinLoads, BinLoads1),
BinLoads1 #> 0,
( fromto(BinLoads,[This,Next|Rest],[Next|Rest],[_])
do This #>= Next      % do-part restored (dropped in extraction)
)
Next is how to sum occurrences of the loaded bins, using fromto to count the number of loaded bins (Loaded is a reified binary variable):

% calculate number of loaded bins
% (which we later will minimize)
( foreach(Load2,BinLoads),
  fromto(0,In,Out,NumLoaded)    % accumulator dropped in extraction; assumed
do
  Loaded in 0..1,
  Load2 #> 0 #<=> Loaded #= 1,
  Out #= In + Loaded
)
A slightly more general version of ordering a list is the predicate

my_ordered(P, List) :-
        ( fromto(List, [This,Next|Rest], [Next|Rest], [_])
        do call(P, This, Next)    % loop body dropped in extraction; assumed
        ).

where a comparison operator is used in the call, e.g. #>= to sort in reversed order.
Most of my models use these do-loop constructs. As you can see from the models, I'm not a Prolog guy by profession, even though the language has fascinated me for a long time. SICStus' (and
also ECLiPSe's) approach of using do-loops has made me appreciate Prolog in a new way. Very inspiring!
Two missing constructs in SICStus (compared to the ECLiPSe implementation) are for(...) * for(...) and its relatives. When writing the ECLiPSe models I used these quite often, for example in
the ECLiPSe model mentioned above. Also, SICStus doesn't support the syntactic sugar of array/matrix access with [] (note, this feature is not related to do-loops). When I first realized
that they were missing in SICStus, I thought it would be a hard job to convert my ECLiPSe models to SICStus. Instead, these omissions were - to my surprise - quite a revelation.
My ECLiPSe models were mostly just translations of the MiniZinc/Comet models I've written before, i.e. array/matrix based models where array access and loops such as forall(i in 1..n) (...)
are used all the time. Not surprisingly, my first SICStus models were then just a conversion of these array/matrix accesses, using constraints to simulate the ECLiPSe constructs. But it was
not at all satisfactory; in fact, it was quite ugly sometimes, and also quite hard to understand. Instead I started to think about the problems again and found better ways of stating them,
and thus used much less of these constructs. Compare the above mentioned model with the SICStus version, which in my view is more elegant than my ECLiPSe version.
Here is a simple comparison of the assignment models mentioned above.

The ECLiPSe version (using []-indices):

% exactly one assignment per row, all rows must be assigned
( for(I,1,Rows), param(X,Cols) do
    sum(X[I,1..Cols]) #= 1
),

% zero or one assignments per column
( for(J,1,Cols), param(X,Rows) do
    sum(X[1..Rows,J]) #=< 1
),

% calculate TotalCost
( for(I,1,Rows) * for(J,1,Cols),
  fromto(0,In,Out,TotalCost),    % accumulator dropped in extraction; assumed
  param(X,Cost) do
    Out #= In + X[I,J]*Cost[I,J]
)

As we see, this is heavily influenced by the MiniZinc/Comet way of coding.
Here is the corresponding code in SICStus:

% exactly one assignment per row, all rows must be assigned
( foreach(Row, X) do
    sum(Row, #=, 1)          % loop body dropped in extraction; sum/3 assumed
),

% zero or one assignments per column
transpose(X, Columns),
( foreach(Column, Columns) do
    sum(Column, #=<, 1)      % assumed, as above
),

% calculate TotalCost
append(Cost, CostFlattened),
append(X, XFlattened),                                    % assumed
scalar_product(CostFlattened, XFlattened, #=, TotalCost)  % assumed

(Almost the same code could be written in ECLiPSe as well; this is more about showing how my approach to coding has changed.)
However, I have not been able to completely free myself from the matrix approach; sometimes it was simply too hard, and sometimes I was just lazy.
A final note: the example directory of the SICStus distribution now contains many examples of do-loops. Some of the older examples have also been rewritten using these constructs.
Support for MiniZinc/FlatZinc version 1.0

One other quite large part of the testing was of the support for MiniZinc/FlatZinc version 1.0 (the earlier SICStus version supported just version 0.9). The solver is often very fast, but I
did not do any systematic comparison with other solvers using the beta version.
I am especially happy that the FlatZinc solver now shows statistics, e.g.
% runtime: 80 ms
% solvetime: 10 ms
% solutions: 1
% constraints: 22
% backtracks: 1
% prunings: 64
This makes it easier to compare it to other solvers.
There are a number of ways of running a MiniZinc model or FlatZinc file (see the documentation). Here is how I normally run the MiniZinc models from the command line (via my Perl wrapper
program):

mzn2fzn -G sicstus minesweeper_3.mzn && sicstus --goal "use_module(library(zinc)), on_exception(E,zinc:fzn_run_file('minesweeper_3.fzn',[solutions(1),statistics(true),timeout(30000)]), write(user_error,E)), halt."
• statistics(true): show the statistics
• solutions(1): just show one solution. Use solutions(all) for all solutions
• timeout(30000): time out in milliseconds (here 30 seconds)
• the on_exception wrapper, so errors don't give the prompt. Note: There may be some strange results if the timeout from library(timeout) is used at the same time.
Note: "set vars" and "float vars" are not supported in this version.
I just briefly tested SPIDER, the Eclipse-based development environment for SICStus, and I honestly don't know how much I will use it instead of good old Emacs. However, were I a
professional SICStus Prolog programmer, I would surely use it all the time, since it has a lot of very good features especially crafted for SICStus.

One very useful feature is that it is quite good at detecting variables not properly declared in the do-loops (and singletons elsewhere); one has to use param for using a variable in inner
loops. SPIDER is very good at detecting errors when such param values are missing (as well as other errors). For many models I have just started SPIDER to check if there were such bugs.
Another nice feature is the "balloon help", where the SICStus documentation is presented (click in the yellow area for more information).
My SICStus Prolog models
As always when starting to learn a new constraint programming system, I started with my learning models, this time mostly by translating the ECLiPSe models. At first it took quite a long
time, since I missed some constructs I was used to in ECLiPSe (see above). But after a while it became rather easy to do things in a more SICStus-like way, and now I really like writing in
SICStus. Still, there are some models where this direct translation shows (and I'm not very proud of these models).

Also, I would like to thank Mats Carlsson and Magnus Ågren of the SICStus team for some great help with some of the first models.
So, I have now done almost all my ECLiPSe models in SICStus, except for those using floats and some where SICStus' distributed examples are much better. SICStus doesn't support "set vars",
so I used boolean lists instead. Also, there are some 30-40 models not written in ECLiPSe, and hopefully more are to come. All these models can be seen at My SICStus Prolog page.

These models contain comments on the problems, e.g. references, and also links to others of my models implementing the same problem (in other constraint programming systems).
[FOM] Formalization Thesis
Andre.Rodin@ens.fr Andre.Rodin at ens.fr
Mon Jan 7 10:58:03 EST 2008
William Messing wrote:
> I do not understand. A category which possesses a final object does
> not necessarily possess a unique final object. For example, any one
> element set is a final object in the category of sets. Any inverse
> limit (or for that matter any direct limit), if it exists, is defined
> only up to canonical isomorphism.
Perhaps this is problematic in general but I cannot see any *specific* problem
concerning uniqueness of (co)limits in the given case. It seems that defining a
formal theory up to canonical isomorphism is perfectly fine. One may think of
such canonical isos as exchanges of symbols for different symbols. By the way,
do you know any situation where one could reasonably distinguish between
*different* terminal objects (or other (co)limits)? In other words - where would
the usual identification of objects "up to canonical isomorphism" cause a
real problem?
> I continue to be perplexed by the
> idea of using category theory to formulate the Formalization Thesis. The
> fact that faithfulness of a functor is a concept of category theory is,
> it seems to me, a linguistic coincidence with the idea of seeking a
> "faithful translation" into ZFC (or any other formal theory).
It's hardly a *complete* coincidence, because when people invented the term
"faithful functor" they likely had similar intuitions in mind. But, of
course, I agree that any conclusion based only upon terminology would be
ungrounded. My point was different: to stress that the notion of "faithful
translation" or "faithful expression" repeatedly mentioned during the current
discussion allows for a more rigorous mathematical treatment with Category Theory.
> since the notion of isomorphism between categories is (obviously) too
> strict for any serious mathematical use, are two formalizations,
> expressed in the language of category theory, to be regarded as "the
> same" if the categories are equivalent, and if so, is a specification of
> the equivalence between the two categories to be considered part of the
> given data? If C and D are categories and F:C ---> D is an equivalence,
> is the choice of a quasi-inverse G:D ---> C to be part of the given
> data? If so, is the choice of a natural transformation t:GF ===> Id_C,
> which is such that, for every object c of C, t_c:GF(c) ---> c is an
> isomorphism, to be part of the given data? Such specifications are
> frequently necessary in actually using category theoretic notions in
> other areas of mathematics. See, for example, my paper with Larry
> Breen, The Differential Geometry of Gerbes, Math ArXiv math.AG/0106083.
I would rather call them *equivalent* (not the same) and, indeed, consider the
choice of natural transformation you mention as a part of the data. But perhaps
I miss your point again, since I cannot see any direct link between the issue of
formalisation and the problem of what counts as the "same" in categories. I
have a paper about this latter problem: http://arxiv.org/abs/math/0509596 but
I don't discuss formalisation there.
> Can one be more explicit about the "interesting view on the
> relationships between syntax and semantics" provided by the use of
> category theory?
I mean the following. A "logical" category, say, a topos, may be viewed as a
model of its internal language. But the usual view, which somewhat "naturally"
suggests itself and motivates the term "internal", is different: a topos "comes
with" its language to begin with, or perhaps the language of a given topos is
"revealed" as a result of some additional effort. I think this changes
traditional views (dating back to Tarski) on relationships between formal
systems and their models quite a lot. I tried to develop this issue in this
paper: http://arxiv.org/abs/0707.3745
Thank you very much for the interesting reference. (I learnt something about
gerbes from Larry Breen when following his course at the IHP last year.)
More information about the FOM mailing list
What is the largest Twin Primes of this form you can get?
I see you went up to 7919 and found none. Even if you had found one, I think it would be impossible to find the others, as the largest twin primes which differ by 2 so far have 200,700 digits. So, finding twin primes which differ by two and are also a summation of consecutive multiplied primes +- 1 could be impossible, I guess.
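As a rough sketch, here is a brute-force search in Python. Note this interprets the form as the primorial p# ± 1 (the product of the first primes, plus or minus one); the thread's "summation of consecutive multiplied primes" may mean a different form, so treat the interpretation as an assumption.

```python
def is_prime(n):
    """Trial division; fine for the small values used here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def primorial_twins(limit):
    """Primes p <= limit for which p# - 1 and p# + 1 are both prime."""
    hits, product = [], 1
    for n in range(2, limit + 1):
        if is_prime(n):
            product *= n
            if is_prime(product - 1) and is_prime(product + 1):
                hits.append(n)
    return hits

print(primorial_twins(12))   # [3, 5, 11]: the pairs 5&7, 29&31, 2309&2311
```

The next candidate, 13# = 30030, already fails since 30031 = 59 · 509.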
Last edited by Stangerzv (2013-04-16 12:14:01)
Mysteries of Pyramidal Energy - Unexplained - IN SEARCH FOR TRUTH
In 1993, during a visit to the Keops pyramid in Egypt, a French scientist named Voguis noticed the existence of live insects and rodents close to the passage leading to the monumental complex's royal
chamber. However, when he entered the chamber, similar species were dead or preserved. Shortly after, the researcher constructed a pyramid proportional to that of Keops and began a series of
experiments. He placed pieces of flesh outside the construction and others at a level of one third from the base, corresponding to the location of the royal chamber.
The following day he discovered that the pieces of meat placed outside the pyramid had rotted and those inside had been preserved.
This proved the existence of an unknown energy -some call it pyramidal- which provokes such effects.
Ten years later, a Czech geologist placed used razor blades under a steel pyramid - positioned north to south and with the blades at a level of one third from the base - which he later recovered.
Engineer Nils Ponce M.A., head of the environmental geology group affiliated to the Institute of Geology and Paleontology (IGP), Cuba, gave a talk recently about Radiesthesia*.
Just as minerals give off an energy halo, living beings present around them another, known as their bio-energetic field, whose existence is demonstrated by specialized electronic photos and by
operating little metallic bars - the so-called radiesthesia measurement.
"This responds to a type of energy whose origin and nature are still unexplained," Ponce stated.
The Cuban geologist explained that in order to research the effects of this energy, he carried out some experiments using six minerals, among them quartz. He built a steel pyramid on a square base,
whose sides measured 20 centimeters.
Soon after, he wrapped each of the minerals and measured them.
"I later placed each one below the pyramid and gave each one 30 minutes. Then I measured them in eight directions - north, south, east, west, northeast, northwest, southeast and southwest - and the
measurements differed from when they were outside the pyramid," he noted. He added that the minerals exposed to the pyramidal action "were energized, and had increased their radius and magnetic field."
Ponce concluded his research by affirming that pyramids generate a type of energy unknown to date, one that can be measured and at the same time exploited and applied in therapeutic,
industrial and domestic spheres.
He gave the practical example of a dentist working in a dental clinic in the Havana municipality of San Miguel del Padrón. "She placed the spent rotors of the drill under a pyramid she had made
herself, and managed to totally restore them."
Ponce highlighted that from the economic point of view, the pyramids have several advantages since "they can be built in different sizes depending on their intended use." He gave the example of how
in the agricultural sphere, water tanks for organic farms are placed under a pyramid and, after a period ranging from 24 to 48 hours, "the energized water is used for irrigation, subsequently
provoking an increased and vigorous growth in fresh and root vegetables."
In the case of medicine, he explained that the pyramidal method can help to cure arthritis and rheumatism if the patient sleeps underneath a pyramid.
He specified that the fundamental requisite for a pyramid is to construct it of wood or any type of metal -aluminum, copper, steel, iron- with a square base (all equal sides and the edges in the
style of the Egyptian Keops pyramid), facing north-south and to place the object one third of the way upward from the base because the energy is concentrated at that point.
"This method is not at all costly and could have great economic effects."
Does Space Have More Than 3 Dimensions?
The intuitive notion that the universe has three dimensions seems to be an irrefutable fact. After all, we can only move up or down, left or right, in or out. But are these three dimensions all we
need to describe nature? What if there are more dimensions? Would they necessarily affect us? And if they didn't, how could we possibly know about them?
Some physicists and mathematicians investigating the beginning of the universe think they have some of the answers to these questions. The universe, they argue, has far more than three, four, or five
dimensions. They believe it has eleven! But let's step back a moment. How do we know that our universe consists of only three spatial dimensions? Let's take a look at two of these "proofs."
Proof 1: There are five and only five regular polyhedra. A regular polyhedron is defined as a solid figure whose faces are identical polygons - triangles, squares, and pentagons - and which is
constructed so that only two faces meet at each edge. If you were to move from one face to another, you would cross over only one edge. Shortcuts through the inside of the polyhedron that could get
you from one face to another are forbidden. Long ago, the mathematician Leonhard Euler demonstrated an important relation between the number of faces (F), edges (E), and corners (C) for every regular
polyhedron: C - E + F = 2. For example, a cube has 6 faces, 12 edges, and 8 corners while a dodecahedron has 12 faces, 30 edges, and 20 corners. Run these numbers through Euler's equation and the
resulting answer is always two, the same as with the remaining three polyhedra. Only five solids satisfy this relationship - no more, no less.
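Euler's relation is easy to check numerically. A small Python verification over the five regular solids (the corner, edge, and face counts are the standard ones):

```python
# Verify Euler's relation C - E + F = 2 for the five Platonic solids.
solids = {
    "tetrahedron":  (4, 6, 4),     # (corners, edges, faces)
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (c, e, f) in solids.items():
    assert c - e + f == 2, name
    print(f"{name}: {c} - {e} + {f} = {c - e + f}")
```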
Not content to restrict themselves to only three dimensions, mathematicians have generalized Euler's relationship to higher dimensional spaces and, as you might expect, they've come up with some
interesting results. In a world with four spatial dimensions, for example, we can construct only six regular solids. One of them - the "hypercube" - is a solid figure in 4-D space bounded by eight
cubes, just as a cube is bounded by six square faces. What happens if we add yet another dimension to space? Even the most ambitious geometer living in a 5-D world would only be able to assemble three
regular solids. This means that two of the regular solids we know of - the icosahedron and the dodecahedron - have no partners in a 5-D universe.
For those of you who successfully mastered visualizing a hypercube, try imagining what an "ultracube" looks like. It's the five-dimensional analog of the cube, but this time it is bounded by one
hypercube on each of its 10 faces! In the end, if our familiar world were not three-dimensional, geometers would not have found only five regular polyhedra after 2,500 years of searching. They would
have found six (with four spatial dimensions) or perhaps only three (if we lived in a 5-D universe). Instead, we know of only five regular solids. And this suggests that we live in a universe with,
at most, three spatial dimensions.
All right, let's suppose our universe actually consists of four spatial dimensions. What happens? Since relativity tells us that we must also consider time as a dimension, we now have a space-time
consisting of five dimensions. A consequence of 5-D space-time is that gravity has freedom to act in ways we may not want it to.
Proof 2: To the best available measurements, gravity follows an inverse square law; that is, the gravitational attraction between two objects rapidly diminishes with increasing distance. For example,
if we double the distance between two objects, the force of gravity between them becomes 1/4 as strong; if we triple the distance, the force becomes 1/9 as strong, and so on. A five-dimensional
theory of gravity introduces additional mathematical terms to specify how gravity behaves. These terms can have a variety of values, including zero. If they were zero, however, this would be the same
as saying that gravity requires only three space dimensions and one time dimension to "give it life." The fact that the Voyager spacecraft could cross billions of miles of space over several years
and arrive within a few seconds of their predicted times is a beautiful demonstration that we do not need extra spatial dimensions to describe motions in the Sun's gravitational field.
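The inverse-square scaling described above can be illustrated numerically (the satellite mass and distances below are arbitrary illustrative values; G and the Earth's mass are the standard figures):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity(m1, m2, r):
    """Newtonian attraction between two point masses separated by r."""
    return G * m1 * m2 / r**2

# Doubling the separation gives 1/4 the force; tripling gives 1/9.
f = gravity(5.972e24, 1000.0, 7.0e6)
print(gravity(5.972e24, 1000.0, 1.4e7) / f)   # 0.25
print(gravity(5.972e24, 1000.0, 2.1e7) / f)
```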
From the above geometric and physical arguments, we can conclude (not surprisingly) that space is three-dimensional - on scales ranging from that of everyday objects to at least that of the solar
system. If this were not the case, then geometers would have found more than five regular polyhedra and gravity would function very differently than it does - Voyager would not have arrived on time.
Okay, so we've determined that our physical laws require no more than the three spatial dimensions to describe how the universe works. Or do they? Is there perhaps some other arena in the physical
world where multidimensional space would be an asset rather than a liability?
Since the 1920s, physicists have tried numerous approaches to unifying the principal natural interactions: gravity, electromagnetism, and the strong and weak forces in atomic nuclei. Unfortunately,
physicists soon realized that general relativity in a four-dimensional space-time does not have enough mathematical "handles" on which to hang the frameworks for the other three forces. Between 1921
and 1927, Theodor Kaluza and Oskar Klein developed the first promising theory combining gravity and electromagnetism. They did this by extending general relativity to five dimensions. For most of us,
general relativity is mysterious enough in ordinary four-dimensional space-time. What wonders could lie in store for us with this extended universe?
General relativity in five dimensions gave theoreticians five additional quantities to manipulate beyond the 10 needed to adequately define the gravitational field. Kaluza and Klein noticed that four
of the five extra quantities could be identified with the four components needed to define the electromagnetic field. In fact, to the delight of Kaluza and Klein, these four quantities obeyed the
same types of equations as those derived by Maxwell in the late 1800s for electromagnetic radiation! Although this was a promising start, the approach never really caught on and was soon buried by
the onrush of theoretical work on the quantum theory of electromagnetic force. It was not until work on supergravity theory began in 1975 that Kaluza and Klein's method drew renewed interest. Its
time had finally come.
What do theoreticians hope to gain by stretching general relativity beyond the normal four dimensions of space-time? Perhaps by studying general relativity in a higher-dimensional formulation, we can
explain some of the constants needed to describe the natural forces. For instance, why is the proton 1836 times more massive than the electron? Why are there only six types of quarks and leptons? Why
are neutrinos massless? Maybe such a theory can give us new rules for calculating the masses of fundamental particles and the ways in which they affect one another. These higher-dimensional
relativity theories may also tell us something about the numbers and properties of a mysterious new family of particles - the Higgs bosons - whose existence is predicted by various cosmic unification
schemes. (See "The Decay of the False Vacuum," ASTRONOMY, November 1983.)
These expectations are not just the pipedreams of physicists - they actually seem to develop as natural consequences of certain types of theories studied over the last few years. In 1979, John Taylor
at King's College in London found that some higher-dimensional formalisms can give predictions for the maximum mass of the Higgs bosons (around 76 times that of the proton). As they now stand,
unification theories can do no more than predict the existence of these particles - they cannot provide specific details about their physical characteristics. But theoreticians may be able to pin
down some of these details by using extended theories of general relativity.
Experimentally, we know of six leptons: the electron, the muon, the tauon, and their three associated neutrinos. The most remarkable prediction of these extended relativity schemes, however, holds
that the number of leptons able to exist in a universe is related to the number of dimensions of space-time. In a 6-D space-time, for example, only one lepton - presumably the electron - can exist.
In a 10-D space-time, four leptons can exist - still not enough to accommodate the six we observe. In a 12-D space-time, we can account for all six known leptons - but we also acquire two additional
leptons that have not yet been detected. Clearly, we would gain much on a fundamental level if we could increase the number of dimensions in our theories just a little bit.
How many additional dimensions do we need to consider in order to account for the elementary particles and forces that we know of today? Apparently we require at least one additional spatial
dimension for every distinct "charge" that characterizes how each force couples to matter. For the electromagnetic force, we need two electric charges: positive and negative. For the strong force
that binds quarks together to form, among other things, protons and neutrons, we need three "color" charges - red, blue, and green. Finally, we need two "weak" charges to account for the weak nuclear
force. If we add a spatial dimension for each of these charges, we end up with a total of seven extra dimensions. The properly extended theory of general relativity we seek is one with an
11-dimensional space-time, at the very least. Think of it - space alone must have at least 10 dimensions to accommodate all the fields known today.
Of course, these additional dimensions don't have to be anything like those we already know about. In the context of modern unified field theory, these extra dimensions are, in a sense, internal to
the particles themselves - a "private secret," shared only by particles and the fields that act on them! These dimensions are not physically observable in the same sense as the three spatial
dimensions we experience; they stand in relation to the normal three dimensions of space much like space stands in relation to time.
With today's veritable renaissance in finding unity among the forces and particles that compose the cosmos, some by methods other than those we have discussed, these new approaches lead us to
remarkably similar conclusions. It appears that a four-dimensional space-time is simply not complex enough for physics to operate as it does.
We know that particles called bosons mediate the natural forces. We also know that particles called fermions are affected by these forces. Members of the fermion family go by the familiar names of
electron, muon, neutrino, and quark; bosons are the less well known graviton, photon, gluon, and intermediate vector bosons. Grand unification theories developed since 1975 now show these particles
to be "flavors" of a more abstract family of superparticles - just as the muon is another type of electron. This is an expression of a new kind of cosmic symmetry - dubbed supersymmetry, because it
is all-encompassing. Not only does it include the force-carrying bosons, but it also includes the particles on which these forces act. There also exists a corresponding force to help nature maintain
supersymmetry during the various interactions. It's called supergravity. Supersymmetry theory introduces two new types of fundamental particles - gravitinos and photinos. The gravitino has the
remarkable property of mathematically moderating the strength of various kinds of interactions involving the exchange of gravitons. The photino, cousin of the photon, may help account for the
"missing mass" in the universe.
Supersymmetry theory is actually a complex of eight different theories, stacked atop one another like the rungs of a ladder. The higher the rung, the larger is its complement of allowed fermion and
boson particle states. The "roomiest" theory of all seems to be SO(8), (pronounced ess-oh-eight), which can hold 99 different kinds of bosons and 64 different kinds of fermions. But SO(8) outdoes its
subordinate, SO(7), by only one extra dimension and one additional particle state. Since SO(8) is identical to SO(7) in all its essential features, we'll discuss SO(7) instead. However, we know of
far more than the 162 types of particles that SO(7) can accommodate, and many of the predicted types have never been observed (like the massless gravitino). SO(7) requires seven internal dimensions
in addition to the four we recognize - time and the three "every day" spatial dimensions. If SO(7) at all mirrors reality, then our universe must have at least 11 dimensions! Unfortunately, it has
been demonstrated by W. Nahm at the European Center for Nuclear Research in Geneva, Switzerland, that supersymmetry theories for space-times with more than 11 dimensions are theoretically impossible.
SO(7) evidently has the largest number of spatial dimensions possible, but it still doesn't have enough room to accommodate all known types of particles.
It is unclear where these various avenues of research lead. Perhaps nowhere. There is certainly ample historical precedent for ideas that were later abandoned because they turned out to be conceptual
dead-ends. Yet what if they turn out to be correct at some level? Did our universe begin its life as some kind of 11-dimensional "object" which then crystallized into our four-dimensional cosmos?
Although these internal dimensions may not have much to do with the real world at the present time, this may not always have been the case. E. Cremmer and J. Scherk of I'Ecole Normale Superieure in
Paris have shown that just as the universe went through phase transitions in its early history when the forces of nature became distinguishable, the universe may also have gone through a phase
transition when its dimensionality changed. Presumably matter has something like four external dimensions (the ones we encounter every day) and something like seven internal dimensions. Fortunately for us,
these seven extra dimensions don't reach out into the larger 4-D realm where we live. If they did, a simple walk through the park might become a veritable obstacle course, littered with wormholes in
space and who knows what else!
Alan Chodos and Steven Detweiler of Yale University have considered the evolution of a universe that starts out being five-dimensional. They discovered that while the universe eventually does evolve
to a state where three of the four spatial dimensions expand to become our world at large, the extra fourth spatial dimension shrinks to a size of 10^-31 centimeter by the present time. The fifth
dimension to the universe has all but vanished and is 20 powers of 10 - 100 billion billion times - smaller than the size of a proton. Although the universe appears four-dimensional in space-time,
this perception is accidental due to our large size compared to the scale of the other dimensions. Most of us think of a dimension as extending all the way to infinity, but this isn't the full story.
For example, if our universe is really destined to re-collapse in the distant future, the three-dimensional space we know today is actually limited itself - it will eventually possess a maximum,
finite size. It just so happens that the physical size of human beings forces us to view these three spatial dimensions as infinitely large.
It is not too hard to reconcile ourselves to the notion that the fifth (or sixth, or eleventh) dimension could be smaller than an atomic nucleus - indeed, we can probably be thankful that this is the
Understanding Biostatistics
The 2x2 Table
Most of biostatistics can be understood in the context of the 2x2 table. It is absolutely crucial to understand, and that plus a general definition of what you need to calculate will usually allow
you to derive the formula. The 2x2 table generally has the "truth" along the top, in columns, with whether or not the person has the disease or whether or not they improved from a treatment. Along
the side, in rows, is the exposure status or test result. This defines four categories of people. For the example of a screening test, there are people who have the disease and tested positive (true
positives, box A), who don't have the disease but tested positive (false positives or type I error, box B), who have the disease but tested negative (false negatives or type II error, box C), and who
don't have the disease and tested negative (true negatives, box D). Marginals, or the total in a row or column, can be found by adding across a row or down a column, and the total number of people
observed is found by adding together the marginals from
the rows or columns.
Example Problem
We can incorporate the concepts discussed above by considering this problem: 1% of a population of 10,000 people have disease X. If the sensitivity of a screening test for disease X is 95% and the
specificity is 80%, what is the positive predictive value?
The easiest way to work out the problem is to draw out a 2x2 table. The total population is 10,000 and 1% have the disease, so the + disease marginal is 100, and the - disease column marginal is
9,900 (everyone else). The sensitivity is 95%, meaning we will detect 95 out of 100 of the people with the disease, so 95 goes in box A, and the 5 we missed (false negatives) go in box C. Specificity
is 80%, meaning that we will correctly get a negative result in 80% of the 9,900 without disease - alternately, we will have 20% false positives. This means that 7920 people go in box D, true
negatives, and 1980 go in box B, false positives.
We have now filled in the 2x2 square and can calculate the PPV. Remember, PPV is the likelihood that someone who tests positive actually has the disease. That is the number of people in box A, true
positives, over the total number of people who tested positive, the marginal for the positive test row. 95/(1980+95)=4.6%. This means that less than 5% of people with a positive test result actually
have the disease. Surprised? These numbers are still very good for a screening test - it detects 19/20 people with the disease, and the false positives can hopefully be weeded out with (more
expensive) diagnostic tests and procedures. Other factors like the cost of the test, progression and potential treatments of the disease would determine whether the test was overall useful as a
screening test.
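The arithmetic in the worked example above can be reproduced in a few lines of Python. The `screen` helper is hypothetical, written just for this illustration:

```python
def screen(population, prevalence, sensitivity, specificity):
    """Fill the 2x2 table; returns (A, B, C, D) = (TP, FP, FN, TN)."""
    diseased = round(population * prevalence)
    healthy = population - diseased
    a = round(sensitivity * diseased)    # true positives  (box A)
    c = diseased - a                     # false negatives (box C)
    d = round(specificity * healthy)     # true negatives  (box D)
    b = healthy - d                      # false positives (box B)
    return a, b, c, d

a, b, c, d = screen(10_000, 0.01, 0.95, 0.80)
print(a, b, c, d)                 # 95 1980 5 7920
print(round(a / (a + b), 3))      # PPV = 95/2075, about 0.046
```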
For Risk Factors and Treatments
• Relative Risk = [A/(A+B)]/[C/(C+D)] = [A•(C+D)]/[C•(A+B)]
□ Rate of disease among the exposed / rate of disease among the unexposed
□ Significance is in figuring out to what extent exposure to a factor increases or decreases risk of a disease.
• Odds Ratio = (A/C)/(B/D) = AD/BC
□ Used in case control studies as an estimate for relative risk, because the "prevalence" of a disease in a case-control study is set by the researchers
□ Odds Ratio is also important for understanding how bias will affect study results (especially of case-control studies).
□ Figure out which cells the bias would increase or decrease and plug them into this formula to figure out whether the result would increase or decrease the odds ratio found.
☆ For example, a higher rate of patients remembering an exposure in the case vs control groups would shift patients from cell C to cell A, making the odds ratio higher than it should be.
(This is an example of recall bias)
• Attributable Risk/Benefit = A/(A+B) – C/(C+D)
□ Rate of disease among the exposed – rate of disease among the unexposed
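These three formulas can be sketched directly, using the cell layout from the table (A = exposed with disease, B = exposed without, C = unexposed with disease, D = unexposed without). The cohort numbers below are made up for illustration:

```python
def relative_risk(a, b, c, d):
    """Rate of disease among the exposed / rate among the unexposed."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """Cross-product ratio AD/BC, the case-control estimate of RR."""
    return (a * d) / (b * c)

def attributable_risk(a, b, c, d):
    """Risk difference between exposed and unexposed groups."""
    return a / (a + b) - c / (c + d)

# Hypothetical cohort: 40 of 200 exposed and 20 of 200 unexposed get sick.
print(relative_risk(40, 160, 20, 180))    # 2.0 - exposure doubles the risk
print(odds_ratio(40, 160, 20, 180))       # 2.25
print(attributable_risk(40, 160, 20, 180))
```

Note that the odds ratio (2.25) overstates the relative risk (2.0) slightly here; the two converge as the disease becomes rarer.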
For Screening Tests
• Sensitivity = A/(A+C)
□ How often the test detects a disease when it is present
• Specificity = D/(D+B)
□ How often a test detects the absence of a disease when it is absent
• Positive Predictive Value = A/(A+B)
□ How often individuals w/ a positive test truly have the disease
□ Higher prevalence elevates PPV, lower prevalence lowers PPV
• Negative Predictive Value = D/(D+C)
□ How often individuals w/ a negative test truly don't have the disease
□ Lower prevalence elevates NPV, higher prevalence lowers NPV
• Positive Likelihood Ratio
□ Sensitivity/(1 - Specificity), i.e. the true-positive rate divided by the false-positive rate
□ A measure of the overall accuracy of a diagnostic test (higher number = better test)
□ PLR can be multiplied by the pre-test odds (not probability) to determine the post-test odds - that is, how much more or less likely the patient is to have the disease given the test result.
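A sketch of that likelihood-ratio calculation, reusing the numbers from the worked screening example above (sensitivity 95%, specificity 80%, prevalence 1%):

```python
sens, spec, prevalence = 0.95, 0.80, 0.01

plr = sens / (1 - spec)                   # positive likelihood ratio = 4.75
pre_test_odds = prevalence / (1 - prevalence)
post_test_odds = pre_test_odds * plr      # odds, not probability
post_test_prob = post_test_odds / (1 + post_test_odds)
print(round(post_test_prob, 3))           # ~0.046, matching the PPV above
```

The post-test probability agrees with the PPV computed from the 2x2 table (95/2075), as it must.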
For Errors in Statistical Studies
Denver Math Tutor
Find a Denver Math Tutor
I am an expert in the LSAT, GMAT and GRE for graduate school admissions and the SAT and ACT for college admissions. This is my full-time career and I work hard to provide the best tutoring
possible. I have an honors degree from Brown University and an MBA from CU-Boulder with the number 1 rank in my class, as well as 99th percentile scores on these tests.
14 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...With each student I've worked with, my goal has been the same: to work myself out of a job. It's my goal to get a student through their current and specific difficulties. Also, I recognize that
things won't necessarily be "smooth sailing" simply because a student has gotten beyond the current sticking spot.
9 Subjects: including trigonometry, algebra 1, algebra 2, geometry
...In the past I have been educated or have worked in fields relating to: computer science, business, public speaking, DJ Entertainment, Video production, photography, algebra, geometry,
statistics, physics, auto repair, construction, and more. I have been a part of the Boy Scouts of America (rank ...
22 Subjects: including algebra 1, ACT Math, SAT math, geometry
...Thus, I have some idea of how people actually apply math in the real world. During my time as a Product Developer, I have sporadically tutored high school level math. In my experience tutoring,
I have tutored first-generation, disabled, and low income college students, student athletes, and high schoolers in all levels of math, statistics, and economics.
20 Subjects: including discrete math, differential equations, drawing, drums
...When things get seriously off track, I work to help students to see where they went wrong and to identify any misconceptions they may have that are causing them problems. My goal is to help my
students to reach a point where my support is no longer needed.I have one and a half years of experienc...
10 Subjects: including geometry, algebra 1, algebra 2, calculus
Gravity, Kepler's law
All orbit problems should begin with Fc = Fg (the centripetal force is provided by the gravitational force). Putting in the details,
mv^2/R = GmM/R^2 where m is the mass of the satellite, M of the Earth.
or R*v^2 = GM
This is a Kepler's Law for circular orbits. You can use it to answer question 1.
For #2, if the force of gravity was negligible the satellite would move in a straight line rather than in circular motion. Being in a circular orbit means that the force of gravity is exactly right
to provide the centripetal force necessary to remain in circular motion. The force accelerates the satellite (and the people inside) toward the center of the Earth with a = v^2/R. This is exactly the
same thing that happens when you jump out of an airplane with no parachute (and no significant air resistance). Except that there is a tangential velocity that makes the satellite avoid hitting the
Earth as it falls.
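As a sketch, the relation R*v^2 = GM gives the circular-orbit speed directly. The altitude below, and the standard values of G and the Earth's mass, are assumed for illustration:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg

def orbital_speed(r):
    """Circular-orbit speed from R*v^2 = G*M (r measured from Earth's center)."""
    return math.sqrt(G * M / r)

# Low Earth orbit, ~400 km above the surface (R_earth ~ 6.371e6 m):
v = orbital_speed(6.371e6 + 4.0e5)
print(round(v), "m/s")   # roughly 7700 m/s
```

Quadrupling the orbital radius halves the speed, another face of the inverse-square law.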
Is it true that 4 of the M's cancel because they are opposite? So the only mass that affects m is the M right above it, right?
No. That would apply to 2 of the M's if they are at the extremities of the semicircular arc. The others all have a component that doesn't cancel out. You have to do some serious work on this, using F
= GmM/R^2 for each M. Don't forget the angles as you work out the horizontal and vertical components for each F.
Summary: ENO related techniques in 2D
In the 80's, ENO (essentially non-oscillatory) schemes were developed by A. Harten. These schemes produce smooth reconstructions of functions from discrete data, e.g. point values or cell averages of a function. While similar linear schemes only allow good reconstructions in smooth regions, sufficiently far away from singularities of the underlying function, the nonlinear ENO procedure works well in a much larger region. An additional technique, also proposed by Harten, is the so-called subcell resolution technique, which enlarges the region of good approximation for an ENO scheme to an optimal extent (but, on the other hand, also brings up serious questions of stability). This good resolution of singularities makes ENO and subcell resolution techniques especially interesting for use in numerical schemes for conservation laws (mainly in connection with finite volume schemes).

Very much in the spirit of wavelet theory, ENO schemes can also be used to construct nonlinear multiresolution analyses. Then, the interesting features of ENO and subcell resolution promise an even smaller number of relevant detail coefficients close to the singularities of a function. Equipped with efficient coding strategies, these ENO multiresolution analyses seem to be very interesting also for the compression of images, since the edges in an image correspond to curves of singularities of a function in 2D.
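As a rough illustration of the stencil-selection idea behind ENO interpolation of point values: starting from one point, the stencil is grown one point at a time, extending to whichever side has the smaller Newton divided difference in magnitude, so that the stencil avoids crossing a discontinuity. This is a minimal sketch, not Harten's full scheme; the interface and the recursive divided-difference helper are my own choices:

```python
def divided_difference(x, f):
    """Newton divided difference f[x_0, ..., x_k] (naive recursion)."""
    if len(f) == 1:
        return f[0]
    return (divided_difference(x[1:], f[1:]) -
            divided_difference(x[:-1], f[:-1])) / (x[-1] - x[0])

def eno_stencil(x, f, i, order):
    """Indices of an `order`-point ENO stencil grown from point i.

    Assumes order <= len(x); at the domain boundary only the feasible
    side is extended.
    """
    lo = hi = i
    while hi - lo + 1 < order:
        left = (abs(divided_difference(x[lo - 1:hi + 1], f[lo - 1:hi + 1]))
                if lo > 0 else float("inf"))
        right = (abs(divided_difference(x[lo:hi + 2], f[lo:hi + 2]))
                 if hi < len(x) - 1 else float("inf"))
        if left < right:
            lo -= 1      # smoother data to the left: extend leftward
        else:
            hi += 1      # otherwise extend rightward
    return list(range(lo, hi + 1))

# A step function with a jump between indices 3 and 4: stencils on either
# side of the jump stay on their own smooth side.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
fs = [0, 0, 0, 0, 1, 1, 1, 1]
print(eno_stencil(xs, fs, 2, 3))   # [1, 2, 3] - stays left of the jump
print(eno_stencil(xs, fs, 4, 3))   # [4, 5, 6] - stays right of the jump
```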
Master of Arts
Master of Arts (MAM)
The Master of Arts Program in Mathematics was established in 2002. Classes in this program were first offered in fall 2003.
The primary purpose of the program is to prepare students to teach mathematics at the secondary school and junior/community college levels. The program also provides advanced degree credentials for
teachers who wish to take on supervisory positions in mathematics or in mathematics administration.
All pertinent regulations set forth in the Graduate Studies Bulletin and the Bulletin of the College of Natural Sciences and Mathematics must be observed. The student must consult the departmental
Director of Graduate Studies prior to beginning his/her graduate program so that proper records may be established within the department.
It is the student’s responsibility to be informed about current degree requirements. It is the joint responsibility of the student and the student’s advisor to maintain communications and to track
the student’s progress toward meeting those requirements.
All of the courses in the program are offered online, and the entire program can be completed in that format. It is also possible to take approved on-campus courses as alternatives to online classes.
There is a regular schedule of online classes each semester, including the summer sessions. As a result, there is little difficulty in combining a full-time teaching position with the program’s
course work.
To be admitted to the program, a student must have completed a bachelor’s degree with a 3.0 GPA over the last 60 hours of all course work and should have a good background in mathematics. A student
need not have majored in mathematics to be admitted. However, it is expected that the student has completed a standard 3-semester calculus sequence and has had at least 9 semester hours of
mathematics at the junior or senior level, preferably in courses such as abstract algebra, linear algebra, advanced calculus, differential equations, or geometry.
The Graduate Record Examination (GRE) is required for admission.
The program requires 33 semester hours of course work with at least 24 semester hours at the 5000 level or above, and including:
• A minimum of 21 semester hours in mathematics with at least 15 semester hours at the 5000 level or above.
• A 3-semester hour Master’s tutorial.
• A maximum of 9 semester hours of approved elective course work.
MATH 5310: HISTORY OF MATHEMATICS. Prerequisites: Three semesters of calculus, or consent of instructor. Mathematics of the ancient world, classical Greek mathematics, the development of calculus,
notable mathematicians and their accomplishments.
MATH 5330: ABSTRACT ALGEBRA. Prerequisites: Three semesters of calculus, or consent of instructor. The theory of groups is used to discuss the most important concepts and constructions in abstract algebra.
MATH 5331: LINEAR ALGEBRA WITH APPLICATIONS. Prerequisites: Three semesters of calculus, or consent of instructor. Systems of linear equations, matrices, vector spaces, linear independence and linear
dependence, determinants, eigenvalues; applications of the linear algebra concepts will be illustrated by a variety of projects.
MATH 5332: DIFFERENTIAL EQUATIONS. Prerequisites: MATH 5331 or consent of instructor. Linear and nonlinear systems of ordinary differential equations; existence, uniqueness and stability of
solutions; initial value problems; higher dimensional systems; Laplace transforms. Theory and applications illustrated by computer assignments and by projects.
MATH 5333: ANALYSIS. Prerequisites: Three semesters of calculus, or consent of instructor. A survey of the concepts of limit, continuity, differentiation and integration for functions of one variable
and functions of several variables; selected applications are used to motivate and to illustrate the concepts.
MATH 5334: COMPLEX ANALYSIS. Prerequisites: MATH 5333 or consent of instructor. Complex numbers, holomorphic functions, the Cauchy theorem and the Cauchy integral formula, the calculus of residues.
MATH 5336: DISCRETE MATHEMATICS. Prerequisites: Three semesters of calculus, or consent of instructor. Discrete Mathematics and Its Applications, Kenneth H. Rosen, fifth edition. McGraw Hill,
Chapters 1, 3, 7, plus the Zermelo–Fraenkel axioms and equivalence of sets.
MATH 5337: MODELS OF COMPUTATION. Prerequisites: Three semesters of calculus, or consent of instructor. The algebra of Boolean functions, logic gates, languages and grammars, finite state machines,
the Kleene algebra of regular sets, Turing machines and the halting problem.
MATH 5350: INTRODUCTION TO DIFFERENTIAL GEOMETRY. Prerequisites: Three semesters of calculus, or consent of instructor. Multi-variable calculus, linear algebra, and ordinary differential equations
are used to study the geometry of curves and surfaces in 3-space. Topics include: Curves in the plane and in 3-space, curvature, Frenet frame, surfaces in 3-space, the first and second fundamental
form, curvature of surfaces, Gauss's theorema egregium, and minimal surfaces.
MATH 5379: AXIOMATIC GEOMETRY. Prerequisites: Three semesters of calculus, or consent of instructor. A review of the axiomatic approach to Euclidean Geometry and an introduction to non-Euclidean
Geometries. Some finite geometries, Hyperbolic Geometry and Spherical Geometry are introduced. A student version of The Geometer’s Sketchpad is required for the homework assignments.
MATH 5382: PROBABILITY. Prerequisites: Three semesters of calculus and one semester of linear algebra, or consent of instructor. Sample spaces, events and axioms of probability; basic discrete and
continuous distributions and their relationships; Markov chains, Poisson processes and renewal processes; applications.
MATH 5383: NUMBER THEORY. Prerequisite: Three semesters of calculus, or consent of instructor. Divisibility and factorization, linear Diophantine equations, congruences and applications, solving
linear congruences, primes of special forms, the Chinese remainder theorem, multiplicative orders, the Euler function, primitive roots, quadratic congruences, representation problems and continued fractions.
MATH 5385: STATISTICS. Prerequisites: Three semesters of calculus, or consent of instructor. Data collection and types of data, descriptive statistics, probability, estimation, model assessment,
regression, analysis of categorical data, analysis of variance. Computing assignments using a prescribed software package (e.g., EXCEL, Minitab) will be given.
MATH 5386: REGRESSION AND LINEAR MODELS. Prerequisites: Three semesters of calculus, one semester of linear algebra, and MATH 5385, or consent of instructor. Simple and multiple linear regression,
linear models, inferences from the normal error model, regression diagnostics and robust regression, computing assignments with Matlab, R, Minitab, or SAS.
MATH 5389: SURVEY OF UNDERGRADUATE MATHEMATICS: Prerequisites: Three semesters of calculus, or consent of instructor. A review and consolidation of undergraduate courses in linear algebra,
differential equations, analysis, probability, and abstract algebra. Students may not receive credit for both MATH 4389 and MATH 5389.
Courses under development:
MATH 53xx: MATHEMATICAL MODELING
MATH 53xx: TECHNOLOGY IN MATHEMATICS CLASSES
MATH 53xx: ANALYSIS II
MATH 53xx: SCIENTIFIC COMPUTING WITH EXCEL
Course Groups:
I. Algebra Courses
MATH 5330: Abstract Algebra
MATH 5331: Linear Algebra
MATH 5336: Discrete Mathematics
MATH 5383: Number Theory
II. Analysis Courses
MATH 5333: Analysis
MATH 5350: Introduction to Differential Geometry
MATH 5334: Complex Analysis
MATH 53XX: Analysis II
III. Probability & Statistics
MATH 5382: Probability
MATH 5385: Statistics
MATH 5386: Regression Analysis
IV. Applied Mathematics
MATH 5332: Differential Equations
MATH 5337: Models of Computation
MATH 53XX: Mathematical Modeling
MATH 53XX: Technology in Mathematics Classes
MATH 53XX: Scientific Computing with Excel
V. Other Courses
MATH 5310: History of Mathematics
MATH 5379: Axiomatic Geometry
MATH 5389: Survey of Mathematics
The program requires 33 semester hours of course work with at least 24 semester hours at the 5000 level or above, and including:
• A minimum of 21 semester hours in mathematics with at least 15 semester hours at the 5000 level or above.
• Completion of at least one course in each of the groups: Algebra, Analysis, Probability & Statistics, and Applied Mathematics.
• A 3-semester hour Master’s tutorial.
• A maximum of 9 semester hours of approved elective course work.
The Department of Mathematics typically offers four or five of these courses each semester during the academic year, and at least four courses during the summer sessions. It is possible to complete
the program in two years by taking two courses each semester for two academic years and one course in the corresponding summer session.
In econometrics, generalized method of moments (GMM) is one estimation methodology that can be used to calculate instrumental variable (IV) estimates. Performing this calculation in R, for a linear
IV model, is trivial. One simply uses the gmm() function in the excellent gmm package like an lm() or ivreg() function. The gmm() function will estimate the regression and return model coefficients
and their standard errors. An interesting feature of this function, and GMM estimators in general, is that they contain a test of over-identification, often dubbed Hansen’s J-test, as an inherent
feature. Therefore, in cases where the researcher is lucky enough to have more instruments than endogenous regressors, they should examine this over-identification test post-estimation.
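To see where Hansen's J comes from, the sketch below computes a two-step GMM estimate and the J-statistic by hand in base R for a simulated over-identified linear IV model. All variable names and parameter values here are illustrative; in practice gmm() does this for you and reports the test in its summary() output.

```r
set.seed(1)
n  = 500
z1 = rnorm(n); z2 = rnorm(n)        # two instruments, one endogenous regressor
u  = rnorm(n)
x1 = z1 + z2 + 0.5*u + rnorm(n)     # endogenous: correlated with u
y  = 1 + x1 + u
X  = cbind(1, x1)                   # regressors (constant + endogenous)
Z  = cbind(1, z1, z2)               # instruments: 3 moments > 2 parameters

# linear GMM estimate for a given weighting matrix W:
# b = (X'Z W Z'X)^{-1} X'Z W Z'y
gmm_step = function(W) {
  XZ = t(X) %*% Z
  solve(XZ %*% W %*% t(XZ), XZ %*% W %*% t(Z) %*% y)
}
b1 = gmm_step(diag(ncol(Z)))        # step 1: identity weights
e1 = as.vector(y - X %*% b1)
S  = crossprod(Z * e1) / n          # estimated moment covariance
b2 = gmm_step(solve(S))             # step 2: efficient weights
e2 = as.vector(y - X %*% b2)
gbar = crossprod(Z, e2) / n
J = n * t(gbar) %*% solve(S) %*% gbar  # Hansen's J-statistic
```

With three moment conditions and two parameters, J is asymptotically chi-squared with one degree of freedom under the null that the over-identifying restriction is valid.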
While the gmm() function in R is very flexible, it does not (yet) allow the user to estimate a GMM model that produces standard errors and an over-identification test that is corrected for
clustering. Thankfully, the gmm() function is flexible enough to allow for a simple hack that works around this small shortcoming. For this, I have created a function called gmmcl(), and you can find
the code below. This is a function for a basic linear IV model. This code uses the gmm() function to estimate both steps in a two-step feasible GMM procedure. The key to allowing for clustering is to
adjust the weights matrix after the second step. Interested readers can find more technical details regarding this approach here. After defining the function, I show a simple application in the code below.
gmmcl = function(formula1, formula2, data, cluster){
library(plyr) ; library(gmm)
# create data.frame
data$id1 = 1:dim(data)[1]
formula3 = paste(as.character(formula1)[3],"id1", sep=" + ")
formula4 = paste(as.character(formula1)[2], formula3, sep=" ~ ")
formula4 = as.formula(formula4)
formula5 = paste(as.character(formula2)[2],"id1", sep=" + ")
formula6 = paste(" ~ ", formula5, sep=" ")
formula6 = as.formula(formula6)
frame1 = model.frame(formula4, data)
frame2 = model.frame(formula6, data)
dat1 = join(data, frame1, type="inner", match="first")
dat2 = join(dat1, frame2, type="inner", match="first")
# matrix of instruments
Z1 = model.matrix(formula2, dat2)
# step 1
gmm1 = gmm(formula1, formula2, data = dat2,
vcov="TrueFixed", weightsMatrix = diag(dim(Z1)[2]))
# clustering weight matrix
cluster = factor(dat2[,cluster])
u = residuals(gmm1)
estfun = sweep(Z1, MARGIN=1, u,'*')
u = apply(estfun, 2, function(x) tapply(x, cluster, sum))
S = 1/(length(residuals(gmm1)))*crossprod(u)
# step 2
gmm2 = gmm(formula1, formula2, data=dat2,
vcov="TrueFixed", weightsMatrix = solve(S))
return(gmm2)
}
# generate data.frame
n = 100
z1 = rnorm(n)
z2 = rnorm(n)
x1 = z1 + z2 + rnorm(n)
y1 = x1 + rnorm(n)
id = 1:n
data = data.frame(z1 = c(z1, z1), z2 = c(z2, z2), x1 = c(x1, x1),
y1 = c(y1, y1), id = c(id, id))
summary(gmmcl(y1 ~ x1, ~ z1 + z2, data = data, cluster = "id"))
Within Group Index in R
There are many occasions in my research when I want to create a within group index for a data frame. For example, with demographic data for siblings one might want to create a birth order index.
The below illustrates a simple example of how one can create such an index in R.
# two families/groups 1 and 2
# with random ages
data = data.frame(group = c(rep(1,5),rep(2,5)), age = rpois(10,10))
# birth order
# use rank function with negative age for descending order
data$bo = unlist(by(data, data$group,
function(x) rank(-x$age, ties.method = "first")))
Detecting Weak Instruments in R
Any instrumental variables (IV) estimator relies on two key assumptions in order to identify causal effects:
1. That the excluded instrument or instruments only effect the dependent variable through their effect on the endogenous explanatory variable or variables (the exclusion restriction),
2. That the correlation between the excluded instruments and the endogenous explanatory variables is strong enough to permit identification.
The first assumption is difficult or impossible to test, and sheer belief plays a big part in what can be perceived to be a good IV. An interesting paper was published last year in the Review of
Economics and Statistics by Conley, Hansen, and Rossi (2012), wherein the authors provide a Bayesian framework that permits researchers to explore the consequences of relaxing exclusion restrictions
in a linear IV estimator. It will be interesting to watch research on this topic expand in the coming years.
Fortunately, it is possible to quantitatively measure the strength of the relationship between the IVs and the endogenous variables. The so-called weak IV problem was underlined in a paper by Bound,
Jaeger, and Baker (1995). When the relationship between the IVs and the endogenous variable is not sufficiently strong, IV estimators do not correctly identify causal effects.
The Bound, Jaeger, and Baker paper represented a very important contribution to the econometrics literature. As a result of this paper, empirical studies that use IV almost always report some measure
of the instrument strength. A secondary result of this paper was the establishment of a literature that evaluates different methods of testing for weak IVs. Staiger and Stock (1997) furthered this
research agenda, formalizing the relevant asymptotic theory and recommending the now ubiquitous “rule-of-thumb” measure: a first-stage partial-F test of less than 10 indicates the presence of weak instruments.
In the code below, I have illustrated how one can perform these partial F-tests in R. The importance of clustered standard errors has been highlighted on this blog before, so I also show how the
partial F-test can be performed in the presence of clustering (and heteroskedasticity too). To obtain the clustered variance-covariance matrix, I have adapted some code kindly provided by Ian Gow.
For completeness, I have displayed the clustering function at the end of the blog post.
# load packages
library(AER) ; library(foreign) ; library(mvtnorm)
# clear workspace and set seed
rm(list = ls()) ; set.seed(12345)
# number of observations
n = 1000
# simple triangular model:
# y2 = b1 + b2x1 + b3y1 + e
# y1 = a1 + a2x1 + a3z1 + u
# error terms (u and e) correlate
Sigma = matrix(c(1,0.5,0.5,1),2,2)
ue = rmvnorm(n, rep(0,2), Sigma)
# iv variable
z1 = rnorm(n)
x1 = rnorm(n)
y1 = 0.3 + 0.8*x1 - 0.5*z1 + ue[,1]
y2 = -0.9 + 0.2*x1 + 0.75*y1 +ue[,2]
# create data
dat = data.frame(z1, x1, y1, y2)
# biased OLS
lm(y2 ~ x1 + y1, data=dat)
# IV (2SLS)
ivreg(y2 ~ x1 + y1 | x1 + z1, data=dat)
# do regressions for partial F-tests
# first-stage:
fs = lm(y1 ~ x1 + z1, data = dat)
# null first-stage (i.e. exclude IVs):
fn = lm(y1 ~ x1, data = dat)
# simple F-test
waldtest(fs, fn)$F[2]
# F-test robust to heteroskedasticity
waldtest(fs, fn, vcov = vcovHC(fs, type="HC0"))$F[2]
# now lets get some F-tests robust to clustering
# generate cluster variable
dat$cluster = 1:n
# repeat dataset 10 times to artificially reduce standard errors
dat = dat[rep(seq_len(nrow(dat)), 10), ]
# re-run first-stage regressions
fs = lm(y1 ~ x1 + z1, data = dat)
fn = lm(y1 ~ x1, data = dat)
# simple F-test
waldtest(fs, fn)$F[2]
# ~ 10 times higher!
# F-test robust to clustering
waldtest(fs, fn, vcov = clusterVCV(dat, fs, cluster1="cluster"))$F[2]
# ~ 10 times lower than above (good)
Further “rule-of-thumb” measures are provided in a paper by Stock and Yogo (2005), and it should be noted that a whole battery of weak-IV tests exists (for example, see the Kleibergen–Paap rank Wald F-statistic and the Anderson–Rubin Wald test); one should perform these tests if the presence of weak instruments represents a serious concern.
# R function adapted from Ian Gow's webpage:
# http://www.people.hbs.edu/igow/GOT/Code/cluster2.R.html.
clusterVCV <- function(data, fm, cluster1, cluster2=NULL) {
# Calculation shared by covariance estimates
est.fun <- estfun(fm)
inc.obs <- complete.cases(data[,names(fm$model)])
# Shared data for degrees-of-freedom corrections
N <- dim(fm$model)[1]
NROW <- NROW(est.fun)
K <- fm$rank
# Calculate the sandwich covariance estimate
cov <- function(cluster) {
cluster <- factor(cluster)
# Calculate the "meat" of the sandwich estimators
u <- apply(est.fun, 2, function(x) tapply(x, cluster, sum))
meat <- crossprod(u)/N
# Calculations for degrees-of-freedom corrections, followed
# by calculation of the variance-covariance estimate.
# NOTE: NROW/N is a kluge to address the fact that sandwich uses the
# wrong number of rows (includes rows omitted from the regression).
M <- length(levels(cluster))
dfc <- M/(M-1) * (N-1)/(N-K)
dfc * NROW/N * sandwich(fm, meat=meat)
}
# Calculate the covariance matrix estimate for the first cluster.
cluster1 <- data[inc.obs,cluster1]
cov1 <- cov(cluster1)
if(is.null(cluster2)) {
# If only one cluster supplied, return single cluster
# results
return(cov1)
} else {
# Otherwise do the calculations for the second cluster
# and the "intersection" cluster.
cluster2 <- data[inc.obs,cluster2]
cluster12 <- paste(cluster1,cluster2, sep="")
# Calculate the covariance matrices for cluster2, the "intersection"
# cluster, then then put all the pieces together.
cov2 <- cov(cluster2)
cov12 <- cov(cluster12)
covMCL <- (cov1 + cov2 - cov12)
# Return the two-way cluster-robust variance-covariance
# matrix.
return(covMCL)
}
}
Endogenous Spatial Lags for the Linear Regression Model
Over the past number of years, I have noted that spatial econometric methods have been gaining popularity. This is a welcome trend in my opinion, as the spatial structure of data is something that
should be explicitly included in the empirical modelling procedure. Omitting spatial effects assumes that the location co-ordinates for observations are unrelated to the observable characteristics
that the researcher is trying to model. Not a good assumption, particularly in empirical macroeconomics where the unit of observation is typically countries or regions.
Starting out with the prototypical linear regression model: $y = X \beta + \epsilon$, we can modify this equation in a number of ways to account for the spatial structure of the data. In this blog
post, I will concentrate on the spatial lag model. I intend to examine spatial error models in a future blog post.
The spatial lag model is of the form: $y = \rho W y + X \beta + \epsilon$, where the term $\rho W y$ measures the potential spill-over effect that occurs in the outcome variable if this outcome is influenced by other units' outcomes, where the location of, or distance to, other observations is a factor in this spill-over. In other words, the neighbours of each observation have greater (or in some cases less) influence on what happens to that observation, independent of the other explanatory variables $(X)$. The $W$ matrix is a matrix of spatial weights, and the $\rho$ parameter measures
the degree of spatial correlation. The value of $\rho$ is bounded between -1 and 1. When $\rho$ is zero, the spatial lag model collapses to the prototypical linear regression model.
The spatial weights matrix should be specified by the researcher. For example, let us have a dataset that consists of 3 observations, spatially located on a 1-dimensional Euclidean space wherein the
first observation is a neighbour of the second and the second is a neighbour of the third. The spatial weights matrix for this dataset should be a $3 \times 3$ matrix, where the diagonal consists of
3 zeros (you are not a neighbour with yourself). Typically, this matrix will also be symmetric. It is also at the user’s discretion to choose the weights in $W$. Common schemes include nearest k
neighbours (where k is again at the user's discretion), inverse-distance, and other schemes based on spatial contiguities. Row-standardization is usually performed, such that all the row elements in
$W$ sum to one. In our simple example, the first row of a contiguity-based scheme would be: [0, 1, 0]. The second: [0.5, 0, 0.5]. And the third: [0, 1, 0].
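As a sketch, the three-observation weights matrix just described can be written out in base R (the contiguity pattern is hard-coded to match the example):

```r
# contiguity on a line of 3 observations: 1-2 and 2-3 are neighbours
W = matrix(c(0, 1, 0,
             1, 0, 1,
             0, 1, 0), nrow = 3, byrow = TRUE)
# row-standardize so each row sums to one
W = W / rowSums(W)
# rows are now (0, 1, 0), (0.5, 0, 0.5) and (0, 1, 0), as in the text
```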
While the spatial-lag model represents a modified version of the basic linear regression model, estimation via OLS is problematic because the spatially lagged variable $(Wy)$ is endogenous. The
endogeneity results from what Charles Manski calls the ‘reflection problem’: your neighbours influence you, but you also influence your neighbours. This feedback effect results in simultaneity, which biases the OLS estimate of the spatial lag term. A further problem presents itself when the independent variables $(X)$ are themselves spatially correlated. In this case, completely omitting
the spatial lag from the model specification will bias the $\beta$ coefficient values due to omitted variable bias.
Fortunately, remedying these biases is relatively simple, as a number of estimators exist that will yield an unbiased estimate of the spatial lag, and consequently the coefficients for the other
explanatory variables—assuming, of course, that these explanatory variables are themselves exogenous. Here, I will consider two: the Maximum Likelihood estimator (denoted ML) as described in Ord
(1975), and a generalized two-stage least squares regression model (2SLS) wherein spatial lags, and spatial lags of lags (i.e. $W^{2} X$), of the explanatory variables are used as instruments for $Wy$.
Alongside these two models, I also estimate the misspecified OLS both without (OLS1) and with (OLS2) the spatially lagged dependent variable.
To examine the properties of these four estimators, I run a Monte Carlo experiment. First, let us assume that we have 225 observations equally spread over a $15 \times 15$ lattice grid. Second, we
assume that neighbours are defined by what is known as the Rook contiguity, so a neighbour exists if they are in the grid space either above or below or on either side. Once we create the spatial
weight matrix we row-standardize.
Taking our spatial weights matrix as defined, we want to simulate the following linear model: $y = \rho Wy + \beta_{1} + \beta_{2}x_{2} + \beta_{3}x_{3} + \epsilon$, where we set $\rho=0.4$ , $\beta_
{1}=0.5$, $\beta_{2}=-0.5$, $\beta_{3}=1.75$. The explanatory variables are themselves spatially autocorrelated, so our simulation procedure first simulates a random normal variable for both $x_{2}$
and $x_{3}$ from: $N(0, 1)$, then assuming an autocorrelation parameter of $\rho_{x}=0.25$, generates both variables such that: $x_{j} = (1-\rho_{x}W)^{-1} N(0, 1)$ for $j \in \{ 2,3 \}$. In the next
step we simulate the error term $\epsilon$. We introduce endogeneity into the spatial lag by assuming that the error term $\epsilon$ is a function of a random normal $v$ so $\epsilon = \alpha v + N
(0, 1)$ where $v = N(0, 1)$ and $\alpha=0.2$, and that the spatial lag term includes $v$. We modify the regression model to incorporate this: $y = \rho (Wy + v) + \beta_{1} + \beta_{2}x_{2} + \beta_
{3}x_{3} + \epsilon$. From this we can calculate the reduced form model: $y = (1 - \rho W)^{-1} (\rho v + \beta_{1} + \beta_{2}x_{2} + \beta_{3}x_{3} + \epsilon)$, and simulate values for our
dependent variable $y$.
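The last step can be sketched in base R using solve() on a tiny hard-coded 3 × 3 row-standardized W (the coefficient values are illustrative; the full simulation below uses the 15 × 15 lattice):

```r
set.seed(1)
# row-standardized contiguity weights for 3 observations on a line
W = matrix(c(0,   1, 0,
             0.5, 0, 0.5,
             0,   1, 0), nrow = 3, byrow = TRUE)
rho = 0.4
xb  = 0.5 - 0.5 * rnorm(3)  # a stand-in for the X beta term
eps = rnorm(3)
# reduced form: y = (I - rho W)^{-1} (X beta + eps)
y = solve(diag(3) - rho * W, xb + eps)
# check: y satisfies the structural equation y = rho W y + X beta + eps
max(abs(y - (rho * W %*% y + xb + eps)))  # numerically zero
```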
Performing 1,000 repetitions of the above simulation permits us to examine the distributions of the coefficient estimates produced by the four models outlined in the above. The distributions of these
coefficients are displayed in the graphic in the beginning of this post. The spatial autocorrelation parameter $\rho$ is in the bottom-right quadrant. As we can see, the OLS model that includes the
spatial effect but does not account for simultaneity (OLS2) over-estimates the importance of spatial spill-overs. Both the ML and 2SLS estimators correctly identify the $\rho$ parameter. The
remaining quadrants highlight what happens to the coefficients of the explanatory variables. Clearly, the OLS1 estimator provides the worst estimate of these coefficients. Thus, it appears preferable
to use OLS2, with the biased autocorrelation parameter, than the simpler OLS1 estimator. However, the OLS2 estimator also yields biased parameter estimates for the $\beta$ coefficients. Furthermore,
since researchers may want to know the marginal effects in spatial equilibrium (i.e. taking into account the spatial spill-over effects) the overestimated $\rho$ parameter creates an additional bias.
To perform these calculations I used the spdep package in R, with the graphic created via ggplot2. Please see the R code I used in the below.
library(spdep) ; library(ggplot2) ; library(reshape)
n = 225
data = data.frame(n1=1:n)
# coords
data$lat = rep(1:sqrt(n), sqrt(n))
data$long = sort(rep(1:sqrt(n), sqrt(n)))
# create W matrix
wt1 = as.matrix(dist(cbind(data$long, data$lat), method = "euclidean", upper=TRUE))
wt1 = ifelse(wt1==1, 1, 0)
diag(wt1) = 0
# row standardize
rs = rowSums(wt1)
wt1 = apply(wt1, 2, function(x) x/rs)
lw1 = mat2listw(wt1, style="W")
rx = 0.25
rho = 0.4
b1 = 0.5
b2 = -0.5
b3 = 1.75
alp = 0.2
inv1 = invIrW(lw1, rho=rx, method="solve", feasible=NULL)
inv2 = invIrW(lw1, rho=rho, method="solve", feasible=NULL)
sims = 1000
beta1results = matrix(NA, ncol=4, nrow=sims)
beta2results = matrix(NA, ncol=4, nrow=sims)
beta3results = matrix(NA, ncol=4, nrow=sims)
rhoresults = matrix(NA, ncol=3, nrow=sims)
for(i in 1:sims){
u1 = rnorm(n)
x2 = inv1 %*% u1
u2 = rnorm(n)
x3 = inv1 %*% u2
v1 = rnorm(n)
e1 = alp*v1 + rnorm(n)
data1 = data.frame(cbind(x2, x3),lag.listw(lw1, cbind(x2, x3)))
names(data1) = c("x2","x3","wx2","wx3")
data1$y1 = inv2 %*% (b1 + b2*x2 + b3*x3 + rho*v1 + e1)
data1$wy1 = lag.listw(lw1, data1$y1)
data1$w2x2 = lag.listw(lw1, data1$wx2)
data1$w2x3 = lag.listw(lw1, data1$wx3)
data1$w3x2 = lag.listw(lw1, data1$w2x2)
data1$w3x3 = lag.listw(lw1, data1$w2x3)
m1 = coef(lm(y1 ~ x2 + x3, data1))
m2 = coef(lm(y1 ~ wy1 + x2 + x3, data1))
m3 = coef(lagsarlm(y1 ~ x2 + x3, data1, lw1))
m4 = coef(stsls(y1 ~ x2 + x3, data1, lw1))
beta1results[i,] = c(m1[1], m2[1], m3[2], m4[2])
beta2results[i,] = c(m1[2], m2[3], m3[3], m4[3])
beta3results[i,] = c(m1[3], m2[4], m3[4], m4[4])
rhoresults[i,] = c(m2[2],m3[1], m4[1])
}
apply(rhoresults, 2, mean) ; apply(rhoresults, 2, sd)
apply(beta1results, 2, mean) ; apply(beta1results, 2, sd)
apply(beta2results, 2, mean) ; apply(beta2results, 2, sd)
apply(beta3results, 2, mean) ; apply(beta3results, 2, sd)
colnames(rhoresults) = c("OLS2","ML","2SLS")
colnames(beta1results) = c("OLS1","OLS2","ML","2SLS")
colnames(beta2results) = c("OLS1","OLS2","ML","2SLS")
colnames(beta3results) = c("OLS1","OLS2","ML","2SLS")
rhoresults = melt(rhoresults)
rhoresults$coef = "rho"
rhoresults$true = 0.4
beta1results = melt(beta1results)
beta1results$coef = "beta1"
beta1results$true = 0.5
beta2results = melt(beta2results)
beta2results$coef = "beta2"
beta2results$true = -0.5
beta3results = melt(beta3results)
beta3results$coef = "beta3"
beta3results$true = 1.75
data = rbind(rhoresults,beta1results,beta2results,beta3results)
data$Estimator = data$X2
ggplot(data, aes(x=value, colour=Estimator, fill=Estimator)) +
geom_density(alpha=.3) +
facet_wrap(~ coef, scales= "free") +
geom_vline(aes(xintercept=true)) +
scale_y_continuous("Density") +
scale_x_continuous("Effect Size") +
opts(legend.position = 'bottom', legend.direction = 'horizontal')
How Much Should Bale Cost Real?
It looks increasingly likely that Gareth Bale will transfer from Tottenham to Real Madrid for a world record transfer fee. Negotiations are ongoing, with both parties keen to get the best possible deal on the transfer fee. Reports speculate that this transfer fee will be anywhere in the very wide range of £80m to £120m.
Given the topical nature of this transfer saga, I decided to explore the world record breaking transfer fee data, and see if these data can help predict what the Gareth Bale transfer fee should be.
According to this Wikipedia article, there have been 41 record breaking transfers, from Willie Groves going from West Brom to Aston Villa in 1893 for £100, to Cristiano Ronaldo’s £80m 2009 transfer
to Real Madrid from Manchester United.
When comparing any historical price data it is very important that we are comparing like with like. Clearly, a fee of £100 in 1893 is not the same as £100 in 2009. Therefore, the world record
transfer fees need to be adjusted for inflation. To do this, I used the excellent measuringworth website, and converted all of the transfer fees into 2011 pounds sterling.
The plot above demonstrates a very strong linear relationship between logged real world record transfer fees and time. The R-squared indicates that the year of the transfer fee explains roughly 97%
of the variation in price.
So, if Real Madrid are to pay a world transfer fee for Bale, how much does this model predict the fee will be? The above plot demonstrates what happens when the simple log-linear model is
extrapolated to predict the world record transfer fee in 2013. The outcome here is 18.37, so around £96m, in 2011 prices. We can update this value to 2013 prices. Assuming a modest inflation rate of
2% per year, we get £96m × exp(0.02 × 2) ≈ £99.4m. No small potatoes.
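The arithmetic can be checked directly (18.37 is the model prediction quoted above; the exact figures come out a touch below the rounded numbers in the text):

```r
lp2011  = 18.37                      # predicted log price, 2011 pounds
fee2011 = exp(lp2011)                # about 95 million
fee2013 = fee2011 * exp(0.02 * 2)    # two years of assumed 2% inflation
round(c(fee2011, fee2013) / 1e6, 1)  # roughly 95.1 and 98.9
```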
library(ggplot2)
bale = read.csv("bale.csv")
# data from:
# http://en.wikipedia.org/wiki/World_football_transfer_record
# http://www.measuringworth.com/ukcompare/
ols1 = lm(log(real2011) ~ year, bale)
# predicted log price for 2013
p2013 = predict(ols1, data.frame(year=2013))
# price in 2011 pounds, then inflated at (say) 2% per year for two years
exp(p2013)
exp(p2013)*exp(0.02*2)
# nice ggplot
bale$lnprice2011 = log(bale$real2011)
addon = data.frame(year=2013, nominal=0, real2011=0, name="Bale?",
lnprice2011=p2013)
ggplot(bale, aes(x=year, y=lnprice2011, label=name)) +
geom_text(hjust=0.4, vjust=0.4) +
stat_smooth(method = "lm", fullrange = TRUE, level = 0.975) +
theme_bw(base_size = 12, base_family = "") +
xlim(1885, 2020) + ylim(8, 20) +
xlab("Year") + ylab("ln(Price)") +
ggtitle("World Transfer Records, Real 2011 Prices (£)") +
geom_point(aes(col="red"), size=4, data=addon) +
geom_text(aes(col="red", fontface=3), hjust=-0.1, vjust=0, size=7, data=addon)
The Frisch–Waugh–Lovell Theorem for Both OLS and 2SLS
The Frisch–Waugh–Lovell (FWL) theorem is of great practical importance for econometrics. FWL establishes that it is possible to re-specify a linear regression model in terms of orthogonal
complements. In other words, it permits econometricians to partial out right-hand-side, or control, variables. This is useful in a variety of settings. For example, there may be cases where a
researcher would like to obtain the effect and cluster-robust standard error from a model that includes many regressors, and therefore a computationally infeasible variance-covariance matrix.
Here are a number of practical examples. The first just takes a simple linear regression model, with two regressors: x1 and x2. To partial out the coefficients on the constant term and x2, we first regress y1 on x2 and save the residuals. We then regress x1 on x2 and save the residuals. The final stage regresses the first set of residuals on the second. The following code illustrates how one can obtain an identical coefficient on x1 by applying the FWL theorem.
x1 = rnorm(100)
x2 = rnorm(100)
y1 = 1 + x1 - x2 + rnorm(100)
r1 = residuals(lm(y1 ~ x2))
r2 = residuals(lm(x1 ~ x2))
# ols
coef(lm(y1 ~ x1 + x2))
# fwl ols
coef(lm(r1 ~ -1 + r2))
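For readers who prefer to see the equivalence verified outside R, here is a minimal NumPy translation of the same check. The data-generating process mirrors the R snippet above, with `np.linalg.lstsq` playing the role of `lm`:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y1 = 1 + x1 - x2 + rng.normal(size=n)

def ols(y, X):
    # least-squares coefficients of y on the columns of X
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# full OLS: y1 on a constant, x1 and x2
full = ols(y1, np.column_stack([ones, x1, x2]))

# FWL: partial the constant and x2 out of both y1 and x1 ...
Z = np.column_stack([ones, x2])
r1 = y1 - Z @ ols(y1, Z)
r2 = x1 - Z @ ols(x1, Z)
# ... then regress residuals on residuals (no intercept)
fwl = ols(r1, r2[:, None])[0]

print(full[1], fwl)  # identical up to floating-point error
```

The two printed numbers coincide because FWL guarantees exact algebraic equality, not just an approximation.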
FWL is also relevant for all linear instrumental variable (IV) estimators. Here, I will show how this extends to the 2SLS estimator, where slightly more work is required compared to the OLS example
above. Here we have a matrix of instruments (Z), exogenous variables (X), and an endogenous variable y1. Let us imagine we want the coefficient on y1 in the equation for y2. In this case we
can apply FWL as follows. Regress each instrument in Z on X in separate regressions, saving the residuals. Then regress y1 on X, and y2 on X, saving the residuals for both. In the last stage, perform
a two-stage least squares regression of the y2 residuals on the y1 residuals, using the residuals from each instrument as instruments. An example of this is shown in the code below.
ov = rnorm(100)
z1 = rnorm(100)
z2 = rnorm(100)
y1 = rnorm(100) + z1 + z2 + 1.5*ov
x1 = rnorm(100) + 0.5*z1 - z2
x2 = rnorm(100)
y2 = 1 + y1 - x1 + 0.3*x2 + ov + rnorm(100)
r1 = residuals(lm(z1 ~ x1 + x2))
r2 = residuals(lm(z2 ~ x1 + x2))
r3 = residuals(lm(y1 ~ x1 + x2))
r4 = residuals(lm(y2 ~ x1 + x2))
# biased coef on y1 as expected for ols
coef(lm(y2 ~ y1 + x1 + x2))
# 2sls
p0 = fitted.values(lm(y1 ~ z1 + z2 + x1 + x2))
coef(lm(y2 ~ p0 + x1 + x2))
# fwl 2sls
p = fitted.values(lm(r3 ~ -1 + r1 + r2))
coef(lm(r4 ~ -1 + p))
The FWL can also be extended to cases where there are multiple endogenous variables. I have demonstrated this case by extending the above example to model x1 as an endogenous variable.
# 2 endogenous variables
r5 = residuals(lm(z1 ~ x2))
r6 = residuals(lm(z2 ~ x2))
r7 = residuals(lm(y1 ~ x2))
r8 = residuals(lm(x1 ~ x2))
r9 = residuals(lm(y2 ~ x2))
# 2sls coefficients
p1 = fitted.values(lm(y1~z1+z2+x2))
p2 = fitted.values(lm(x1~z1+z2+x2))
lm(y2 ~ p1 + p2 + x2)
# 2sls fwl coefficients
p3 = fitted.values(lm(r7~-1+r5+r6))
p4 = fitted.values(lm(r8~-1+r5+r6))
lm(r9 ~ p3 + p4)
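The same equivalence can be checked numerically for the two-endogenous-variable case. The NumPy sketch below mirrors the R code above; 2SLS is implemented by hand via first-stage fitted values, so only the coefficients (not the standard errors) are valid:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
ov = rng.normal(size=n)
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
y1 = rng.normal(size=n) + z1 + z2 + 1.5 * ov
x1 = rng.normal(size=n) + 0.5 * z1 - z2
x2 = rng.normal(size=n)
y2 = 1 + y1 - x1 + 0.3 * x2 + ov + rng.normal(size=n)

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def resid(y, X):
    return y - X @ ols(y, X)

ones = np.ones(n)

# full 2SLS: endogenous y1 and x1; exogenous constant and x2; instruments z1, z2
W = np.column_stack([ones, z1, z2, x2])   # first-stage regressors
p1 = W @ ols(y1, W)                       # fitted y1
p2 = W @ ols(x1, W)                       # fitted x1
b_full = ols(y2, np.column_stack([ones, p1, p2, x2]))

# FWL 2SLS: partial the constant and x2 out of everything first
E = np.column_stack([ones, x2])
r5, r6 = resid(z1, E), resid(z2, E)       # residualised instruments
r7, r8, r9 = resid(y1, E), resid(x1, E), resid(y2, E)
Rz = np.column_stack([r5, r6])
p3 = Rz @ ols(r7, Rz)
p4 = Rz @ ols(r8, Rz)
b_fwl = ols(r9, np.column_stack([p3, p4]))

print(b_full[1:3], b_fwl)  # coefficients on y1 and x1 coincide
```

The equality is exact because the partialled-out exogenous variables (the constant and x2) are also included in the instrument set, so projecting onto the instruments commutes with the residualisation.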
Kalkalash! Pinpointing the Moments “The Simpsons” became less Cromulent
Whenever somebody mentions “The Simpsons” it always stirs up feelings of nostalgia in me. The characters, uproarious gags, zingy one-liners, and edgy animation all contributed towards making,
arguably, the greatest TV ever. However, it’s easy to forget that as a TV show “The Simpsons” is still ongoing—in its twenty-fourth season no less.
For me, and most others, the latter episodes bear little resemblance to older ones. The current incarnation of the show is stale, and has been for a long time. I haven’t watched a new episode in over
ten years, and don’t intend to any time soon. When did this decline begin? Was it part of a slow secular trend, or was there a sudden drop in the quality, from which there was no recovery?
To answer these questions I use the Global Episode Opinion Survey (GEOS) episode ratings data, which are published online. A simple web scrape of the “all episodes” page provides me with 423 episode
ratings, spanning from the first episode of season 1 to the third episode of season 20. After S20E03, the ratings become too sparse, which is probably a function of how bad the show, in its current
condition, is. To detect changepoints in show ratings, I have used the R package changepoint. An informative introduction to both the package and changepoint analyses can be found in the
accompanying vignette.
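For intuition, the core idea behind a mean-changepoint search is to find the split that minimises the within-segment squared error; packages like changepoint do this far more efficiently (e.g. via PELT) and for multiple changepoints. A toy single-changepoint version, as an illustrative sketch rather than what cpt.mean actually runs:

```python
import numpy as np

def best_single_changepoint(x):
    """Return the split index tau minimising the two-segment squared-error cost."""
    x = np.asarray(x, dtype=float)
    best_tau, best_cost = None, np.inf
    for tau in range(1, len(x)):
        a, b = x[:tau], x[tau:]
        cost = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

# synthetic ratings that drop from about 8 to about 7 at episode 50
ratings = np.r_[np.full(50, 8.0), np.full(50, 7.0)]
ratings += 0.05 * np.sin(np.arange(100))   # mild deterministic wobble
print(best_single_changepoint(ratings))    # -> 50
```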
The figure above provides a summary of my results. Five breakpoints were detected, the first occurring in the first episode of the ninth season: The City of New York vs. Homer Simpson. Most will
remember this one; Homer goes to New York to collect his clamped car and ends up going berserk. A good episode, although it essentially marked the beginning of the end.
According to the changepoint results, the decline occurred in three stages. The first lasted from the New York episode up until episode 11 in season 10. The shows in this stage have an average rating
of about 7, and the episode where the breakpoint is detected is: Wild Barts Can’t Be Broken. The next stage roughly marks my transition, as it is about this point that I stopped watching. This stage
lasts as far as S15E09, whereupon the show suffers the further ignominy of another ratings downgrade. Things possibly couldn’t get any worse, and they don’t, as the show earns a minor reprieve after
the twentieth episode of season 18.
So now you know. R code for the analysis can be found in the below.
# packages
library(Hmisc) ; library(changepoint)
# clear ws
rm(list = ls())
# webscrape data
page1 = "http://www.geos.tv/index.php/list?sid=159&collection=all"
home1 = readLines(con<-url(page1)); close(con)
# pick out lines with ratings
means = '<td width="60px" align="right" nowrap>'
epis = home1[grep(means, home1)]
epis = epis[57:531]
epis = epis[49:474]
# prune data
loc = function(x) substring.location(x,"</span>")$first[1]
epis = data.frame(epis)
epis = cbind(epis,apply(epis, 1, loc))
epis$cut = NA
for(i in 1:dim(epis)[1]){
  epis[i,3] = substr(epis[i,1], epis[i,2]-4, epis[i,2]-1)
}
#create data frame
ts1 = data.frame(rate=epis$cut, episode=50:475)
# remove out of season shows and movie
ts1 = ts1[!(ts1$episode %in% c(178,450,451)),]
# make numeric
ts1$rate = as.numeric(as.character(ts1$rate))
# changepoint function
mean2.pelt = cpt.mean(ts1$rate,method='PELT')
# plot results
plot(mean2.pelt, xlab='Episode',
     ylab='Average Rating', cpt.width=2, main="Changepoints in 'The Simpsons' Ratings")
# what episodes ?
ts1[cpts(mean2.pelt),]
# The City of New York vs. Homer Simpson
# Wild Barts Can't Be Broken
# I, (Annoyed Grunt)-Bot -
# Stop Or My Dog Will Shoot!
Munkres Chapter 2 Section 19 (Part I)
Problem: Suppose that for each $\alpha\in\mathcal{A}$ the topology on $X_\alpha$ is given by a basis $\mathfrak{B}_\alpha$. The collection of all sets of the form
$\displaystyle \prod_{\alpha\in\mathcal{A}}B_\alpha$
Such that $B_\alpha\in\mathfrak{B}_\alpha$ is a basis for $\displaystyle X=\prod_{\alpha\in\mathcal{A}}X_\alpha$ with the box topology; denote this collection by $\Omega_B$. Also, the collection $\Omega_P$ of all sets of the form

$\displaystyle \prod_{\alpha\in\mathcal{A}}B_\alpha$

Where $B_\alpha\in\mathfrak{B}_\alpha$ for finitely many $\alpha$ and $B_\alpha=X_\alpha$ otherwise is a basis for the product topology on $X$.
Proof: To prove the first part we let $U\subseteq X$ be open. Then, by construction of the box topology, for each $(x_\alpha)_{\alpha\in\mathcal{A}}\overset{\text{def.}}{=}(x_\alpha)\in U$ we may find some $\displaystyle \prod_{\alpha\in\mathcal{A}}U_\alpha$ such that $U_\alpha$ is open in $X_\alpha$ and $\displaystyle (x_\alpha)\in \prod_{\alpha\in\mathcal{A}}U_\alpha\subseteq U$. So, for each $x_\alpha$ we may find some $B_\alpha\in\mathfrak{B}_\alpha$ such that $x_\alpha\in B_\alpha\subseteq U_\alpha$ and thus

$\displaystyle (x_\alpha)\in\prod_{\alpha\in\mathcal{A}}B_\alpha\subseteq\prod_{\alpha\in\mathcal{A}}U_\alpha\subseteq U$

Noticing that $\displaystyle \prod_{\alpha\in\mathcal{A}}B_\alpha\in\Omega_B$ and every element of $\Omega_B$ is open finishes the argument.

Next, we let $U\subseteq X$ be open with respect to the product topology. Once again for each $(x_\alpha)\in U$ we may find some $\displaystyle \prod_{\alpha\in\mathcal{A}}U_\alpha$ such that $U_\alpha$ is open in $X_\alpha$ for each $\alpha\in\mathcal{A}$ and $U_\alpha=X_\alpha$ for all but finitely many $\alpha$, call them $\alpha_1,\cdots,\alpha_m$. So, for each $\alpha_k,\text{ }k=1,\cdots,m$ we may find some $B_k\in\mathfrak{B}_{\alpha_k}$ such that $x_{\alpha_k}\in B_k\subseteq U_{\alpha_k}$ and so

$\displaystyle (x_\alpha)\in\prod_{\alpha\in\mathcal{A}}V_\alpha\subseteq\prod_{\alpha\in\mathcal{A}}U_\alpha\subseteq U$

where

$\displaystyle V_\alpha=\begin{cases}B_k\quad\text{if}\quad \alpha=\alpha_k ,\text{ }k=1,\cdots,m\\ X_\alpha\quad\text{if}\quad \alpha\ne \alpha_1,\cdots,\alpha_m\end{cases}$

Noting that $\displaystyle \prod_{\alpha\in\mathcal{A}}V_\alpha\in\Omega_P$ and $\Omega_P$ is a collection of open subsets of $X$ finishes the argument. $\blacksquare$
Problem: Let $\left\{U_\alpha\right\}_{\alpha\in\mathcal{A}}$ be a collection of topological spaces such that $U_\alpha$ is a subspace of $X_\alpha$ for each $\alpha\in\mathcal{A}$. Then, $\displaystyle \prod_{\alpha\in\mathcal{A}}U_\alpha=Y$ is a subspace of $\displaystyle \prod_{\alpha\in\mathcal{A}}X_\alpha=X$ if both are given the product or box topology.
Proof: Let $\mathfrak{J}_S,\mathfrak{J}_P$ denote the topologies $Y$ inherits as a subspace of $X$ and as a product space respectively. Note that $\mathfrak{J}_S,\mathfrak{J}_P$ are generated by the
bases $\mathfrak{B}_S=\left\{B\cap Y:B\in\mathfrak{B}\right\}$ (where $\mathfrak{B}$ is the basis on $X$ with the product topology), and
$\displaystyle \mathfrak{B}_P=\left\{\prod_{\alpha\in\mathcal{A}}O_\alpha:O_\alpha\text{ is open in }U_\alpha,\text{ and equals }U_\alpha\text{ for all but finitely many }\alpha\right\}$
So, let $(x_\alpha)\in B$ where $B\in\mathfrak{B}_S$ then
$\displaystyle B=Y\cap\prod_{\alpha\in\mathcal{A}}V_\alpha=\prod_{\alpha\in\mathcal{A}}\left(V_\alpha\cap U_\alpha\right)$
Where $V_\alpha$ is open in $X_\alpha$, and thus $V_\alpha\cap U_\alpha$ is open in $U_\alpha$. Also, since $V_\alpha=X_\alpha$ for all but finitely many $\alpha$ it follows that $V_\alpha\cap U_\alpha=U_\alpha$ for all but finitely many $\alpha$. And so $B\in\mathfrak{B}_P$. Similarly, if $B\in\mathfrak{B}_P$ then
$\displaystyle B=\prod_{\alpha\in\mathcal{A}}O_\alpha$
Where $O_\alpha$ is open in $U_\alpha$, but this means that $O_\alpha=V_\alpha\cap U_\alpha$ for some open set $V_\alpha$ in $X_\alpha$ and so
$\displaystyle B=\prod_{\alpha\in\mathcal{A}}O_\alpha=\prod_{\alpha\in\mathcal{A}}\left(V_\alpha\cap U_\alpha\right)=\prod_{\alpha\in\mathcal{A}}V_\alpha\cap Y\in\mathfrak{B}_S$
From where it follows that $\mathfrak{B}_S,\mathfrak{B}_P$ and thus $\mathfrak{J}_S,\mathfrak{J}_P$ are equal.
The case for the box topology is completely analogous. $\blacksquare$
Problem: Let $\left\{X_\alpha\right\}_{\alpha\in\mathcal{A}}$ be a collection of Hausdorff spaces, then $\displaystyle X=\prod_{\alpha\in\mathcal{A}}X_\alpha$ is Hausdorff with either the box or
product topologies
Proof: It suffices to prove this for the product topology since the box topology is finer.
So, let $(x_\alpha),(y_\alpha)\in X$ be distinct. Then, $x_\beta\ne y_\beta$ for some $\beta\in\mathcal{A}$. Now, since $X_\beta$ is Hausdorff there exist disjoint open neighborhoods $U,V$ of $x_\beta,y_\beta$ respectively. So, $\pi_\beta^{-1}(U),\pi_\beta^{-1}(V)$ are disjoint open neighborhoods of $(x_\alpha),(y_\alpha)$ respectively. The conclusion follows. $\blacksquare$
Problem: Prove that $\left(X_1\times\cdots\times X_{n-1}\right)\times X_n\overset{\text{def.}}{=}X\approx X_1\times\cdots\times X_n\overset{\text{def.}}{=}Y$.
Proof: Define
$\displaystyle \varphi:X\to Y:((x_1,\cdots,x_{n-1}),x_n)\mapsto (x_1,\cdots,x_n)$
Clearly $\varphi$ is a bijection, and it is continuous since each coordinate function $\pi_\beta\circ\varphi$ is a composition of projections. The inverse map $(x_1,\cdots,x_n)\mapsto\left((x_1,\cdots,x_{n-1}),x_n\right)$ is continuous for the same reason, and so $\varphi$ is a homeomorphism. $\blacksquare$
Problem: One of the implications holds for theorem 19.6 even in the box topology, which is it?
Proof: If $\displaystyle f:A\to\prod_{\alpha\in\mathcal{A}}X_\alpha$ where the latter is given the box topology then we have that each $\pi_\alpha$ is continuous and thus so is each $\pi_\alpha\circ
f:A\to X_\alpha$. $\blacksquare$
Problem: Let $\left\{\bold{x}_n\right\}_{n\in\mathbb{N}}$ be a sequence of points in the product space $\displaystyle \prod_{\alpha\in\mathcal{A}}X_\alpha=X$. Prove that $\left\{\bold{x}_n\right\}_{n\in\mathbb{N}}$ converges to $\bold{x}$ if and only if the sequences $\left\{\pi_\alpha(\bold{x}_n)\right\}_{n\in\mathbb{N}}$ converge to $\pi_\alpha(\bold{x})$ for each $\alpha\in\mathcal{A}$. Is this fact true if one uses the box topology?
Proof: Suppose first that $\bold{x}_n\to\bold{x}$ and, for contradiction, that for some $\alpha$ there is a neighborhood $U$ of $\pi_{\alpha}(\bold{x})$ such that

$\displaystyle K=\left\{n\in\mathbb{N}:\pi_{\alpha}(\bold{x}_n)\notin U\right\}$

is infinite. Notice then that if $n\in K$ then $\bold{x}_n\notin\pi_{\alpha}^{-1}\left(U\right)$, from where it follows that $\pi_{\alpha}^{-1}\left(U\right)$ is a neighborhood of $\bold{x}$ which does not contain all but finitely many values of $\left\{\bold{x}_n:n\in\mathbb{N}\right\}$, contradicting the fact that $\bold{x}_n\to\bold{x}$ in $X$.
Conversely, suppose that $\pi_{\alpha}(\bold{x}_n)\to\pi_{\alpha}(\bold{x})$ for each $\alpha\in\mathcal{A}$ and let $\displaystyle \prod_{\alpha\in\mathcal{A}}U_{\alpha}$ be a basic open neighborhood of $\bold{x}$. Let $\alpha_1,\cdots,\alpha_m$ be the finitely many indices such that $U_{\alpha_k}\ne X_{\alpha_k}$. Since each $\pi_{\alpha_k}(\bold{x}_n)\to\pi_{\alpha_k}(\bold{x})$ there exists some $n_k\in\mathbb{N}$ such that $n_k\leqslant n\implies \pi_{\alpha_k}(\bold{x}_n)\in U_{\alpha_k}$. So, let $N=\max\{n_1,\cdots,n_m\}$. Now, note that if $\displaystyle \bold{x}_n\notin\prod_{\alpha\in\mathcal{A}}U_\alpha$ then $\pi_{\alpha}(\bold{x}_n)\notin U_\alpha$ for some $\alpha\in\mathcal{A}$. But, since $U_\alpha=X_\alpha$ for every other index, this $\alpha$ must be one of $\alpha_1,\cdots,\alpha_m$, and thus $n<N$. It follows that for every $N\leqslant n$ we have that $\displaystyle \bold{x}_n\in\prod_{\alpha\in\mathcal{A}}U_\alpha$. Then, since every neighborhood of $\bold{x}$ contains a basic open neighborhood and the above basic open neighborhood was arbitrary, the conclusion follows.
Clearly the first part holds in the box topology since there was no explicit mention of the product topology or its defining characteristics. That said, the second part does not hold. Consider $\displaystyle \bold{x}_n=\prod_{m\in\mathbb{N}}\left\{\frac{1}{n}\right\}$. Clearly each coordinate converges to zero, but $\displaystyle \prod_{m\in\mathbb{N}}\left(\frac{-1}{m},\frac{1}{m}\right)=U$ is a neighborhood of $\bold{0}$ in the box topology. But, if one claimed that $\bold{x}_n\in U$ for every $n\geqslant N$ (for some $N\in\mathbb{N}$), they'd be wrong. To see this merely note that $\displaystyle \frac{1}{N}\notin\left(\frac{-1}{N+1},\frac{1}{N+1}\right)$ and so $\pi_{N+1}\left(\bold{x}_N\right)\notin\pi_{N+1}(U)$ and thus $\bold{x}_{N}\notin U$.
Problem: Let $\mathbb{R}^{\infty}$ be the subset of $\mathbb{R}^{\omega}$ consisting of all eventually zero sequences. What is $\overline{\mathbb{R}^{\infty}}$ in the box and product topologies?

Proof: We claim that in the product topology $\overline{\mathbb{R}^{\infty}}=\mathbb{R}^{\omega}$. To see this let $\displaystyle \prod_{n\in\mathbb{N}}U_n$ be a basic non-empty open set in $\mathbb{R}^{\omega}$ with the product topology. Since we are working with the product topology we know there are finitely many indices $n_1,\cdots,n_m$ such that $U_{n_k}\ne \mathbb{R}$. So, for each $n_1,\cdots,n_m$ select some $x_{n_k}\in U_{n_k}$ and consider $(x_n)_{n\in\mathbb{N}}$ where

$\displaystyle x_n=\begin{cases} x_{n_k}\quad\text{if}\quad n=n_1,\cdots,n_m \\ 0 \text{ } \quad\text{if}\quad \text{otherwise}\end{cases}$

Clearly then $\displaystyle (x_n)_{n\in\mathbb{N}}\in \prod_{n\in\mathbb{N}}U_n\cap\mathbb{R}^{\infty}$, and thus every non-empty basic open set in $\mathbb{R}^{\omega}$ intersects $\mathbb{R}^{\infty}$ and the conclusion follows.
Now, we claim that with the box topology $\overline{\mathbb{R}^{\infty}}=\mathbb{R}^{\infty}$. To see this let $(x_n)_{n\in\mathbb{N}}\notin\mathbb{R}^{\infty}$. Then, there exists some subsequence $\{x_{\varphi(n)}\}$ of the sequence $\{x_n\}$ all of whose terms are non-zero. For each $\varphi(n)$ form an open interval $I_{\varphi(n)}$ containing $x_{\varphi(n)}$ such that $0\notin I_{\varphi(n)}$. Then, consider

$\displaystyle \prod_{n\in\mathbb{N}}U_n,\text{ }U_n=\begin{cases} I_{\varphi(m)}\quad\text{if}\quad n=\varphi(m)\\ \mathbb{R}\quad\text{ }\quad\text{if}\quad \text{otherwise}\end{cases}$

Clearly then $\displaystyle \prod_{n\in\mathbb{N}}U_n$ is a neighborhood of $(x_n)_{n\in\mathbb{N}}$, and since each of its points has infinitely many non-zero coordinates it is disjoint from $\mathbb{R}^{\infty}$. It follows that in $\mathbb{R}^{\omega}$ with the box topology $\mathbb{R}^{\infty}$ is closed and thus $\overline{\mathbb{R}^{\infty}}=\mathbb{R}^{\infty}$ as desired. $\blacksquare$
Problem: Given sequences $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$ of real numbers with $a_n>0$ define $\varphi:\mathbb{R}^{\omega}\to\mathbb{R}^{\omega}$ by the equation
$\varphi:(x_n)_{n\in\mathbb{N}}\mapsto \left(a_n x_n+b_n\right)_{n\in\mathbb{N}}$
Show that if $\mathbb{R}^{\omega}$ is given the product topology that $\varphi$ is a homeomorphism. What happens if $\mathbb{R}^{\omega}$ is given the box topology?
Proof: Let us first prove that $\varphi$ is a bijection. To do this we prove something more general…
Lemma: Let $\left\{X_\alpha\right\}_{\alpha\in\mathcal{A}}$ and $\left\{Y_\alpha\right\}_{\alpha\in\mathcal{A}}$ be two classes of untopologized sets and $\left\{f_\alpha\right\}_{\alpha\in\mathcal{A}}$ a collection of bijections $f_\alpha:X_\alpha\to Y_\alpha$. Then, if

$\displaystyle \prod_{\alpha\in\mathcal{A}}f_\alpha\overset{\text{def.}}{=}\varphi:\prod_{\alpha\in\mathcal{A}}X_\alpha\to\prod_{\alpha\in\mathcal{A}}Y_\alpha:(x_\alpha)_{\alpha\in\mathcal{A}}\mapsto\left(f_\alpha(x_\alpha)\right)_{\alpha\in\mathcal{A}}$

we have that $\varphi$ is a bijection.
Proof: To prove injectivity we note that if

$\displaystyle \varphi\left((x_\alpha)_{\alpha\in\mathcal{A}}\right)=\varphi\left((x'_\alpha)_{\alpha\in\mathcal{A}}\right)$

then

$\displaystyle \left(f_\alpha(x_\alpha)\right)_{\alpha\in\mathcal{A}}=\left(f_\alpha(x'_\alpha)\right)_{\alpha\in\mathcal{A}}$

And by definition of an $\alpha$-tuple this implies that

$f_\alpha(x_\alpha)=f_\alpha(x'_\alpha)$

for each $\alpha\in\mathcal{A}$. But, since each $f_\alpha:X_\alpha\to Y_\alpha$ is injective it follows that

$x_\alpha=x'_\alpha$

For each $\alpha\in\mathcal{A}$. Thus,

$(x_\alpha)_{\alpha\in\mathcal{A}}=(x'_\alpha)_{\alpha\in\mathcal{A}}$

as desired.

To prove surjectivity we let $\displaystyle (y_\alpha)_{\alpha\in\mathcal{A}}\in\prod_{\alpha\in\mathcal{A}}Y_\alpha$ be arbitrary. We then note that for each fixed $\alpha\in\mathcal{A}$ there is some $x_\alpha\in X_\alpha$ such that $f_\alpha(x_\alpha)=y_\alpha$. So, if $\displaystyle (x_\alpha)_{\alpha\in\mathcal{A}}\in\prod_{\alpha\in\mathcal{A}}X_\alpha$ is the corresponding $\alpha$-tuple of these values we have that

$\displaystyle \varphi\left((x_\alpha)_{\alpha\in\mathcal{A}}\right)=\left(f_\alpha(x_\alpha)\right)_{\alpha\in\mathcal{A}}=(y_\alpha)_{\alpha\in\mathcal{A}}$

from where surjectivity follows. Combining these two shows that $\varphi$ is indeed a bijection. $\blacksquare$
Lemma: Let $\left\{X_\alpha\right\}_{\alpha\in\mathcal{A}}$ and $\left\{Y_\alpha\right\}_{\alpha\in\mathcal{A}}$ be two classes of non-empty topological spaces and $\left\{f_\alpha\right\}_{\alpha\in\mathcal{A}}$ a corresponding class of continuous functions such that $f_\alpha:X_\alpha\to Y_\alpha$. Then, if $\displaystyle \prod_{\alpha\in\mathcal{A}}X_\alpha$ and $\displaystyle \prod_{\alpha\in\mathcal{A}}Y_\alpha$ are given the product topologies, the mapping

$\displaystyle \prod_{\alpha\in\mathcal{A}}f_\alpha\overset{\text{def.}}{=}\varphi:\prod_{\alpha\in\mathcal{A}}X_\alpha\to\prod_{\alpha\in\mathcal{A}}Y_\alpha:(x_\alpha)_{\alpha\in\mathcal{A}}\mapsto\left(f_\alpha(x_\alpha)\right)_{\alpha\in\mathcal{A}}$

is continuous.
Proof: Since the codomain is a product space it suffices to show that

$\displaystyle \pi^Y_\beta\circ\varphi:\prod_{\alpha\in\mathcal{A}}X_\alpha\to Y_{\beta}$

is continuous for each $\beta\in\mathcal{A}$. We claim that $\pi^Y_\beta\circ\varphi=f_\beta\circ\pi^X_\beta$, i.e. that the corresponding diagram commutes, where $\pi^Y_\beta$ and $\pi^X_\beta$ denote the canonical projections from $\displaystyle \prod_{\alpha\in\mathcal{A}}Y_\alpha$ and $\displaystyle \prod_{\alpha\in\mathcal{A}}X_\alpha$ to $Y_\beta$ and $X_\beta$ respectively. To see this we merely note that

$\displaystyle \pi^Y_\beta\left(\varphi\left((x_\alpha)_{\alpha\in\mathcal{A}}\right)\right)=\pi^Y_\beta\left(\left(f_\alpha(x_\alpha)\right)_{\alpha\in\mathcal{A}}\right)=f_{\beta}(x_\beta)$

which confirms the commutativity of the diagram. But, the conclusion follows since $f_\beta\circ\pi^X_\beta$ is the composition of two continuous maps (the projection being continuous since $\displaystyle \prod_{\alpha\in\mathcal{A}}X_\alpha$ is a product space).

The lemma follows by the previous comment. $\blacksquare$
We come to our last lemma before the actual conclusion of the problem.
Lemma: Let $\left\{X_\alpha\right\}_{\alpha\in\mathcal{A}}$ and $\left\{Y_\alpha\right\}_{\alpha\in\mathcal{A}}$ be two classes of non-empty topological spaces and $\left\{f_\alpha\right\}_{\alpha\in\mathcal{A}}$ a set of homeomorphisms with $f_\alpha:X_\alpha\to Y_\alpha$. Then,

$\displaystyle \prod_{\alpha\in\mathcal{A}}f_\alpha\overset{\text{def.}}{=}\varphi:\prod_{\alpha\in\mathcal{A}}X_\alpha\to\prod_{\alpha\in\mathcal{A}}Y_\alpha:(x_\alpha)_{\alpha\in\mathcal{A}}\mapsto\left(f_\alpha(x_\alpha)\right)_{\alpha\in\mathcal{A}}$

is a homeomorphism if $\displaystyle \prod_{\alpha\in\mathcal{A}}X_\alpha$ and $\displaystyle \prod_{\alpha\in\mathcal{A}}Y_\alpha$ are given the product topology.
Proof: Our last two lemmas show that $\varphi$ is bijective and continuous. To prove that its inverse is continuous we note that

$\displaystyle \left(\left(\prod_{\alpha\in\mathcal{A}}f_{\alpha}^{-1}\right)\circ\varphi\right)\left((x_\alpha)_{\alpha\in\mathcal{A}}\right)=\left(f_\alpha^{-1}\left(f_\alpha(x_\alpha)\right)\right)_{\alpha\in\mathcal{A}}=(x_\alpha)_{\alpha\in\mathcal{A}}$

And similarly for the other side. Thus,

$\displaystyle \varphi^{-1}=\prod_{\alpha\in\mathcal{A}}f_{\alpha}^{-1}$

which is continuous since each $f_{\alpha}^{-1}:Y_\alpha\to X_\alpha$ is continuous, appealing to our last lemma again. Thus, $\varphi$ is a bi-continuous bijection, or in other words a homeomorphism. The conclusion follows. $\blacksquare$
Thus, getting back to the actual problem, we note that, denoting $T_n:\mathbb{R}\to\mathbb{R}:x\mapsto a_nx+b_n$, each $T_n$ is a homeomorphism (since $a_n>0$). Thus, since it is easy to see that
$\displaystyle \varphi=\prod_{n\in\mathbb{N}}T_n$
we may conclude by our last lemma (since we are assuming that we are giving $\mathbb{R}^{\omega}$ in both the domain and codomain the product topology) that $\varphi$ is a homeomorphism.
This is also continuous if we give $\mathbb{R}^{\omega}$ the box topology. To see this we merely need to note that $\displaystyle \left(\prod_{\alpha\in\mathcal{A}}f_\alpha\right)^{-1}\left(\prod_{\alpha\in\mathcal{A}}U_\alpha\right)=\prod_{\alpha\in\mathcal{A}}f_\alpha^{-1}\left(U_\alpha\right)$, and thus if all of the $U_\alpha$ are open then (since each $f_\alpha$ is continuous) so is each $f_\alpha^{-1}(U_\alpha)$, and thus the inverse image of a basic open set is the unrestricted product of open sets and thus basic open. A similar argument works for the inverse function. $\blacksquare$

5 Comments »
1. Why no love for section 19? (I’m working through the book to study for the math subject GRE.)
Pat | June 7, 2010 | Reply
2. n/m. Looking at the problems they pretty much seem to be the same. You working out of the first edition?
Pat | June 7, 2010 | Reply
3. Haha, yeah recheck the title, I just mislabeled it. And no, I am working out of the second edition.
Also, is there really that much top. on the GREs?
drexel28 | June 7, 2010 | Reply
4. I don’t think there’s too much, which is why I only plan to work my way through the first three chapters. Afterwards will be a crash course in algebra (Fraleigh’s book which seems good). Keep it
up, really cool what you’re doing!
Pat | June 8, 2010 | Reply
5. Thank you! I appreciate that someone actually read this. You’ll notice that this blog mostly used to be theorems, but I just really like doing problems. So, I figured over the summer I’d “try” to
do all the problems in the “trifecta”: Topology-Munkres, Algebra-Dummit and Foote (over Herstein,yes), and Analysis-Rudin (baby Rudin). To be fair, I know all of these subjects fairly well. In
fact, I’ve been through most of Munkres and Rudin’s book themselves. That said, it’ll still be hard.
P.S. I would not suggest Fraleigh. That book is kind of dumbed down and if you intend to pursue a graduate degree in mathematics (as the GRE reference suggests) it won’t do.
drexel28 | June 8, 2010 | Reply
Somerville, MA Prealgebra Tutor
Find a Somerville, MA Prealgebra Tutor
...I feel confident in my way of teaching, yet understand it is not for everyone. With each student that I tutor I like to take an introductory lesson and work out a plan with them. We read over
the work they have, find the areas that they want to improve (or that I feel should improve), talk about a time frame, and then start from there.
15 Subjects: including prealgebra, English, reading, writing
...I am a playwright and performer who also writes theatre criticism as a freelancer (frequently as a contributor to The Arts Fuse.) I have developed and taught curricula in mime, puppetry, masks,
as well as historical styles such as commedia dell'arte and choreographed mime and puppetry for youth t...
18 Subjects: including prealgebra, reading, English, geometry
...I have tutored friends and cousins on many subjects since high school. I've have lived in many places and am very open minded: Venezuela (Caracas), US (Miami, Dallas, DC, Boston), Australia
(Melbourne). Time can be flexible, so please don't hesitate to message me! I am available most weekends!...
14 Subjects: including prealgebra, Spanish, English, algebra 2
...Praxis II, PLT (Principles of Learning and Teaching) consists of elementary education tests and subject assessments in Science, English, Mathematics, and Social Studies for those prospective
K-12 teachers of these subjects. I have successfully tutored prior students in Reading, Mathematics, and...
30 Subjects: including prealgebra, reading, writing, English
I am a licensed teacher and an experienced tutor who has worked with high school students for many years. I help with math - Algebra, Geometry, Pre-Calculus, Statistics. I am comfortable with
Standard, Honors, and AP curricula.
8 Subjects: including prealgebra, statistics, algebra 1, geometry
Logic and set theory around the world
Research teams and centers : Europe - North America - Other
Publications - Blogs - Organizations - Mailing lists - Software - Other
Here is a list of research groups and departments (and some isolated logics specialists in other departments) in the foundations of mathematics and computer science (logic, set theory, model theory,
theoretical computer science, proof theory, programming languages).
Created by Sylvain Poirier, author of this site of introduction to set theory and foundations of mathematics and physics, in July 2012 (see note).
Centre national de recherches de Logique (CNRL / NCNL) aims to foster and coordinate the mathematical and philosophical research in logic among Belgian university institutions.
The Belgian Society for Logic and Philosophy of Science (BSLPS) is aimed at promoting Belgian research in logic and philosophy of science by inviting distinguished researchers to discuss their work.
• Brussels
• Louvain-la-neuve :
• Mons:
Czech Republic
• Pilsen (University of West Bohemia)
• Prague
□ Charles University
□ Academy of Science
BRICS (Basic Research in Computer Science) was a research center and PhD school funded by the Danish National Research Foundation from 1994 to 2006.
The Danish Network for Philosophical Logic and Its Applications (site largely outdated)
Paris and surroundings
Other French regions
• Caen
□ Algebra, geometry, logic team : Pierre Ageron works on category theory (mostly in French) - Patrick Dehornoy connected braids with elementary embeddings in set theory, and wrote books on
logic and complexity.
• Chambery: LIMD (Logical Informatics and Discrete Mathematics) research team in the Department of mathematics of the Savoie University
• Lille 3 Savoirs, Textes, Langage (philosophy) : Tero Tulenheimo - Shahid Rahman
• Lyon
• Marseille-Luminy: Logique de la Programmation
• Nancy : Formal Methods department of LORIA (Lorraine Research Laboratory in Computer Science and its Applications). Contribution to logics and proof theory, techniques for the verification of
distributed and reactive systems, virology and safety. Logics, models for computations, programming models, rewriting, modelling, specification...
• Sophia Antipolis : Mathématiques, Raisonnement et Logiciel (Computer aided verification of proofs and software), INRIA, to study and use techniques for verifying mathematical proofs on the
computer to ensure the correctness of software.
Deutsche Vereinigung für Mathematische Logik und für Grundlagenforschung der Exakten Wissenschaften (DVMLG)
• Aachen : Mathematical Foundations of Computer Science, at Rheinisch-Westfälische Technische Hochschule.
□ GAMES : research and training programme for the design and verification of computing systems, using a methodological framework that is based on the interplay of finite and infinite games,
mathematical logic and automata theory.
□ AlgoSyn : Integrating approaches from computer and engineering sciences, the project aims at developing methods for the automatised design of soft- and hardware.
• Augsburg : theoretical computer science
• Berlin
• Bonn (Rheinische Friedrich-Wilhelms-Universität)
• Braunschweig : Theoretical Informatics
• Bremen
• Darmstadt : Logic Group in Technische Universität Darmstadt. Application of proof theoretic, recursion theoretic, category theoretic, algebraic and model theoretic methods from mathematical logic
to mathematics and computer science.
• Dortmund : Department of Computer Science
• Dresden (Technische Universität Dresden):
• Duisburg - Essen Algebra und Logik
• Erlangen : Department of Computer Science
• Frankfurt : Mathematical Computer Science
• Freiburg :
• Heidelberg : Mathematical Logic and Theoretical Computer Science workgroup
• Kaiserslautern : Department of computer science
• Karlsruhe
• Konstanz : Model Theory Working Group composed of members of the "Forschungsschwerpunkt reelle Geometrie und Algebra" (Real Geometry and Algebra research group) with an interest in Model Theory.
• Leipzig: Abteilung Logik und Wissenschaftstheorie, Institut für Philosophie, Universität Leipzig
• Lübeck : Institut für Theoretische Informatik
• Münster: Institut für mathematische Logik und Grundlagenforschung, Westfälische Wilhelms-Universität
• München (Munich):
• Paderborn Department of computer science
• Potsdam
• Saarbrücken
□ Saarland University, Computer Science
• Tübingen, Eberhard-Karls-Universität :
• Ulm :
• Wadern : Schloss Dagstuhl - Leibniz Center for Informatics International conference and research center for computer science.
• Würzburg
Greece - Athens
Hungary - Budapest
• Alfréd Rényi Institute of Mathematics, Hungarian Academy of Sciences.
• Eötvös University
Italian Association of Logic and its applications
Italian Society for Logic and Philosophy of Science
Dutch Association for Theoretical Computer Science
• Amsterdam - Institute for Logic, Language and Computation, research institute in the interdisciplinary area between mathematics, linguistics, computer science, philosophy and artificial intelligence
• Delft University of Technology : Klaas Pieter Hart
• Nijmegen
□ Dutch Research School in Logic - The aim of the Onderzoeksschool Logica (OZSL) is to provide a forum for researchers in the Netherlands whose research involves logic.
□ Radboud Universiteit
■ The Foundations group studies and develops mathematically oriented models of computing and reasoning
• Utrecht
Logic in Spain and a list of conferences on model theory
Summa logicae : bibliography and congresses "Tools for teaching logic"
• Göteborg (Gothenburg)
• Stockholm
• Umeå : philosophy
• Uppsala
United Kingdom
North America
• Calgary (Alberta) - Logic Research Group, Department of Philosophy, University of Calgary.
• Montréal (Québec) : Category Theory Research Center
• Halifax : Atlantic Category Theory Group
• Hamilton (Ontario) : Mathematical Logic, McMaster University
• London (Ontario)
• Peterborough (Ontario) : Mathematics research in Trent University
• Prince Edward Island : Maxim R. Burke (Set theory, general topology, measure theory, functional analysis)
• Saskatoon : Centre for Algebra, Logic and Computation, University of Saskatchewan
• Toronto (Ontario)
□ University of Toronto
□ York University
□ Department of Computer Science, Ryerson University : Mikhail Soutchanski (Logic-based high-level programming languages...)
• Vancouver (British Columbia): Simon Fraser University, Burnaby
• Phoenix : Tempe (Arizona State University) :
Los Angeles area
• UCLA : Logic Center supports teaching and research in logic and its applications. (Enderton, Herbert B. who made a list of links to ASL members and wrote some books, died October 20, 2010)
• Caltech : Alexander Kechris: Foundations of mathematics; mathematical logic and set theory; their interactions with analysis and dynamical systems.
• UC Irvine
□ Logic and Foundations of Mathematics (set theory, with emphasis on forcing, large cardinals, inner model theory, fine structure theory, regular and singular cardinal combinatorics, and
descriptive set theory.) + Paul C. Eklof, Emeritus.
San Francisco area
Washington, D.C.
Delaware (University of Delaware, Newark)
• Urbana : University of Illinois at Urbana-Champaign
□ Logic, Department of Mathematics : model theory and its applications and in descriptive set theory; nonstandard analysis ; aspects of group theory with significant connections to logic.
☆ Algorithms & Theory Group
• Chicago
• Bloomington (Indiana University)
• South Bend : University of Notre Dame
• Kansas City : Eric Jonathan Hall (Characterizing permutation models - axiom of choice and topology)
• Omaha : Andrzej Roslanowski, University of Nebraska (Set Theory, General Topology, Abstract Algebra, Set-Theoretical Aspects of Real Analysis)
New Hampshire
New Jersey
• Newark : Rutgers University
• Princeton
New Mexico
New York
North Carolina
LAOS: Logicians At Ohio State. An interdisciplinary group of scholars with research interests in logic
• Athens (Ohio University) : Philip Ehrlich (philosophy of mathematics) - Todd Eisworth (set theory)
• Columbus : Ohio State University
• Kenyon College : Bob Milnikel (mathematical analysis of logic as used in computer science) - Noah Aydin (algebraic coding theory).
• Oxford : Miami Mathematics research interests
South Carolina
• College of Charleston : Renling Jin : nonstandard analysis, set theory, model theory...
• Madison
□ Logic. Members, seminars, list of theses
• Milwaukee Mathematics faculty research interests, Marquette University.
• Oshkosh: Joan Hart works in set theory and its applications to general topology, measure theory, and functional analysis.
Computer Science Faculty Research Areas - University of Wyoming Logic Society
• Canberra : Logic & Computation group - Logical theory, Mechanised reasoning, Applications of logic. Automated Reasoning Group. Publications, software, meetings, links. Research areas:
mathematical properties of non-classical logics; algorithms for reasoning in classical and non-classical systems.
• Melbourne, Monash University
• Sydney
Association for Logic in India
Logicians in Iran
• Teheran
□ Institute for Studies in Theoretical Physics and Mathematics (IPM)
Logic in Israel : obsolete page
(from West to East)
• Matsuyama : Hiroshi Fujita (Department of mathematics, Graduate School of Science and Engineering, Ehime University) works on descriptive set theory
• Kobe : Group of Logic, Statistics & Informatics, Kobe University, including Set theory.
• Osaka, Department of Mathematics and Information Sciences : Masaru Kada (cardinal invariants of the reals, Cichon's diagram, forcing, infinitary combinatorics, compactification)
• Nagoya
• Ishikawa region : Nomi : Ishihara Laboratory, School of Information Science, Japan Advanced Institute of Science and Technology (archive of old logic group page - Ono laboratory)
• Shizuoka, Department of Mathematics. The chair of Fundamental Mathematics includes algebra, topology, differential geometry and mathematical logic.
• Tsukuba : Information mathematics group : Akito Tsuboi (Mathematical Logic) - Masahiro Shioya (set theory) - Ko Sakai (theoretical computer science)
• Sendai, Tohoku University : Kazuyuki Tanaka (not to be confused with another Kazuyuki Tanaka at the same university): logic, foundations of mathematics and theory of computation.
New Zealand
Réunion (France)
South Korea
Still incomplete, more links will be added later.
The creation of this page used the whole of the following sources, thus making them obsolete (the links were not copied but completely updated and rebuilt, and many more were added):
• A big thanks to Anton Setzer for having made, a few years ago, by far the biggest list of links to logic servers worldwide, though a majority of his links were broken.
• dmoz : research groups and centers (a dying portal with a dire lack of editors since many left, including myself, repelled by wrong management staff and absurd bureaucratic rules)
ALEX Lesson Plans
Subject: English Language Arts (10), or Mathematics (9 - 12)
Title: Human Angles
Description: This lesson is designed to get your students moving! This lesson focuses primarily on transversals and angles. Students will be able to identify and relate some angles to cheer moves.
This is a College- and Career-Ready Standards showcase lesson plan.
Subject: Mathematics (9 - 12)
Title: Card Table Project
Description: Students will work in groups to design a card table, communicating with each other through a class blog or class discussion page. After the design phase, students will try to sell their product to an outside expert.
Subject: Mathematics (7 - 12), or Technology Education (9 - 12)
Title: Creating a Water Tank - Part II "Selling the Tank"
Description: Working in groups of 4-5, students will take the information, pictures, and 3-D model of the water tank they assembled in Part I of Creating a Water Tank and develop a web page and a video
presentation. The web page will be a tool to advertise their water tank construction company and must include hyperlinks and digital pictures. The video presentation will be a "sales pitch" to a city
council. The web page and video will be scored using a rubric. The web page and video must include the surface area, volume and cost of construction.
Subject: Mathematics (6 - 12), or Technology Education (9 - 12)
Title: Golden Ratios of the Body, Architecture, and Nature
Description: Students will study the golden ratio as it relates to human body measurements, architecture, and nature. Students will use a desktop publishing program to create a poster. The poster
will have digital photos of themselves, architecture samples, or nature examples. Students will also include a spreadsheet with the lengths, widths, and length/width ratios of the samples included in
the photos.
Subject: Mathematics (6 - 12)
Title: Swimming Pool Math
Description: Students will use a swimming pool example to practice finding perimeter and area of different rectangles.
Thinkfinity Lesson Plans
Subject: Mathematics
Title: Soda Cans Add Bookmark
Description: This reproducible activity sheet, from an Illuminations lesson, guides students through a simulation in which they try different arrangements to make the most efficient use of space and
thus pack the most soda cans into a rectangular packing box.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Playing with Squares Add Bookmark
Description: This reproducible activity sheet, from an Illuminations lesson, prompts students to investigate the meaning of square roots by considering the area of squares and the heights of various
stacks of squares.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Cubes Everywhere Add Bookmark
Description: In this Illuminations lesson, students use cubes to develop spatial thinking and review basic geometric principles through real-life applications. Students are given the opportunity to
build and take apart structures based on cubes.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Soda Cans Add Bookmark
Description: In this lesson, one of a three-part unit from Illuminations, students investigate various designs for packaging soda cans and use geometry to analyze their designs. Students work to
create more efficient arrangements that require less packaging material than the traditional rectangular arrays. In addition, there are links to online activity sheets and other related resources.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Triangula Island Overhead Add Bookmark
Description: This reproducible transparency, from an Illuminations lesson, contains an activity that asks students to conjecture the best location of a point inside a regular polygon such that the
sum of the distances to each side is a minimum.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Hospital Problem Add Bookmark
Description: This reproducible transparency, from an Illuminations lesson, describes an activity in which students must plan where to build a new hospital so that it can serve the needs of three cities.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Robot Sketcher Add Bookmark
Description: In this student interactive, from Illuminations, students can build compound arms having multiple joints of two types: one that rotates and is typical of rotating motors, and one that
slides and is typical of hydraulic lifts. A circle is used to represent a rotating joint, and a rectangle is used to represent a sliding joint.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Circle Packing and Curvature Add Bookmark
Description: In this lesson, one of a three-part unit from Illuminations, students investigate the curvature of circles. Students apply definitions and theorems regarding curvature to solve circle
problems. In addition, there are links to an online activity sheet and other related resources.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Triangula Island Add Bookmark
Description: This student reproducible, from an Illuminations lesson, contains an activity that asks students to conjecture the best location of a point inside a regular triangle such that the sum of
the distances to each side is a minimum.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Hospital Locator Add Bookmark
Description: In this Illuminations lesson, students begin with a problem in a real-world context to motivate the need to construct circumcenters and then incenters of triangles. Students must make
sense of these constructions in terms of bisecting sides and angles. There are links to student interactives and other resources.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Soccer Problem Add Bookmark
Description: This student interactive, from an Illuminations lesson, allows students to investigate a soccer problem by changing the location of a soccer player as well as the distance between the
player and the goal posts. The angle changes as the player is moved, and students must therefore determine the player's position so that the angle is maximized.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Symmetries III Add Bookmark
Description: This lesson, from Illuminations, helps students to understand how translations work and what happens when two or more translations are applied one after the other. Students discover that
all band ornaments have translational symmetry and all wallpaper patterns have translational symmetry in at least two directions.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Soda Rack Add Bookmark
Description: In this lesson, one of a three-part unit from Illuminations, students consider the arrangement of cans placed in a bin with two vertical sides and discover an interesting result. They
then prove their conjectures about the interesting results. In addition, there are links to online activity sheets and other related resources.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Circle Packing Add Bookmark
Description: In this unit of three Illuminations lessons, students explore circles. In the first lesson students apply the concepts of area and circumference to explore arrangements for soda cans
that lead to a more efficient package. In the second lesson they then experiment with three-dimensional arrangements to discover the effect of gravity on the arrangement of soda cans. The final
lesson allows students to examine the more advanced mathematical concept of curvature. There are also links to online interactives that are used in the lessons.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Subject: Mathematics
Title: Hospital Locator Add Bookmark
Description: In this student interactive, from an Illuminations lesson, students act as community planners, trying to place a new medical center equidistant from three cities.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
CorGen—measuring and generating long-range correlations for DNA sequence analysis
Nucleic Acids Res. Jul 1, 2006; 34(Web Server issue): W692–W695.
CorGen is a web server that measures long-range correlations in the base composition of DNA and generates random sequences with the same correlation parameters. Long-range correlations are
characterized by a power-law decay of the auto correlation function of the GC-content. The widespread presence of such correlations in eukaryotic genomes calls for their incorporation into accurate
null models of eukaryotic DNA in computational biology. For example, the score statistics of sequence alignment and the performance of motif finding algorithms are significantly affected by the
presence of genomic long-range correlations. We use an expansion-randomization dynamics to efficiently generate the correlated random sequences. The server is available at http://corgen.molgen.mpg.de
Eukaryotic genomes reveal a multitude of statistical features distinguishing genomic DNA from random sequences. They range from the base composition to more complex features like periodicities,
correlations, information content or isochore structure. A widespread feature among most eukaryotic genomes is the presence of long-range correlations in base composition (1–6), characterized by an asymptotic
power-law decay C(r) ∝ r^−α of the correlation function

C(r) = ⟨s_i s_{i+r}⟩ − ⟨s_i⟩²,  with s_i = 1 if a_i ∈ {G, C} and s_i = 0 otherwise,  (1)

along the DNA sequence $\vec{a} = a_1, \dots, a_N$. See the top part of Figure 1 for an example. Amplitudes and decay exponents differ considerably between different species and even between different genomic regions of the same species (6). Often the correlations are restricted to specific distance intervals r_min < r < r_max.
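As a concrete illustration, such a composition correlation can be estimated directly from a sequence's GC indicator. The sketch below assumes the common convention s_i = 1 for G or C and 0 otherwise; the exact estimator and normalization used by CorGen itself may differ.

```python
def gc_correlation(seq, r_max):
    """Estimate the GC-composition correlation C(r) for r = 1..r_max.

    Each base maps to 1 if it is G or C, 0 otherwise; C(r) is the
    covariance of this indicator at distance r along the sequence.
    """
    s = [1.0 if b in "GC" else 0.0 for b in seq.upper()]
    n = len(s)
    mean = sum(s) / n
    corr = []
    for r in range(1, r_max + 1):
        cov = sum((s[i] - mean) * (s[i + r] - mean)
                  for i in range(n - r)) / (n - r)
        corr.append(cov)
    return corr
```

For a perfectly alternating sequence such as "GAGA...", this gives C(1) = −0.25 and C(2) = +0.25; long-range correlated genomic DNA instead shows a slow power-law decay of C(r).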
Figure 1. CorGen analysis of a 1 Mb region on human chromosome 22. The two plots in the top part show the measured GC-profile (left) and correlation function (right) of the chromosomal region. In the double-logarithmic correlation graph, power-law correlations ...
The widespread presence of long-range correlations raises the question of whether they need to be incorporated into an accurate null model of eukaryotic DNA, reflecting our assumptions about the ‘background’
statistical features of the sequence under consideration (7). The need for a realistic null model arises from the fact that the statistical significance of a computational prediction derived by
bioinformatics methods is often characterized by a P-value, which specifies the likelihood that the prediction could have arisen by chance. Popular null models are random sequences with letters drawn
independently from an identical distribution, or kth-order Markov models specifying the transition probabilities P(a_{i+1} | a_{i−k+1}, …, a_i) in a genomic sequence (8). However, both models are
incapable of incorporating long-range correlations in the sequence composition. In CorGen we use a dynamical model that was found to efficiently generate such long-range correlated sequences (9).
Recent findings have already demonstrated that long-range correlations strongly influence the significance values of several bioinformatics analysis tools. For instance, they substantially change the P-values of sequence alignment similarity scores (10) and contribute to the problem that computational tools for the identification of transcription factor binding sites perform more poorly on real
genomic data compared to independent random sequences (11).
In this paper we present CorGen, a web server that measures long-range correlations in DNA sequences and can generate random sequences with the same (or user-specified) correlation and composition
parameters. These sequences can be used to test computational tools for changes in prediction upon the incorporation of genomic correlations into the null model.
Several techniques for the generation of long-range correlated sequences have been proposed so far (12–14). Here, we use a simple dynamical method based on single site duplication and mutation
processes (15). This dynamics is an instance of a so-called expansion-randomization system, which has recently been shown to constitute a universality class of dynamical systems with generic
long-range correlations (9,16). In contrast to any of the methods (12–14), the duplication-mutation model combines all of the following advantages: (i) exact analytic results for the correlation
function of the generated sequences have been derived; (ii) the method allows one to generate sequences with any user-defined value of the decay exponent α > 0, desired GC-content g, and length N; (iii)
the correlation amplitude is high enough to keep up with strong genomic correlations and can easily be reduced to any user-specified value; (iv) the dynamics can be implemented by a simple algorithm
with runtime O(N); (v) the duplication and mutation processes are well known processes of molecular evolution.
In CorGen the single site duplication mutation dynamics is implemented by the following Monte Carlo algorithm. We start with a short sequence of random nucleotides (N[o] = 12). The dynamics of the
model is then defined by the following update rules:
1. A random position j of the sequence is drawn.
2. The nucleotide a[j] is either mutated with probability P[mut], or otherwise duplicated, i.e. a copy of a[j] is inserted at position j + 1 thereby increasing the sequence length by one.
If the site a_j = X has been chosen to mutate, it is replaced by a nucleotide Y with a transition probability (Equation 2) that assures a stationary GC-content g. Extending the results derived in (16), it can be shown analytically that the correlation function of sequences generated by this dynamics is an Euler beta function with C(r) ∝ r^−α in the large-r limit. By varying the mutation probability P_mut, the decay exponent α of the long-range correlations can be tuned to any desired positive value, as it is determined by α = 2P_mut/(1 − P_mut). The correlations C(r) of the generated sequences define the maximal amplitude obtainable by our dynamics for the specific settings of α and g. However, this
amplitude can easily be decreased by the following procedure: after the sequence has reached its desired length, the duplication process is stopped. Subsequent mutation of M randomly drawn sites
using the transition probabilities defined in (2) will uniformly decrease the correlation amplitude to C^*(r) = C(r)exp(−2M/N) without changing the exponent α and the GC-content g (9).
We use a queue data structure to store the sequences, since this allows for a fast implementation of a nucleotide duplication in runtime O(1). The complexity of the algorithm therefore is of the
order O(N + M). The software is implemented in C++. Sources are available upon request from the corresponding author.
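A minimal Python sketch of this duplication-mutation dynamics is given below. The mutation step simply redraws the base from the stationary composition (G or C with probability g/2 each, A or T with probability (1−g)/2 each), which is one simple way to realize a stationary GC-content; the transition matrix of Equation 2 may differ in detail. A plain list is used for clarity, so each duplication costs O(N) rather than the O(1) of the queue implementation described above.

```python
import random

def generate_sequence(alpha, gc, length, extra_mutations=0, seed=None):
    """Duplication-mutation dynamics: each step picks a random site and
    either mutates it (probability p_mut) or duplicates it in place.

    alpha > 0 sets the decay exponent via alpha = 2*p_mut / (1 - p_mut);
    extra_mutations = M applies M further mutations after growth stops,
    reducing the correlation amplitude by a factor exp(-2*M/N).
    """
    rng = random.Random(seed)
    p_mut = alpha / (alpha + 2.0)  # solve alpha = 2*p/(1-p) for p

    def stationary_base():
        # G/C each with probability gc/2, A/T each with (1-gc)/2
        return rng.choice("GC") if rng.random() < gc else rng.choice("AT")

    seq = [stationary_base() for _ in range(12)]  # N_0 = 12 seed sequence
    while len(seq) < length:
        j = rng.randrange(len(seq))
        if rng.random() < p_mut:
            seq[j] = stationary_base()        # mutation step
        else:
            seq.insert(j + 1, seq[j])         # single-site duplication
    for _ in range(extra_mutations):          # amplitude-reduction phase
        seq[rng.randrange(length)] = stationary_base()
    return "".join(seq[:length])
```

The generated string can then be fed to any correlation estimator to check that the measured decay exponent matches the requested alpha.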
THE WEB SERVER CorGen
The web server CorGen offers three different types of services: (i) measuring long-range correlations of a given DNA sequence, (ii) generating long-range correlated random sequences with the same
statistical parameters as the query sequence and (iii) generating sequences with specific user-defined long-range correlations. The first two tasks require the user to upload a query DNA sequence in
FASTA or EMBL format. For long-range correlations to be detectable, the sequences need to be sufficiently long (we recommend at least 1000 bp). The distance interval where a power-law is fitted to
the measured correlation function can be specified by the user.
Upon submission of a query DNA sequence, CorGen will generate plots with the measured GC-profile and correlation function, as defined by Equation 1. Unsequenced or ambiguous sites are thereby
excluded from the analysis. The user can specify a distance interval where a power-law should be fitted to the measured correlation function. The obtained values for the decay exponent α and the
correlation amplitude will be reported by CorGen. If a long-range correlated random sequence with the same statistical features in the specified fitting interval has been requested, its corresponding
composition and correlation plots will also be shown. See Figure 1, for an example output page. The generated random sequences can be downloaded by the user. If large ensembles of the generated
sequences are needed, independent realizations of the sequences can directly be obtained via non-interactive network clients, e.g. wget. Corresponding samples are given on the relevant pages.
CorGen can also be used to generate long-range correlated random sequences with specific user-defined correlation parameters. In this case, the user needs to specify the decay exponent α, the
correlation amplitude C(r^*) at a reference distance r^*, the desired GC-content g and the sequence length. Notice that there is a generic limit for the correlation amplitude depending on the values
of α and g. As a typical example, the measurement of C(r) for human chromosome 22 takes ~65 s, while a random sequence of length 1 Mb with the same correlation parameters can be generated in <5 s.
In the following, we want to exemplify a possible application of CorGen related to the problem that long-range correlations significantly affect the score distribution of sequence alignment (10).
Imagine one aligns a 100 bp long query sequence to a 1 Mb region on human chromosome 22 in order to detect regions of distant evolutionary relationship. The alignment algorithm reports a poorly
conserved hit with a P-value of 10^−2 calculated from the standard null model of a random sequence with independent nucleotides. However, the user does not trust this hit and wants to test whether it
might be an artifact of long-range correlations in human chromosome 22. As a first step, the correlation analysis service provided by CorGen is used to assess whether such correlations are actually
present in the chromosomal region of interest. It turns out that a clear power-law with α = 0.359 can be fitted to C(r), as is shown in the top part of Figure 1. The next step is to retrieve an
ensemble of random sequences generated by CorGen with the same correlation and composition parameters as the 1 Mb region of chromosome 22 (large ensembles can also be retrieved by non-interactive
network clients). For one such realization the measured GC-profile and correlation function are shown in the bottom part of Figure 1. The 100 bp query sequence is then aligned against each
realization of the ensemble in order to obtain the by chance expected distribution of alignment scores under the more sophisticated null model incorporating the genomic long-range correlations. As
has been shown in (10), for the measured correlation parameters this can increase the P-value of a randomly predicted (false-positive) hit by more than one order of magnitude. In conclusion, the hit
might then be rejected rather than accepted as a true orthologous region. CorGen can therefore help to reduce the often encountered high false-positive rate of bioinformatics analysis tools.
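The last step of this workflow, turning the ensemble of null alignment scores into a significance estimate, can be sketched as a simple empirical P-value; the function name and the add-one estimator (which avoids reporting a P-value of exactly zero) are illustrative choices, not part of CorGen.

```python
def empirical_p_value(observed_score, null_scores):
    """Fraction of null scores at least as high as the observed one,
    with a +1 pseudo-count in numerator and denominator."""
    at_least = sum(1 for s in null_scores if s >= observed_score)
    return (at_least + 1) / (len(null_scores) + 1)
```

For example, if 6 of 10 null alignments score at least as high as the hit, the estimated P-value is 7/11 ≈ 0.64 and the hit would be discarded as a likely artifact of the correlated background.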
Funding to pay the Open Access publication charges for this article was provided by the Max-Planck Institute for Molecular Genetics.
Conflict of interest statement. None declared.
1. Peng C.-K., Buldyrev S.V., Goldberger A.L., Havlin S., Sciortino F., Simons M., Stanley H.E. Long-range correlations in nucleotide sequences. Nature. 1992;356:168. [PubMed]
2. Li W., Kaneko K. Long-range correlation and partial 1/f^α spectrum in a noncoding DNA sequence. Europhys. Lett. 1992;17:655.
3. Voss R.F. Evolution of long-range fractal correlations and 1/f noise in DNA base sequences. Phys. Rev. Lett. 1992;68:3805. [PubMed]
4. Arneodo A., Bacry E., Graves P.V., Muzy J.F. Characterizing long-range correlations in DNA sequences from wavelet analysis. Phys. Rev. Lett. 1995;74:3293. [PubMed]
5. Bernaola-Galvan P., Carpena P., Roman-Roldan R., Oliver J.L. Study of statistical correlations in DNA sequences. Gene. 2002;300:105. [PubMed]
6. Li W., Holste D. Universal 1/f noise, crossovers of scaling exponents, and chromosome-specific patterns of guanine-cytosine content in DNA sequences of the human genome. Phys. Rev. E. 2005;71:041910.
7. Clay O., Bernardi G. Compositional heterogeneity within and among isochores in mammalian genomes: II. Some general comments. Gene. 2001;276:25. [PubMed]
8. Durbin R., Eddy S., Krogh A., Mitchison G. Biological Sequence Analysis. Cambridge, England: Cambridge University Press; 1998. ISBN: 0-521-62971-3.
9. Messer P.W., Arndt P.F., Lässig M. Solvable sequence evolution models and genomic correlations. Phys. Rev. Lett. 2005;94:138103. [PubMed]
10. Messer P.W., Bundschuh R., Vingron M., Arndt P.F. Alignment statistics for long-range correlated genomic sequences. In: Apostolico A., Guerra C., Istrail S., Pevzner P.A., Waterman M.S., editors. Proceedings of the Tenth Annual International Conference on Research in Computational Molecular Biology (RECOMB 2006); Venice, Italy: Springer; 2006. pp. 426–440.
11. Tompa M., Li N., Bailey T.L., Church G.M., De Moor B., Eskin E., Favorov A.V., Frith M.C., Fu Y., Kent W.J. Assessing computational tools for the discovery of transcription factor binding sites. Nat. Biotechnol. 2005;23:137. [PubMed]
12. Makse H.A., Havlin S., Schwartz M., Stanley H.E. Method for generating long-range correlations for large systems. Phys. Rev. E. 1996;53:5445. [PubMed]
13. Wang X.J. Statistical physics of temporal intermittency. Phys. Rev. A. 1989;40:6647. [PubMed]
14. Clegg R.G., Dodson M. Markov chain-based method for generating long-range dependence. Phys. Rev. E. 2005;72:026118. [PubMed]
15. Li W. Expansion-modification systems: A model for spatial 1/f spectra. Phys. Rev. A. 1991;43:5240. [PubMed]
16. Messer P.W., Lässig M., Arndt P.F. Universality of long-range correlations in expansion–randomization systems. J. Stat. Mech. 2005:P10004.
Articles from Nucleic Acids Research are provided here courtesy of Oxford University Press
Numerical Analysis of Erosion Caused by Biomimetic Axial Fan Blade
Advances in Materials Science and Engineering
Volume 2013 (2013), Article ID 254305, 9 pages
Research Article
^1Key Laboratory of Bionics Engineering of Ministry of Education, Jilin University, Changchun 130022, China
^2College of Materials Science and Engineering, Jilin University, Changchun 130022, China
Received 29 September 2013; Accepted 27 November 2013
Academic Editor: S. Miyazaki
Copyright © 2013 Jun-Qiu Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Damage caused by erosion has been reported in several industries for a wide range of situations. In the present work, a new method is presented to improve the erosion resistance of machine components by biomimetic design. A numerical investigation of solid particle erosion in the standard and biomimetic configurations of an axial fan blade is presented. The analysis consists of applying the discrete phase model to the solid-particle flow and the Eulerian conservation equations to the continuous phase. The numerical study employs computational fluid dynamics (CFD) software based on a finite volume method; a user-defined function was used to define the wear equation. The gas/solid flow through the axial fan was simulated to calculate the erosion rate produced by particles striking the fan blades, and the erosive wear of the smooth-surface blade and of the groove-shaped and convex hull-shaped biomimetic blades was compared. The results show that the groove-shaped biomimetic blade has better antierosion ability than the other two blades. The antierosion mechanism of the biomimetic blade is analyzed in terms of several factors, including flow velocity contours and flow path lines, impact velocity, impact angle, particle trajectories, and the number of collisions.
1. Introduction
Wear is one of the main reasons for failures of mechanical parts [1, 2], and erosion is an important branch in the domain of wear. Solid particle erosion is a dynamic process that occurs in different machine parts due to the impingement of solid particles [3]. The working medium of centrifugal and axial fans often contains large amounts of solid particles; these particles impact the blade surface at very high speed and erode the blade and housing. Blade erosion is the most serious, often leading to blade fracture, runaway, and other major accidents. Hence, research on material erosion, including its mechanism, the factors that govern it, and the optimal selection of materials, plays a significant role in saving materials, reducing energy consumption, and improving economic efficiency. Inevitably, research on erosion is attracting more and more attention in the international academic community [4–8]. Previous investigators have concluded that the general methods used to reduce erosion are to enhance the wear resistance of the material surface by means of wear-resistant materials or coatings with better wear resistance [9–13].
Nature is a school for scientists and engineers; after billions of years of evolution, creatures in nature possess almost perfect structures and functions [14, 15]. Some animals, such as the scorpion, live in sand and other gas/solid mixed-media environments and exhibit excellent antierosion function in them [16]. Han et al. [17] showed that scorpions, through adaptation to their living environment and their own evolution, have formed a special distribution of convex domes and grooves on their backs, which can change the state of the surface boundary layer flow and hence reduce surface erosion. Figure 1 illustrates the morphology of the desert scorpion and the biomimetic modeling.
Many of the factors that control the rate of erosion, such as particle velocity, particle mass flow rate, particle diameter, impact angle, and particle distribution, can be studied under different flow conditions of the system. Many practical examples can be found in which a change in flow conditions has greatly increased or decreased erosion.
Experimental study of the dynamic behavior of solid particles requires special equipment and methodology. Moreover, the erosion process is too complex to capture in a single mathematical formula accounting for all of the factors that control the erosion rate of the blade. This paper therefore presents a numerical study of the erosion process of a biomimetic axial fan blade, applying computational fluid dynamics (CFD).
2. Description of Analysis
The numerical study of the erosion process applying CFD considers a mathematical model with Eulerian conservation equations in the continuous phase and a Lagrangian frame to simulate a discrete
second phase. The dispersion of particles in the fluid phase can be predicted using a stochastic tracking model. This model includes the effect of instantaneous turbulent velocity fluctuations on the
particle trajectories.
2.1. Governing Equations
The computational domain considers the mass conservation and momentum equations for incompressible flow in a 3D geometry in a steady state. The mass conservation equation is

$\nabla \cdot (\rho \vec{v}) = S_m,$

where $\nabla$ is the nabla operator, $\rho$ is the density (kg/m^3), $\vec{v}$ is the velocity vector (m/s), and $S_m$ is the mass added to the continuous phase from the dispersed second phase. The momentum equation is

$\nabla \cdot (\rho \vec{v}\vec{v}) = -\nabla p + \nabla \cdot \bar{\bar{\tau}} + \vec{F},$

where $p$ is the pressure on the fluid micro unit (Pa), $\bar{\bar{\tau}}$ is the stress tensor, and $\vec{F}$ are the forces that arise from interaction with the dispersed phase. For turbulence, the numerical study includes the standard $k$-$\varepsilon$ model [18], where the turbulent kinetic energy equation for $k$ is

$\frac{\partial(\rho k u_i)}{\partial x_i} = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + G_k - \rho\varepsilon,$

and the dissipation rate equation for $\varepsilon$ is

$\frac{\partial(\rho \varepsilon u_i)}{\partial x_i} = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + C_{1\varepsilon}\frac{\varepsilon}{k}G_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k},$

where $u_i$ is the velocity in the $i$ direction (m/s), $x_i$ and $x_j$ are distance coordinates (m), $\mu$ is the molecular dynamic viscosity (kg/(m·s)), $\mu_t$ is the turbulent viscosity (kg/(m·s)), $G_k$ represents the generation of turbulent kinetic energy due to the mean velocity gradients, $C_{1\varepsilon}$ and $C_{2\varepsilon}$ are constants ($C_{1\varepsilon} = 1.44$, $C_{2\varepsilon} = 1.92$), and $\sigma_k$ and $\sigma_\varepsilon$ are the turbulent Prandtl numbers for $k$ and $\varepsilon$, respectively.
2.2. Discrete Phase Model (DPM)
This model permits us to simulate a discrete second phase in a Lagrangian frame of reference, where the second phase consists of spherical particles dispersed in the continuous phase. The coupling between the phases and its impact on both the discrete phase trajectories and the continuous phase flow is included. The turbulent dispersion of particles is modeled using a stochastic discrete-particle approach. This approach predicts the turbulent dispersion by integrating the trajectory equations for individual particles, using the instantaneous fluid velocity. The prediction of particle dispersion makes use of the concept of the integral time scale, $T_L$, which describes the time spent in turbulent motion along the particle path. This time scale can be approximated in the standard $k$-$\varepsilon$ model as

$T_L \approx 0.15\,\frac{k}{\varepsilon}.$
The trajectory of a discrete phase particle can be predicted by integrating the force balance on the particle, which is written in a Lagrangian reference frame. This force balance equates the particle inertia with the forces acting on the particle:

$\frac{d\vec{u}_p}{dt} = F_D(\vec{u} - \vec{u}_p) + \vec{F},$

where $\vec{u}_p$ is the particle velocity (m/s), $\vec{u}$ is the fluid velocity (m/s), $F_D(\vec{u} - \vec{u}_p)$ is the drag force per unit particle mass, $\vec{F}$ is the additional forces per unit particle mass, and $F_D$ is defined as

$F_D = \frac{18\mu}{\rho_p d_p^2}\,\frac{C_D\,\mathrm{Re}}{24},$

where $C_D$ is the drag coefficient applied for smooth spherical particles, $\mathrm{Re}$ is the relative Reynolds number, $\rho_p$ is the particle density (kg/m^3), and $d_p$ is the particle diameter (m). The additional forces, $\vec{F}$, in this case comprise the "virtual mass" force required to accelerate the fluid surrounding the particle, defined as

$\vec{F} = \frac{1}{2}\frac{\rho}{\rho_p}\frac{d}{dt}(\vec{u} - \vec{u}_p),$

which becomes important when the fluid density approaches the apparent density of the particles, and an additional force due to the pressure gradient in the fluid,

$\vec{F} = \left(\frac{\rho}{\rho_p}\right)\vec{u}_p \cdot \nabla\vec{u}.$

The relative Reynolds number, $\mathrm{Re}$, is defined as

$\mathrm{Re} = \frac{\rho d_p\,|\vec{u}_p - \vec{u}|}{\mu}.$
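As a concrete illustration of the force balance above, the following Python sketch (not from the paper) advances a particle velocity under a constant drag response; the actual solver recomputes the drag coefficient and the relative Reynolds number at every step, so the constant F_D used here is an assumed illustrative value.

```python
# Minimal sketch of Lagrangian particle tracking: explicit integration of
# du_p/dt = F_D * (u - u_p) with a constant drag response (illustrative
# values only; a real DPM solver recomputes C_D and Re each step).

def track_particle(u_fluid, u_p0, F_D, dt, steps):
    """Return the particle velocity and travelled distance after `steps`."""
    u_p, x = u_p0, 0.0
    for _ in range(steps):
        u_p += F_D * (u_fluid - u_p) * dt   # force balance per unit mass
        x += u_p * dt                       # advance the trajectory
    return u_p, x

# A particle released at rest relaxes toward the 11.6 m/s fluid velocity
# used as the inlet condition later in the paper.
u_p, x = track_particle(u_fluid=11.6, u_p0=0.0, F_D=50.0, dt=1e-3, steps=200)
```

With F_D * dt < 1 the explicit update is stable and the particle velocity approaches the fluid velocity monotonically, which is the qualitative behavior the drag term is meant to produce.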
To incorporate the effect of the discrete phase trajectories on the continuum, it is important to compute the interphase exchange of momentum from the particle to the continuous phase. This exchange is computed by examining the change in momentum of a particle as it passes through each control volume in the computational domain.
This momentum change is computed as

$F = \sum \left(\frac{18\mu\,C_D\,\mathrm{Re}}{24\,\rho_p d_p^2}(\vec{u}_p - \vec{u})\right)\dot{m}_p\,\Delta t,$

where $\dot{m}_p$ is the mass flow rate of the particles and $\Delta t$ is the time step.
Finally, to evaluate the erosion rate at the wall of the blade with the discrete phase model, it is necessary to define parameters such as the mass flow rate of the particle stream, $\dot{m}_p$; the impact angle $\alpha$ of the particle path with the wall face; the function $f(\alpha)$ of the impact angle; and the area $A_{\mathrm{face}}$ of the wall face where the particle strikes the boundary. The erosion rate is defined as

$R_{\mathrm{erosion}} = \sum_{p=1}^{N_{\mathrm{particles}}} \frac{\dot{m}_p\,C(d_p)\,f(\alpha)\,v^{b(v)}}{A_{\mathrm{face}}},$

where $C(d_p)$ is a function of particle diameter, $\alpha$ is the impact angle of the particle path with the wall face, $f(\alpha)$ is a function of impact angle, $v$ is the relative particle velocity, $b(v)$ is a function of relative particle velocity, and $A_{\mathrm{face}}$ is the area of the cell face at the wall. In (12) the impact angle function was defined by a piecewise-linear profile, the diameter function was set to a constant, and the velocity exponent function was set to 2.6 [18].
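The accumulation behind this erosion-rate definition can be sketched in a few lines of Python. This is an illustration, not the paper's UDF: the diameter constant and the piecewise-linear angle function below are assumed placeholder values, not the calibrated ones.

```python
# Illustrative sketch of the DPM-style erosion-rate accumulation
#   R = sum_p  m_dot_p * C(d_p) * f(alpha) * v**b  / A_face
# over the particle impacts recorded on one wall face.
# C and the shape of f(alpha) are assumed placeholders; b = 2.6 is the
# velocity exponent quoted in the text.

def erosion_rate(impacts, face_area, C=1.8e-9, b=2.6):
    """impacts: list of (mass flow rate kg/s, impact angle deg, speed m/s)."""
    def f(alpha_deg):
        # Assumed piecewise-linear impact-angle function: peak at 30 deg
        # (typical for a ductile target), tapering toward 90 deg.
        if alpha_deg <= 30.0:
            return alpha_deg / 30.0
        return 1.0 - 0.6 * (alpha_deg - 30.0) / 60.0

    total = 0.0
    for m_dot, alpha, v in impacts:
        total += m_dot * C * f(alpha) * v ** b
    return total / face_area

rate = erosion_rate([(0.01, 30.0, 30.0), (0.01, 80.0, 10.0)], face_area=1e-4)
```

The sketch makes the model's structure visible: erosion grows with the velocity raised to the exponent b, and the angle function weights each impact by how damaging its incidence angle is.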
2.3. Modeling of Axial Fan
Geometric construction and meshing were performed with UG and GAMBIT. The bionic configurations were created on the curved blade surface by projection along the surface normal direction, surface biasing, and so forth. Figure 2 shows the geometric structure of the axial fan blades.
2.4. Mesh Generation
Different mesh types and sizes were used in each region, because the structure of each part of the axial fan and the local flow patterns differ. The longer inlet and outlet segments have a simple structure and relatively stable flow; hence, a hexahedral grid was selected for these two regions. In the blade region the airflow rotates strongly and the flow is quite complex; meanwhile, the structure of the blades is more complex still, especially that of the bionic blade. Therefore, an unstructured grid, which adapts well to complex geometry, was selected there, and the grid on the blade surfaces was refined. Table 1 lists the results of mesh generation.
2.5. Calculation Model and Boundary Conditions
A pressure-inlet boundary condition was used at the entrance of the fan, and a pressure-outlet boundary condition at the exit. The turbulence parameters were specified in terms of turbulence intensity and hydraulic diameter.
Air flows in the tunnel with entrained solid particles at a velocity of 11.6 m/s. The injection type was set to surface. Solid particles with a density of 1500 kg/m^3 were released from the inlet with an initial velocity of 11.6 m/s, assuming no slip between the particles and the fluid. The particle diameters were 20 μm, 50 μm, 100 μm, 150 μm, and 300 μm, and the mass flow rate was 2 kg/s. The internal flow field of the fan was assumed to be incompressible steady flow. The RNG k-ε model was used as the turbulence model, with the standard wall function method near the wall. A no-slip condition was assumed at solid walls, with a wall roughness constant of 0.5. The interface boundary condition was applied at the interfaces of the fluid regions. The SIMPLEC algorithm was used for velocity-pressure coupling. The differential equations were discretized by a first-order upwind differencing scheme. A convergence criterion of 10^−3 for each scaled residual component was specified for the relative error between two successive iterations.
Two-phase coupling was enabled in the settings panel of the discrete phase model. The reflect boundary condition was applied at the wall, with the rebound model given by (13). The erosion rate was calculated using (14); this model, for coal ash particles impacting steel, was implemented through a UDF. In (13), $v_n$ and $v_t$ are the normal and tangential components of the particle velocity at the collision surface, respectively, in m/s; the subscripts 1 and 2 denote the values before and after the collision, respectively; $\alpha_1$ is the angle between the particle velocity and the surface tangent before the collision, and $\alpha_2$ is the rebound angle of the particle after the collision, in rad.
Experimental measurements reported by Hamed et al. [19, 20] indicated that the erosion of a target material depends on the particle impact velocity and its impingement angle. Measurements were obtained for coal ash particles impacting steel at different impact velocities and impingement angles, and the experimental data were used to establish the empirical equation (14) for the erosion mass parameter $\varepsilon$, which is defined as the ratio of the eroded mass of the target material to the mass of the impinging particles. In (14), $V$ and $\beta$ are the impact velocity (m/s) and impingement angle (°), respectively, $\beta_{\max}$ is the angle of maximum erosion (°), and $R_T$ is the tangential recovery ratio; the angle of maximum erosion and the material constants were taken from the fits in [19, 20].
3. Results and Discussion
3.1. Numerical Simulation Results
The simulation results for the erosion rates of the three kinds of blades are shown as a histogram in Figure 3. The bionic blades exhibited better erosion resistance than the smooth-surface blades, and the groove-surface blades showed the best erosion resistance of the three. It is concluded that the groove surface morphology is the best of the tested conditions for improving the erosion resistance of the components.
3.2. Gas-Phase Flow Analysis
Figures 4, 5, and 6 show the flow velocity contours and flow path lines in the surface region (cross-section) for the three types of blades. As can be seen from the velocity contours, the flow velocities are higher near the smooth surface than near the convex and groove surfaces. In the groove channel in particular, the flow velocity is significantly lower than over the smooth surface. Comparing the flow path lines, those over the smooth surface are smooth, while those over the convex surface are deflected to a certain degree, indicating that the air flow is disturbed by the convex hulls. In the groove channel, however, the groove surface has a great influence on the airflow: the air rotates within the channel, forming a stable low-speed reverse flow zone [16, 21].
The special flow pattern in the groove has a significant influence on the erosion resistance of the groove surface. The rotating flow in the groove acts as an "air cushion." On the one hand, the grooves enhance fluid turbulence, which changes the flow field around the groove surface and subsequently the particle motion pattern. Some particles leave the surface along with the air flow without impacting; these particles would have impacted the surface had it been smooth. The number of particles impacting the surface is therefore decreased. On the other hand, because the flow velocity decreases, the velocities of the particles in the two-phase flow decrease as well. The rotating flow in the groove absorbs part of the particle energy that would otherwise go into impact, so the energy delivered in each impact is correspondingly reduced. These features all help to reduce particle impact damage on the blade surface and reduce erosion wear [21].
3.3. Particles Impact Velocity Analysis
The particle impact velocity is an important impingement variable that influences the erosion behavior of materials. The dependence of the erosion rate $E$ on the impact velocity $v$ is expressed by

$E = K v^n,$

where $n$ is the velocity exponent and $K$ is a constant. The velocity exponent is usually in the range of 2-3 for ductile materials, while for brittle materials it can be much higher. Hence, the erosion rate increases significantly with increasing impact velocity.
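A small numerical illustration (mine, not the paper's) makes the power law concrete: because the constant K cancels in ratios, the relative erosion between two impact velocities depends only on the velocity exponent.

```python
# Consequence of the power law E = K * v**n: K cancels in ratios, so
# relative erosion between two impact velocities needs only the exponent.
# n = 2.6 is the velocity exponent quoted earlier in the paper.

def relative_erosion(v1, v2, n=2.6):
    """Erosion-rate ratio E(v1)/E(v2) under E = K * v**n."""
    return (v1 / v2) ** n

# Halving the impact velocity cuts erosion to roughly 16% of its value.
reduction = relative_erosion(15.0, 30.0)
```

This is why the lower impact velocities observed on the groove surface translate into a disproportionately large reduction in erosion.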
In the present work, the particle impact velocity distributions on the surfaces of the three types of blades were analyzed. The impact velocities were obtained from the FLUENT postprocessing system and recorded in intervals of 2 units. Figure 7 shows the impact velocity distribution on the blade surface. For the smooth surface, the impact velocities are concentrated near 30 m/s. For the convex surface, the impact velocities lie in the range of 5–35 m/s, mainly in the range of 25–30 m/s, with the probability of this velocity range being greater than 80%. For the groove surface, however, the impact velocities are mostly concentrated at several fixed values, such as 10 m/s, 16 m/s, and 30 m/s, with appreciable probability at 10 m/s and 16 m/s. As a result, particle impacts on the groove surface occurred at lower impact velocities than on the smooth and convex surfaces, which leads to lower erosion than for the other two types of surfaces [16].
3.4. Particles Impact Angle Analysis
The particle impact angle has an important effect on the erosion rate. The maximum erosion of ductile materials occurs at angles between 20° and 30° [17]. The probability distribution of particle impact angles on the blade surface is shown in Figure 8. On the smooth surface, all the impact angles are concentrated near 30°. On the convex surface, the impact angles are distributed over the range of 0–90°, with more than 60% probability in the range of 25–30°. For the groove surface, on the other hand, most of the impact angles fall in the range of 50–65°. Therefore, compared with the smooth and convex surfaces, the impacts on the groove surface occurred at high impact angles, which lead to lower erosion in ductile materials. This is another reason why the groove surface exhibits better erosion resistance [16].
3.5. Analysis of Particle Trajectories and Collision Times
Figures 9, 10, and 11 show the trajectories of 10 particles from the incident source under the given simulation conditions. Sampling analysis shows that the numbers of collisions of 4240 particles from the incident source with the smooth, convex, and groove blades were 291, 332, and 452, respectively. The corresponding numbers of collisions per unit area on the three blade surfaces were calculated to be 0.116, 0.115, and 0.113 /mm^2.
It can be seen that the per-unit-area impact counts on the groove and convex surfaces were lower than on the smooth surface, with the groove surface showing the lowest count. Hence, the reduction of impacts on the groove surface contributes, to a certain degree, to the decrease in erosion wear.
4. Conclusions
In the present work, continuous-discrete phase models were used to predict the erosion of the three kinds of blades. The conclusions are as follows.
The groove-surface blades showed the best erosion resistance of the three kinds of blades. The velocity contours show that the flow velocities are higher near the smooth surface than near the convex and groove surfaces, and that the groove surface has a great influence on the airflow. The particle impact velocity on the biomimetic groove blade is lower than on the smooth and convex blades. Impacts on the biomimetic groove blade occurred at high impact angles, at which the material is relatively less susceptible to impact damage, whereas collisions on the smooth and convex-hull-shaped blades occurred in the erosion-prone low-angle region. The per-unit-area impact counts on the groove and convex surfaces were lower than on the smooth surface, with the groove surface showing the lowest count.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the Natural Science Foundation of China (nos. 51175220 and 51205161), the Specialized Research Fund for the Doctoral Program of Higher Education (nos. 20100061110023 and 20120061120051), the China Postdoctoral Science Foundation, 51st Grant Program (2012M511345), the Projects of Cooperation and Innovation of National Potential Oil and Gas for Production and Research (no. OSR-04-04), the Scientific and Technological Development Project of Jilin Province (no. 20130522066), and the Basic Scientific Research Expenses Project of Jilin University.
Re: implementation of boschloo's test: very slow execution
From "Eva Poen" <eva.poen@gmail.com>
To Statalist <statalist@hsphsun2.harvard.edu>
Subject Re: implementation of boschloo's test: very slow execution
Date Mon, 3 Mar 2008 10:00:09 +0000
Thank you very much for your program to calculate the Boschloo test.
What a great idea to employ -ml- for the task; I'd never have thought
of it myself.
2008/2/29, Joseph Coveney <jcoveney@bigplanet.com>: [excerpts]
> The approach below stays within Stata 9.2 and speeds things up without going
> to Mata. With it, execution time for Eva's 30-minute example is down to
> about 40 seconds on my machine.
This is an amazing improvement over what I have now, especially
since there is no trade-off between execution time and precision, as
there is in my implementation.
> Experience might warrant lowering or raising
> the hard-coded default of 100--a repeat of 50 worked for Eva's example, but
> for insurance I've doubled that for the default. A repeat of 10 doesn't
> work reliably: it resulted in -ml- converging on a local maximum the first
> time (start-up random number seed) and on the global maximum in a second run
> with the next-in-line random number seed.
I will recalculate all p-values using your program and play around
with this default, to see what difference it makes in various
situations. If I can find a reliable pattern, I will report back.
> The -ml search- bounds and -ml
> plot- bounds are both set for a theta from 0.01 to 0.5 by default, although
> these are both options for the user. (At least for Eva's example, a plot
> throughout the entire range of theta is symmetric about theta = 0.5, so that
> the supremum is present twice in the set. I don't know whether that's true
> in general, and so the bounds are left as options.
The function appears to be symmetric in many situations. However, if
the original Fisher's test p-value is very large, and thus the
Boschloo p-value is going to be large as well, the function is not
necessarily symmetric around 0.5, and making that assumption would
result in an inaccurately low p-value.
> There are two ado-files below, -boschloo.ado- and -boschloo_ll.ado-. They
> must be saved separately where Stata can find them (-help adopath-); the
> former calls the latter. There is also an example do-file that runs Eva's
> example* with -ml search- (default) and -ml plot- options, as well as
> illustrates user-set bounds. -if-, -in-, and -nolog- options are available,
> but aren't illustrated.
Fantastic implementation, Joseph; I'm thrilled. I will make an
immediate version of your command as soon as I have a little bit of
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
Modulatory space
Readers should be aware that the term "modulatory space" is not a standard music-theoretical term. The spaces described in this article are pitch class spaces which model the relationships between pitch classes in some musical system. These models are often graphs, groups, or lattices. Closely related to pitch class space is pitch space, which represents pitches rather than pitch classes, and chordal space, which models relationships between chords.
Circular Pitch Class Space
The simplest pitch space model is the real line. Fundamental frequencies f are mapped to numbers p according to the equation
$p = 69 + 12\log_2(f/440)$
This creates a linear space in which octaves have size 12, semitones (the distance between adjacent keys on the piano keyboard) have size 1, and middle C is assigned the number 60. To create circular
pitch class space we identify or "glue together" pitches p and p + 12. The result is a continuous, circular pitch class space that mathematicians call Z/12Z.
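The mapping and the quotient down to Z/12Z can be written directly. This short Python sketch (an illustration, not part of the article) checks that octave-related frequencies share a pitch class:

```python
import math

# The mapping p = 69 + 12*log2(f/440), and the quotient to circular
# pitch class space Z/12Z obtained by gluing p to p + 12.

def pitch_number(freq_hz):
    return 69.0 + 12.0 * math.log2(freq_hz / 440.0)

def pitch_class(freq_hz):
    return pitch_number(freq_hz) % 12.0   # identify p with p + 12

# A440 maps to 69 and its octave A880 to 81, an octave of size 12;
# both share pitch class 9.
```

Middle C, assigned the number 60 in the text, corresponds to a frequency of 440 * 2**(-9/12) ≈ 261.63 Hz under this mapping.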
Circles of generators
Other models of pitch class space, such as the circle of fifths, attempt to describe the special relationship between pitch classes related by perfect fifth. In equal temperament, twelve successive fifths equate to exactly seven octaves, and hence in terms of pitch classes the chain closes back on itself, forming a circle. Abstractly, this circle is a cyclic group of order twelve, and may be identified with the residue classes modulo twelve.

If we divide the octave into n equal parts and choose an integer m relatively prime to n, we may obtain similar circles, which all have the structure of finite cyclic groups. By drawing a line between two pitch classes whenever they differ by a generator, we can depict the circle of generators as a cyclic graph, in the shape of a regular polygon.
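The cyclic-group structure is easy to check computationally. In this sketch (mine, not the article's), stepping by a generator of m parts in an n-fold octave division visits every pitch class exactly when gcd(m, n) = 1:

```python
from math import gcd

# Repeatedly stepping by a generator of m parts in an n-fold octave
# division. The cycle covers all n pitch classes iff gcd(m, n) == 1,
# as the text requires of a generator.

def circle_of_generator(n, m):
    return [(k * m) % n for k in range(n)]

fifths = circle_of_generator(12, 7)   # the circle of fifths in 12-ET
```

With a non-coprime step such as m = 4 in n = 12, the cycle collapses onto only n/gcd(m, n) = 3 pitch classes (an augmented triad) instead of all twelve.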
Toroidal modulatory spaces
If we divide the octave into n parts, where n = rs is the product of two relatively prime integers r and s, we may represent every element of the tone space as the product of a certain number of "r" generators times a certain number of "s" generators; in other words, as the direct sum of two cyclic groups of orders r and s. We may now define a graph with n vertices on which the group acts, by adding an edge between two pitch classes whenever they differ by either an "r" generator or an "s" generator (the so-called Cayley graph of $\mathbb{Z}_{12}$ with generators r and s). The result is a graph of genus one, which is to say, a graph with a donut or torus shape. Such a graph is called a toroidal graph.
An example is equal temperament; twelve is the product of 3 and 4, and we may represent any pitch class as a combination of thirds of an octave, or major thirds, and fourths of an octave, or minor
thirds, and then draw a toroidal graph by drawing an edge whenever two pitch classes differ by a major or minor third.
We may generalize immediately to any number of relatively prime factors, producing graphs that can be drawn in a regular manner on an n-torus.
Chains of generators
A linear temperament is a regular temperament of rank two generated by the octave and another interval, commonly called "the" generator. The terminology is due to Erv Wilson, and the most familiar
example by far is meantone temperament, whose generator is a flattened, meantone fifth. The pitch classes of any linear temperament can be represented as lying along an infinite chain of generators;
in meantone for instance this would be -F-C-G-D-A- etc. This defines a linear modulatory space.
Cylindrical modulatory spaces
A temperament of rank two which is not linear has one generator which is a fraction of an octave, called the period. We may represent the modulatory space of such a temperament as n chains of generators arranged in a circle, forming a cylinder. Here n is the number of periods in an octave.
For example, diaschismic temperament is the temperament which tempers out the diaschisma, 2048/2025. It can be represented as two chains of slightly (3.25 to 3.55 cents) sharp fifths a half-octave apart, which can be depicted as two chains perpendicular to a circle and on opposite sides of it. The cylindrical appearance of this sort of modulatory space becomes more apparent when the period is a smaller fraction of an octave; for example, ennealimmal temperament has a modulatory space consisting of nine chains of minor thirds in a circle (where the thirds may be 0.02 to 0.03 cents sharp, in case you think it really matters).
Five-limit modulatory space
Five limit just intonation has a modulatory space based on the fact that its pitch classes can be represented by 3^a 5^b, where a and b are integers. It is therefore a free abelian group with the two
generators 3 and 5, and can be represented in terms of a square lattice with fifths along the horizontal axis, and major thirds along the vertical axis.
In many ways a more enlightening picture emerges if we represent it in terms of a hexagonal lattice instead; this is the Tonnetz of Hugo Riemann, discovered independently around the same time by
Shohé Tanaka. The fifths are along the horizontal axis, and the major thirds point off to the right at an angle of sixty degrees. Another sixty degrees gives us the axis of major sixths, pointing off
to the left. The non-unison elements of the 5-limit tonality diamond, 3/2, 5/4, 5/3, 4/3, 8/5, 6/5 are now arranged in a regular hexagon around 1. The triads are the equilateral triangles of this
lattice, with the upwards-pointing triangles being major triads, and downward-pointing triangles being minor triads.
This picture of five-limit modulatory space is generally preferable since it treats the consonances in a uniform way, and does not suggest that, for instance, a major third is more of a consonance
than a major sixth. When two lattice points are as close as possible, a unit distance apart, then and only then are they separated by a consonant interval. Hence the hexagonal lattice provides a
superior picture of the structure of the five-limit modulatory space.
In more abstract mathematical terms, we can describe this lattice as the integer pairs (a, b), where instead of the usual Euclidean distance we have a Euclidean distance defined in terms of the
vector space norm
$\|(a, b)\| = \sqrt{a^2 + ab + b^2}.$
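This norm is easy to verify computationally. The sketch below (an illustration, not from the article) confirms the claim that the six non-unison elements of the 5-limit tonality diamond all sit at unit distance from the unison under the hexagonal-lattice norm:

```python
import math

# The hexagonal-lattice norm on five-limit pitch classes 3**a * 5**b,
# with lattice coordinates (a, b).

def hex_norm(a, b):
    return math.sqrt(a * a + a * b + b * b)

# Non-unison elements of the 5-limit tonality diamond as (a, b) pairs:
diamond = {
    "3/2": (1, 0), "5/4": (0, 1), "5/3": (-1, 1),
    "4/3": (-1, 0), "8/5": (0, -1), "6/5": (1, -1),
}
```

All six pairs have norm exactly 1, which is the sense in which the hexagonal picture "treats the consonances in a uniform way": consonant intervals, and only they, are nearest neighbors of the unison.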
Seven-limit modulatory space
In similar fashion, we can define a modulatory space for seven-limit just intonation, by representing 3^a 5^b 7^c in terms of a corresponding cubic lattice. Once again, a more enlightening picture emerges if we represent it instead in terms of the three-dimensional analog of the hexagonal lattice, a lattice called A_3, which is equivalent to the face-centered cubic lattice, D_3. Abstractly, it can be defined as the integer triples (a, b, c), associated to 3^a 5^b 7^c, where the distance measure is not the usual Euclidean distance but rather the Euclidean distance deriving from the vector space norm

$\|(a, b, c)\| = \sqrt{a^2 + b^2 + c^2 + ab + bc + ca}.$
In this picture, the twelve non-unison elements of the seven-limit tonality diamond are arranged around 1 in the shape of a cuboctahedron.
References
• Riemann, Hugo, Ideen zu einer Lehre von den Tonvorstellungen, Jahrbuch der Musikbibliothek Peters, (1914/15), Leipzig 1916, pp. 1-26.
Further reading
• Cohn, Richard, Introduction to Neo-Riemannian Theory: A Survey and a Historical Perspective, The Journal of Music Theory, (1998) 42(2), pp. 167-80
• Lerdahl, Fred (2001). Tonal Pitch Space, pp. 42-43. Oxford: Oxford University Press. ISBN 0-19-505834-8.
• Lubin, Steven, 1974, Techniques for the Analysis of Development in Middle-Period Beethoven, Ph. D. diss., New York University, 1974
What is a Voronoi diagram in R^d?
See also 3.3.
Given a set $S$ of $n$ distinct points in $\mathbb{R}^d$, the Voronoi diagram of $S$ is the partition of $\mathbb{R}^d$ into $n$ polyhedral regions, one for each point of $S$. The region associated with $p \in S$, called the Voronoi cell of $p$, is the set of points in $\mathbb{R}^d$ that are at least as close to $p$ as to any other point in $S$:

$\mathrm{vo}(p) = \{x \in \mathbb{R}^d : \|x - p\| \le \|x - q\| \text{ for all } q \in S\}.$

The set of all Voronoi cells and their faces forms a cell complex. The vertices of this complex are called the Voronoi vertices, and the extreme rays (i.e. unbounded edges) are the Voronoi rays. For each point $v \in \mathbb{R}^d$, its nearest neighbor set $\mathrm{nb}(S, v)$ is the set of points of $S$ closest to $v$ in Euclidean distance; a point $v$ is a Voronoi vertex of the diagram exactly when its nearest neighbor set is maximal over all nearest neighbor sets.

In order to compute the Voronoi diagram, the following "lifting" construction is very important. For each point $p \in S$, consider the hyperplane in $\mathbb{R}^{d+1}$ tangent to the paraboloid $x_{d+1} = x_1^2 + \cdots + x_d^2$ at the point lifted from $p$; this hyperplane is given by

$x_{d+1} = 2\,p \cdot x - \|p\|^2.$

By replacing the equality with the inequality $x_{d+1} \ge 2\,p \cdot x - \|p\|^2$ for each $p \in S$, we obtain a system of $n$ inequalities whose solution set is a convex polyhedron in $\mathbb{R}^{d+1}$; projecting the facial structure of this polyhedron back down to $\mathbb{R}^d$ yields the Voronoi diagram of $S$.
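The defining inequality can also be exercised directly by brute force. This Python sketch (illustrative only; practical computations use the lifted polyhedron or a computational-geometry library) assigns sample points to Voronoi cells by nearest site:

```python
# Brute-force illustration of the Voronoi cell definition: a point x
# belongs to the cell of the site it is (weakly) closest to.

def nearest_site(x, sites):
    """Index of the site of `sites` closest to point `x` (squared distance)."""
    return min(range(len(sites)),
               key=lambda i: sum((xi - si) ** 2 for xi, si in zip(x, sites[i])))

sites = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
# (1,1) lies in the cell of site 0, (3,1) in that of site 1,
# and (1,3) in that of site 2.
labels = [nearest_site(p, sites) for p in [(1, 1), (3, 1), (1, 3)]]
```

Classifying a fine grid of points this way paints an approximate picture of the cells; the polyhedral structure itself comes from the lifting construction described above.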
Komei Fukuda, 2004-08-26
ieee_arithmetic: IEEE Arithmetic Facilities Module
Table of Contents
1 Name
ieee_arithmetic — Intrinsic module providing IEEE arithmetic facilities
2 Usage
This module provides various facilities related to IEEE arithmetic.
The contents of this module conform to technical report ISO/IEC TR 15580:1998(E).
3 Synopsis
Derived Types
IEEE_CLASS_TYPE, IEEE_FLAG_TYPE (from IEEE_EXCEPTIONS), IEEE_ROUND_TYPE, IEEE_STATUS_TYPE (from IEEE_EXCEPTIONS).
Named Constants
IEEE_ALL (from IEEE_EXCEPTIONS), IEEE_DIVIDE_BY_ZERO (from IEEE_EXCEPTIONS), IEEE_DOWN, IEEE_INEXACT (from IEEE_EXCEPTIONS), IEEE_INVALID (from IEEE_EXCEPTIONS), IEEE_NEAREST, IEEE_NEGATIVE_DENORMAL, IEEE_NEGATIVE_INF, IEEE_NEGATIVE_NORMAL, IEEE_NEGATIVE_ZERO, IEEE_OTHER, IEEE_OVERFLOW (from IEEE_EXCEPTIONS), IEEE_POSITIVE_DENORMAL, IEEE_POSITIVE_INF, IEEE_POSITIVE_NORMAL, IEEE_POSITIVE_ZERO, IEEE_QUIET_NAN, IEEE_SIGNALING_NAN, IEEE_TO_ZERO, IEEE_UNDERFLOW (from IEEE_EXCEPTIONS), IEEE_UP, IEEE_USUAL (from IEEE_EXCEPTIONS).
Operators
==, /=.
Procedures
IEEE_CLASS, IEEE_COPY_SIGN, IEEE_GET_FLAG (from IEEE_EXCEPTIONS), IEEE_GET_HALTING_MODE (from IEEE_EXCEPTIONS), IEEE_GET_ROUNDING_MODE, IEEE_GET_STATUS (from IEEE_EXCEPTIONS), IEEE_IS_FINITE, IEEE_IS_NAN, IEEE_IS_NEGATIVE, IEEE_IS_NORMAL, IEEE_LOGB, IEEE_NEXT_AFTER, IEEE_REM, IEEE_RINT, IEEE_SCALB, IEEE_SELECTED_REAL_KIND, IEEE_SET_FLAG (from IEEE_EXCEPTIONS), IEEE_SET_HALTING_MODE (from IEEE_EXCEPTIONS), IEEE_SET_ROUNDING_MODE, IEEE_SET_STATUS (from IEEE_EXCEPTIONS), IEEE_SUPPORT_DATATYPE, IEEE_SUPPORT_DENORMAL, IEEE_SUPPORT_DIVIDE, IEEE_SUPPORT_FLAG (from IEEE_EXCEPTIONS), IEEE_SUPPORT_HALTING (from IEEE_EXCEPTIONS), IEEE_SUPPORT_INF, IEEE_SUPPORT_NAN, IEEE_SUPPORT_ROUNDING, IEEE_SUPPORT_SQRT, IEEE_SUPPORT_STANDARD, IEEE_UNORDERED, IEEE_VALUE.
4 Derived-Type Description
TYPE IEEE_CLASS_TYPE
END TYPE
Type for specifying the class of a number. Its only possible values are those of the named constants exported by this module.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_FLAG_TYPE
See IEEE_EXCEPTIONS for a description of this type.
TYPE IEEE_ROUND_TYPE
END TYPE
Type for specifying the rounding mode. Its only possible values are those of the named constants exported by this module.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_STATUS_TYPE
See IEEE_EXCEPTIONS for a description of this type.
5 Parameter Description
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_ALL
See IEEE_EXCEPTIONS for a description of this parameter.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_DIVIDE_BY_ZERO
See IEEE_EXCEPTIONS for a description of this parameter.
TYPE(IEEE_ROUND_TYPE),PARAMETER :: IEEE_DOWN
The rounding mode in which the results of a calculation are rounded to the nearest machine-representable number that is less than the true result.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_INEXACT
See IEEE_EXCEPTIONS for a description of this parameter.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_INVALID
See IEEE_EXCEPTIONS for a description of this parameter.
TYPE(IEEE_ROUND_TYPE),PARAMETER :: IEEE_NEAREST
The rounding mode in which the results of a calculation are rounded to the nearest machine-representable number.
TYPE(IEEE_CLASS_TYPE),PARAMETER :: IEEE_NEGATIVE_DENORMAL
A negative number whose precision is less than that of the normal numbers; the result of an IEEE gradual underflow.
TYPE(IEEE_CLASS_TYPE),PARAMETER :: IEEE_NEGATIVE_INF
Negative infinity.
TYPE(IEEE_CLASS_TYPE),PARAMETER :: IEEE_NEGATIVE_NORMAL
A normal negative number.
TYPE(IEEE_CLASS_TYPE),PARAMETER :: IEEE_NEGATIVE_ZERO
Negative zero.
TYPE(IEEE_ROUND_TYPE),PARAMETER :: IEEE_OTHER
Any processor-dependent rounding mode other than IEEE_DOWN, IEEE_NEAREST, IEEE_TO_ZERO and IEEE_UP.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_OVERFLOW
See IEEE_EXCEPTIONS for a description of this parameter.
TYPE(IEEE_CLASS_TYPE),PARAMETER :: IEEE_POSITIVE_DENORMAL
A positive number whose precision is less than that of the normal numbers; the result of an IEEE gradual underflow.
TYPE(IEEE_CLASS_TYPE),PARAMETER :: IEEE_POSITIVE_INF
Positive infinity.
TYPE(IEEE_CLASS_TYPE),PARAMETER :: IEEE_POSITIVE_NORMAL
A normal positive number.
TYPE(IEEE_CLASS_TYPE),PARAMETER :: IEEE_POSITIVE_ZERO
Positive zero.
TYPE(IEEE_CLASS_TYPE),PARAMETER :: IEEE_QUIET_NAN
A “Not-a-Number” value that propagates through arithmetic operations but which does not necessarily raise the IEEE_INVALID exception on use.
TYPE(IEEE_CLASS_TYPE),PARAMETER :: IEEE_SIGNALING_NAN
A “Not-a-Number” that raises the IEEE_INVALID exception on use.
TYPE(IEEE_ROUND_TYPE),PARAMETER :: IEEE_TO_ZERO
The rounding mode in which the results of a calculation are rounded to the nearest machine-representable number that lies between zero and the true result.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_UNDERFLOW
See IEEE_EXCEPTIONS for a description of this parameter.
TYPE(IEEE_ROUND_TYPE),PARAMETER :: IEEE_UP
The rounding mode in which the results of a calculation are rounded to the nearest machine-representable number that is greater than the true result.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_USUAL
See IEEE_EXCEPTIONS for a description of this parameter.
6 Operator Description
In addition to ISO/IEC TR 15580:1998(E), the module IEEE_ARITHMETIC defines the ‘==’ and ‘/=’ operators for the IEEE_CLASS_TYPE. These may be used to test the return value of the IEEE_CLASS function.
USE,INTRINSIC :: IEEE_ARITHMETIC, ONLY: IEEE_CLASS, &
IEEE_QUIET_NAN, OPERATOR(==)
IF (IEEE_CLASS(X)==IEEE_QUIET_NAN) THEN
7 Procedure Description
ELEMENTAL TYPE(IEEE_CLASS_TYPE) FUNCTION IEEE_CLASS(X)
REAL(any kind),INTENT(IN) :: X
Returns the IEEE class that the value of X falls into.
ELEMENTAL REAL(kind) FUNCTION IEEE_COPY_SIGN(X,Y)
REAL(kind),INTENT(IN) :: X
REAL(kind),INTENT(IN) :: Y
Returns the value of X with the sign of Y. The result has the same kind as X.
This function is only available if IEEE_SUPPORT_DATATYPE(X) and IEEE_SUPPORT_DATATYPE(Y) are both true.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_GET_FLAG
See IEEE_EXCEPTIONS for a description of this procedure.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_GET_HALTING_MODE
See IEEE_EXCEPTIONS for a description of this procedure.
SUBROUTINE IEEE_GET_ROUNDING_MODE(ROUND_VALUE)
TYPE(IEEE_ROUND_TYPE),INTENT(OUT) :: ROUND_VALUE
Sets ROUND_VALUE to the current rounding mode, one of IEEE_DOWN, IEEE_NEAREST, IEEE_OTHER, IEEE_TO_ZERO or IEEE_UP.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_GET_STATUS
See IEEE_EXCEPTIONS for a description of this procedure.
ELEMENTAL LOGICAL FUNCTION IEEE_IS_FINITE(X)
REAL(any kind),INTENT(IN) :: X
Returns true if X is a finite number, i.e. neither an infinity nor a NaN.
ELEMENTAL LOGICAL FUNCTION IEEE_IS_NAN(X)
REAL(any kind),INTENT(IN) :: X
Returns true if X is a NaN (either quiet or signalling).
ELEMENTAL LOGICAL FUNCTION IEEE_IS_NEGATIVE(X)
REAL(any kind),INTENT(IN) :: X
Returns true if X is negative, even for negative zero.
ELEMENTAL LOGICAL FUNCTION IEEE_IS_NORMAL(X)
REAL(any kind),INTENT(IN) :: X
Returns true if X is normal, i.e. not an infinity, a NaN, or denormal.
ELEMENTAL REAL(kind) FUNCTION IEEE_LOGB(X)
REAL(kind),INTENT(IN) :: X
Returns the unbiased exponent of X. For normal, non-zero numbers this is the same as EXPONENT(X)-1; for zero, IEEE_DIVIDE_BY_ZERO is signalled and the result is negative infinity (or -HUGE(X) if
negative infinity is not available); for an infinity the result is positive infinity; for a NaN the result is a quiet NaN.
ELEMENTAL REAL(kind) FUNCTION IEEE_NEXT_AFTER(X,Y)
REAL(kind),INTENT(IN) :: X
REAL(kind),INTENT(IN) :: Y
Returns the closest machine-representable number to X (of the same kind as X) that is either greater than X (if X<Y) or less than X (if X>Y). If X and Y are equal, X is returned.
ELEMENTAL REAL(kind) FUNCTION IEEE_REM(X,Y)
REAL(kind),INTENT(IN) :: X
REAL(kind),INTENT(IN) :: Y
The result value is the exact remainder from the division X/Y, viz X-Y*N where N is the nearest integer to the true result of X/Y.
ELEMENTAL REAL(kind) FUNCTION IEEE_RINT(X)
REAL(kind),INTENT(IN) :: X
X rounded to an integer value according to the current rounding mode.
ELEMENTAL REAL(kind) FUNCTION IEEE_SCALB(X,I)
REAL(kind),INTENT(IN) :: X
INTEGER(any kind),INTENT(IN) :: I
The result is X*2**I without computing 2**I, with overflow or underflow exceptions signalled only if the end result overflows or underflows.
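Several of these operations have close analogues in other languages' IEEE support, which can make their semantics concrete. The following sketch is an illustration only — it uses Python's math module, not the Fortran module documented here — and mirrors the behaviour of IEEE_COPY_SIGN, IEEE_REM and IEEE_SCALB:

```python
import math

# IEEE_COPY_SIGN(X, Y): the value of X with the sign of Y.
assert math.copysign(3.0, -1.0) == -3.0

# IEEE_REM(X, Y): X - Y*N, where N is the integer nearest to X/Y.
# Here 7/4 = 1.75, so N = 2 and the remainder is 7 - 8 = -1.
assert math.remainder(7.0, 4.0) == -1.0

# IEEE_SCALB(X, I): X * 2**I computed by exponent manipulation,
# without forming 2**I.
assert math.ldexp(1.5, 3) == 12.0
```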
INTEGER FUNCTION IEEE_SELECTED_REAL_KIND(P,R,RADIX)
INTEGER(any kind),INTENT(IN),OPTIONAL :: P
INTEGER(any kind),INTENT(IN),OPTIONAL :: R
INTEGER(any kind),INTENT(IN),OPTIONAL :: RADIX
The same as the intrinsic function SELECTED_REAL_KIND(P,R,RADIX), but only returns numbers of kinds for which IEEE_SUPPORT_DATATYPE returns true.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_SET_FLAG
See IEEE_EXCEPTIONS for a description of this procedure.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_SET_HALTING_MODE
See IEEE_EXCEPTIONS for a description of this procedure.
SUBROUTINE IEEE_SET_ROUNDING_MODE(ROUND_VALUE)
TYPE(IEEE_ROUND_TYPE),INTENT(IN) :: ROUND_VALUE
Sets the current rounding mode to ROUND_VALUE. This is only allowed when IEEE_SUPPORT_ROUNDING(ROUND_VALUE,X) is true for all X such that IEEE_SUPPORT_DATATYPE(X) is true.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_SET_STATUS
See IEEE_EXCEPTIONS for a description of this procedure.
LOGICAL FUNCTION IEEE_SUPPORT_DATATYPE(X)
REAL(any kind),INTENT(IN),OPTIONAL :: X
Returns true if and only if all reals (if X is absent), or reals of the same kind as X conform to the IEEE standard for representation, addition, subtraction and multiplication when the operands and
results have normal values.
LOGICAL FUNCTION IEEE_SUPPORT_DENORMAL(X)
REAL(any kind),INTENT(IN),OPTIONAL :: X
Returns true if and only if IEEE denormalised values are supported for all real kinds (if X is absent) or for reals of the same kind as X.
LOGICAL FUNCTION IEEE_SUPPORT_DIVIDE(X)
REAL(any kind),INTENT(IN),OPTIONAL :: X
Returns true if and only if division on all reals (if X is absent) or on reals of the same kind as X is performed to the accuracy required by the IEEE standard.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_SUPPORT_FLAG
See IEEE_EXCEPTIONS for a description of this procedure.
USE,INTRINSIC :: IEEE_EXCEPTIONS,ONLY:IEEE_SUPPORT_HALTING
See IEEE_EXCEPTIONS for a description of this procedure.
LOGICAL FUNCTION IEEE_SUPPORT_INF(X)
REAL(any kind),INTENT(IN),OPTIONAL :: X
Returns true if and only if IEEE infinities are supported for all reals (if X is absent) or for reals of the same kind as X.
LOGICAL FUNCTION IEEE_SUPPORT_NAN(X)
REAL(any kind),INTENT(IN),OPTIONAL :: X
Returns true if and only if IEEE NaNs are supported for all reals (if X is absent) or for reals of the same kind as X.
LOGICAL FUNCTION IEEE_SUPPORT_ROUNDING(ROUND_VALUE,X)
TYPE(IEEE_ROUND_TYPE),INTENT(IN) :: ROUND_VALUE
REAL(any kind),INTENT(IN),OPTIONAL :: X
Returns true if and only if the rounding mode for all reals (if X is absent), or reals of the same kind as X, can be changed to the specified rounding mode by the IEEE_SET_ROUNDING_MODE procedure.
LOGICAL FUNCTION IEEE_SUPPORT_SQRT(X)
REAL(any kind),INTENT(IN),OPTIONAL :: X
Returns true if and only if the SQRT intrinsic conforms to the IEEE standard for all reals (if X is absent) or for reals of the same kind as X.
LOGICAL FUNCTION IEEE_SUPPORT_STANDARD(X)
REAL(any kind),INTENT(IN),OPTIONAL :: X
Returns true if and only if all the other IEEE_SUPPORT inquiry functions return the value true for all reals (if X is absent) or for reals of the same kind as X.
ELEMENTAL LOGICAL FUNCTION IEEE_UNORDERED(X,Y)
REAL(any kind),INTENT(IN) :: X
REAL(any kind),INTENT(IN) :: Y
Returns IEEE_IS_NAN(X).OR.IEEE_IS_NAN(Y).
ELEMENTAL REAL(kind) FUNCTION IEEE_VALUE(X,CLASS)
REAL(kind),INTENT(IN) :: X
TYPE(IEEE_CLASS_TYPE),INTENT(IN) :: CLASS
Returns a sample value of the same kind as X that falls into the specified IEEE number class. For a given kind of X and class, the same value is always returned.
8 See Also
nagfor(1), ieee_exceptions(3), ieee_features(3), intro(3), nag_modules(3).
9 Bugs
Please report any bugs found to ‘support@nag.co.uk’ or ‘support@nag.com’, along with any suggestions for improvements.
DETERMINISM OR RANDOM? Pick a side
Here is the fundamental thing you missed:
"If you think you understand quantum physics, then you don't understand quantum physics"- Richard Feynman
It appears that your fundamental thesis here is that if we cannot positively rule out that something might be deterministic, then we should regard it as deterministic, and say we just don't
understand it. What kind of logic is that? That's not an argument, it's a philosophical commitment, to a degree that is not at all unlike a religious faith. But science is not really all that
interested in trying to shake your philosophical commitments, or your religious faiths. If you want to maintain that the universe is truly deterministic, you can always do that-- you could have done
it in Aristotle's day, in Newton's, in Einstein's, and in the year 2450, had you been alive then. But what we are really talking about here is physics, and in physics, we don't ask if we can
positively rule out determinism, we ask, what has determinism done for us lately? That's a scientific kind of question-- what does the model accomplish? Certainly determinism had its day, and
continues to be a widely successful concept. But it has also exposed some limitations, in terms of our modern physics. That's the point, not that we now know determinism can't be right (we could
never know that, how do you think we ever could?), but that we have stopped finding value in clinging to the concept.
Randomness is similar-- many physical theories used randomness as part of the theory (statistical mechanics, thermodynamics, chaotic dynamics, etc.), and have done so for centuries. These are aspects
of a theory, not aspects of reality. We don't get the latter, that's not what physics does. Yes, sometimes a statistical theory that invoked randomness turned out to be underpinned by a more
fundamental theory that invoked determinism, and sometimes a deterministic theory turned out to be underpinned by a more fundamental theory that invoked randomness. And so on-- why should we ever
expect that state of affairs to end? Are you one of the people who believes there is an "ultimate theory" that explains everything, and that this ultimate theory will have to be deterministic? On
what basis do you hold this religious faith of yours? (Oh yeah, you base it on the fact that people said Newton couldn't do what he did, etc.-- but as I said, that's quite a flimsy basis for your
logic.) The actual truth is, we have no idea, and I doubt we ever will, but that's fine because that's never what physics was about
knowing. Physics was, is, and will be, about making models, and we will invoke whatever concepts we need at the time, be they deterministic, random, or who knows what else.
Also answer this, if indeterminacy is real, then why do we participate in physics? Are we trying to understand something with no meaning?
Here we have your other main thesis: determinism is the only thing that can grant meaning to physics. That is really pretty way off target. You might not realize this, but when Newton first came out
with his deterministic laws, many physicists were very disappointed in it-- they actually said it wasn't physics at all! That's because all it did was connect the final state to the initial state--
there wasn't anything that the dynamical equations could add, if all the information was already there in the initial condition! So they said the dynamical equations weren't actually telling us
anything, they were just pushing back the "meaning" (as you put it) to the initial state, which was still unexplained! So much for the "meaning" in determinism. Of course, nowadays we don't fret that
the information is in the initial conditions, and the dynamical equations only propagate this information forward in time, because we have discovered the power in being able to do that. So we changed
our concept of what physics was supposed to be able to do, and ran with that ball.
Then the same thing happened again in quantum mechanics, except this time the "ball" we had to run with was indeterminism, and so again we changed our concept of what physics was supposed to do. And
so on. This is all perfectly natural, it's just how physics works. We have no idea where the next turn will be, but we find "meaning" all along the path-- and we have no reason whatsoever to equate
meaning with determinism, that's actually a rather limited and possibly even uneducated view (I don't mean to be harsh, I think your view is rather common) of what physics has done and can do.
Carl Gustav Jacob Jacobi
Carl Jacobi's early education was given by an uncle on his mother's side. When he was 11, he entered the Gymnasium in Potsdam. He was so advanced that while still in his first year of schooling, he
was put into the final year class. This meant that he was still only 12 years old when he had reached the necessary standard to enter a university. However the University of Berlin did not accept
students below the age of 16, so Jacobi had to remain in the same class at the Gymnasium in Potsdam until then.
Of course, Jacobi pressed on with his academic studies despite remaining in the same class at school. He received the highest awards for Latin, Greek and history but it was the study of mathematics
which he took furthest. By the time Jacobi left school he had read advanced mathematics texts including those of Euler, and had been undertaking research on his own attempting to solve quintic
equations by radicals.
Jacobi entered the University of Berlin in 1821 still unsure which topic he would concentrate on. He attended courses in philosophy, classics and mathematics for 2 years before choosing mathematics.
Jacobi continued to study on his own, reading the works of Lagrange and other leading mathematicians. By the end of 1824, Jacobi had passed the examinations necessary for him to be able to teach
mathematics, Greek, and Latin in secondary schools. Despite being Jewish, he was offered a teaching post at the Joachimsthalsche Gymnasium, one of the leading schools in Berlin.
He had submitted his doctoral dissertation to the University of Berlin, and he was allowed to move quickly to work on his habilitation thesis. Jacobi presented a paper concerning iterated functions
to the Academy of Sciences in Berlin in 1825. Around 1825 Jacobi converted to Christianity, which now made university teaching possible for him. By 1826 he was teaching at the University of Berlin,
and then the University of Königsberg. There he joined Neumann and Bessel.
Jacobi had already made major discoveries in number theory before arriving in Königsberg. Jacobi also had remarkable new ideas about elliptic functions. He wrote to several mathematicians, and Gauss
and Legendre were much impressed by his results. In 1829, Jacobi met Legendre, Fourier, Poisson, and Gauss on travels across Europe.
Jacobi and Euler were kindred spirits in the way they created their mathematics. Both were prolific writers and even more prolific calculators, both drew a great deal of insight from immense
algorithmical work, both laboured in many fields of mathematics, and both at any moment could draw from the vast armoury of mathematical methods just those weapons which would promise the best
results in the attack of a given problem.
In 1831, Jacobi was promoted to full professor after being subjected to a 4 hour oral exam. Jacobi's reputation as an excellent teacher attracted many students. He introduced the seminar method to
teach students the latest advances in mathematics.
In 1833, Jacobi's brother Moritz, a physicist, visited him in Königsberg. During the 2 years he spent there, Jacobi became more interested in physics. Jacobi carried out important research in partial
differential equations of the first order and applied them to the differential equations of dynamics. He also worked on determinants and studied the functional determinant now called the Jacobian.
Cauchy had studied the Jacobian earlier, but Jacobi wrote a long memoir devoted to the subject. He proved, among many other things, that if a set of n functions in n variables are functionally
related then the Jacobian is identically zero, while if the functions are independent the Jacobian cannot be identically zero.
In 1834, Jacobi proved that if a single-valued function of one variable is doubly periodic then the ratio of the periods is imaginary. This result prompted much further work in this area, in
particular by Liouville and Cauchy. One of the prettiest results in the global theory of curves is a theorem of Jacobi published in 1842: "The spherical image of the normal directions along a closed
differentiable curve in space divides the unit sphere into regions of equal area".
In 1842, Jacobi was diagnosed with diabetes, and was advised to spend time in Italy. The climate in Italy did indeed help Jacobi to recover and he began to publish again. He moved back to Germany,
this time Berlin. He eventually moved to Gotha, while still lecturing in Berlin. He died of smallpox.
Tribonacci Numbers
Date: 11/11/2000 at 22:11:39
From: Anonymous
Subject: Implicit formula for the nth Tribonacci number
Is there an implicit formula to calculate the nth Tribonacci number?
Also, is there a formula to find the sum of the first n Tribonacci
numbers? The only thing I know about this sequence is that each term
starting with the fourth is the sum of the previous three terms:
1, 1, 1, 3, 5, 9, 17, ...
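[For reference, the recurrence just described is straightforward to compute; a minimal illustrative sketch in Python:]

```python
def tribonacci(n):
    """n-th Tribonacci number with T(1) = T(2) = T(3) = 1."""
    t = [1, 1, 1]
    for _ in range(n - 3):
        t.append(t[-1] + t[-2] + t[-3])
    return t[n - 1]

print([tribonacci(i) for i in range(1, 8)])  # [1, 1, 1, 3, 5, 9, 17]
```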
Thank you for your help.
Date: 11/12/2000 at 06:34:17
From: Doctor Mitteldorf
Subject: Re: Implicit formula for the nth Tribonacci number
If you haven't already done so, you can start by working out the
corresponding problem for ordinary Fibonacci numbers. Read about it
Z Transforms and the Fibonacci Sequence
The procedure for Tribonacci numbers should be similar, except that
the generator is cubic instead of quadratic.
I don't have a solution for you, but I have a few thoughts that might
point in a direction of progress on the problem.
In matrix terms, you can start with the initial triple (T(1), T(2), T(3)) = (1, 1, 1)
and transform it with the matrix:
[ 0 1 0 ]
[ 0 0 1 ] = R
[ 1 1 1 ]
to get the next (overlapping) triple
Thereafter, continuing to multiply by the matrix R gives you
succeeding terms in the sequence. I'm not sure what to do with this,
but it is, perhaps, a lead.
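[The matrix idea can be tried out directly; a small sketch in Python, using plain nested lists rather than a matrix library:]

```python
R = [[0, 1, 0],
     [0, 0, 1],
     [1, 1, 1]]

def step(triple):
    # Multiply the matrix R by the column vector `triple`.
    return [sum(R[i][j] * triple[j] for j in range(3)) for i in range(3)]

triple = [1, 1, 1]      # (T1, T2, T3)
for _ in range(4):      # four applications of R
    triple = step(triple)
print(triple)           # [5, 9, 17], i.e. (T5, T6, T7)
```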
There's a relation between this problem and the cubic equation
r^3 = r^2 + r + 1
If the sequence tends for high n toward a constant ratio from one term
to the next, then the ratio r must obey this equation. And even at low
n, the equation can be interpreted as an operator or matrix equation
that relates each triple of terms to the next triple.
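[Numerically, the ratio of successive terms does settle down to the real root of that equation, about 1.8393; a quick Python check:]

```python
t = [1.0, 1.0, 1.0]
for _ in range(60):
    t.append(t[-1] + t[-2] + t[-3])

ratio = t[-1] / t[-2]    # converges to the "tribonacci constant"
assert abs(ratio - 1.8393) < 1e-3
assert abs(ratio**3 - (ratio**2 + ratio + 1)) < 1e-9
```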
Based on analogy with the Fibonacci formula, I think you might be able
to come up with a formula for the nth Tribonacci as a linear
a(r1)^n + b(r2)^n + c(r3)^n
where a, b and c are constants that can be determined by trying the
formula out on three different n values. r1, r2, and r3 are also
unknowns, but I have a feeling that they are the three roots of the
cubic equation above.
Once you've found a solution in this form, it is easy to sum the 3
geometric series to find the sum of the first n Tribonacci numbers. It
also should be possible to prove by induction that the formula works.
This gives you some leads to try - please write back and let me know
if you make any further progress.
- Doctor Mitteldorf, The Math Forum
Date: 11/13/2000 at 11:00:36
From: Doctor Anthony
Subject: Re: Implicit formula for the nth Tribonacci number
The difference equation is
u(n+3) = u(n+2) + u(n+1) + u(n)
u(n+3) - u(n+2) - u(n+1) - u(n) = 0
The solution depends upon the solution of the auxiliary equation
x^3 - x^2 - x - 1 = 0
This cubic has one real and two complex roots:
x = 1.8393
x = -0.4196 +- 0.60629i
Call these roots a and b +- i*c.
Then convert the complex roots into r, theta form. Call them R,@. The
solution of the difference equation is
u(n) = A*a^n + R^n[B*cos(n@) + C*sin(n@)]
From here you have to use the known values of u(1), u(2), u(3) to find
the values of A, B and C.
The expression for the nth term will be very complicated indeed. You
probably know that for the Fibonacci series the nth term is
u(n) = (1/sqrt(5))*[(1+sqrt(5))/2]^n - (1/sqrt(5))*[(1-sqrt(5))/2]^n
so you will understand that few people wish to struggle on for a
similar expression for the Tribonacci series.
- Doctor Anthony, The Math Forum
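[The roots quoted above can be verified by substituting them back into the auxiliary equation; a quick Python sketch using complex arithmetic — the tolerance allows for the four-to-five-digit rounding of the printed values:]

```python
roots = [1.8393,
         complex(-0.4196, 0.60629),
         complex(-0.4196, -0.60629)]

for x in roots:
    # Each should (nearly) satisfy x^3 - x^2 - x - 1 = 0.
    assert abs(x**3 - x**2 - x - 1) < 1e-2
```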
Date: 11/13/2000 at 11:24:55
From: Doctor Rob
Subject: Re: Implicit formula for the nth Tribonacci number
Thanks for writing to Ask Dr. Math.
Yes, there is such a formula. You start by finding the three roots of
the cubic equation
x^3 - x^2 - x - 1 = 0
Call them A, B and C. One is a real number and the other two are
complex conjugates. The real root is
A = (1+[19-3*33^(1/2)]^[1/3]+[19+3*33^(1/2)]^[1/3])/3
The complex ones involve (-1+-i*3^[1/2])/2, as well as cube roots as
appearing in the formula for A.
Then, if T(n) is the n-th Tribonacci number:
T(1) = T(2) = T(3) = 1
T(n+1) = T(n) + T(n-1) + T(n-2) for n >= 3
An explicit formula for T(n) has the form
T(n) = r*A^n + s*B^n + t*C^n
for constants r, s, and t which you can determine from the initial
T(1) = 1 = r*A + s*B + t*C,
T(2) = 1 = r*A^2 + s*B^2 + t*C^2,
T(3) = 1 = r*A^3 + s*B^3 + t*C^3.
This is a set of three linear equations in r, s and t, which you can
solve to find the values of r, s and t in the above formula. You end
up with complicated combinations of (19+-3*33^[1/2])^(1/3) and
(-1+-i*3^[1/2])/2. Massive simplification is possible. In fact, r, s
and t turn out to be the three roots of the cubic equation
11*x^3 + 11*x^2 + x - 1 = 0
The real one is
r = (-11+[847-33*33^(1/2)]^[1/3]+[847+33*33^(1/2)]^[1/3])/33
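[As a check, this radical expression does satisfy the cubic; a quick Python sketch:]

```python
c1 = (847 - 33 * 33**0.5) ** (1 / 3)   # both radicands are positive,
c2 = (847 + 33 * 33**0.5) ** (1 / 3)   # so real cube roots suffice
r = (-11 + c1 + c2) / 33               # about 0.2369

assert abs(r - 0.2369) < 1e-3
assert abs(11 * r**3 + 11 * r**2 + r - 1) < 1e-9
```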
To find the sum of the first n Tribonacci numbers, you will get a
similar formula.
S(n) = T(1) + T(2) + ... + T(n)
S(0) = 0
S(1) = 1
S(2) = 2
S(3) = 3
You'll have the explicit formula
S(n) = r*(A^(n+1)-A)/(A-1) + s*(B^(n+1)-B)/(B-1) + t*(C^(n+1)-C)/(C-1)
because of the formula for the sum of a geometric series, applied to
each of the three terms of the formula for T(n).
It's messier than the Fibonacci case, but all the elements are there.
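[The whole recipe can be assembled and tested numerically; the illustrative sketch below (standard-library Python) finds A, B, C, fits r, s, t to the initial conditions by Cramer's rule, and checks both formulas:]

```python
import cmath

# Real root A of x^3 - x^2 - x - 1 = 0, by Newton's method.
A = 2.0
for _ in range(100):
    A -= (A**3 - A**2 - A - 1) / (3*A**2 - 2*A - 1)

# Deflate: x^3 - x^2 - x - 1 = (x - A)(x^2 + p*x + q), then solve the
# quadratic for the complex-conjugate pair B, C.
p = A - 1.0
q = A*A - A - 1.0
d = cmath.sqrt(p*p - 4*q)
B, C = (-p + d) / 2, (-p - d) / 2

# Fit r, s, t from T(1) = T(2) = T(3) = 1 using Cramer's rule.
def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

M = [[A, B, C], [A**2, B**2, C**2], [A**3, B**3, C**3]]
D = det3(M)

def solve(j):
    # Replace column j of M with the right-hand side (1, 1, 1).
    Mj = [[1 if k == j else M[i][k] for k in range(3)] for i in range(3)]
    return det3(Mj) / D

r, s, t = solve(0), solve(1), solve(2)

def T(n):
    return (r*A**n + s*B**n + t*C**n).real

def S(n):
    return (r*(A**(n+1) - A)/(A - 1) +
            s*(B**(n+1) - B)/(B - 1) +
            t*(C**(n+1) - C)/(C - 1)).real
```

For instance, round(T(10)) recovers the tenth term, 105, and S(10) the sum of the first ten terms, 230, agreeing with direct use of the recurrence.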
- Doctor Rob, The Math Forum