- 6th Grade: to re-envision and reconstruct mathematics education here ... As the math class begins, Mr. Marshall allows the ... Mr. Santos' 6th grade class has just completed a review of ...
- Grade 6 Math Released Test Questions - Standardized Testing and ...: CALIFORNIA STANDARDS TEST GRADE Released Test Questions Math 6. Introduction - Grade 6 Mathematics: The following released test questions are taken from the Grade 6 ...
- Scope and Sequence, West Springfield Public Schools: Grade 1 Scope and Sequence. Pacing Guide for Scott Foresman 2009 enVision Math Edition aligned with the Massachusetts Mathematics Framework.
- K-6 Math Program, Amherst Elementary Schools: 6th Grade Math MCAS 2002-2008 (chart of scores by year). By the end of eighth grade, we envision that all students will have a solid ...
- Grade 6: Textbook Connections, enVisionMath: California © 2009, PreK-12 Mathematics, LAUSD: UNIT 1 Topic Standards Book Sections 1.5 Number: Comparing and Ordering ...
- K-6 Math Program, Amherst Elementary Schools: Current Status of Algebra in Amherst Schools. Benchmarks: By the end of sixth grade students should understand the relationships among numbers and number operations and ...
- Math in Focus and the Common Core Standards Draft Alignment Guide: There is nothing missing from the third grade Math in Focus. Math in Focus does include more measurement than is asked for in the standards, including volume and mass.
- Go Online and Review! Florida Digital System: Envision a math program that provides more ways to connect your students to math. The enVisionMATH Florida Digital System is a completely ...
- Oak Questions - Los Altos School District: Mid Year - Mid Year Test; End of year - End of Year test. 7. Does the program provide enough support material for teachers who may struggle with the required creativity and ...
- Math Mammoth Grade 5 End of the Year Test: Notes. This test is quite long, because it ... to this rule is integers, because they will be reviewed in detail in 6th grade.
- Pearson Scott Foresman enVision, Grade 6, Topic #8: Decimals, Fractions, Mixed Numbers ... enVision MATH Grade 6 - Colors Match to Pacing Calendar (Topic / Total Days: 1 - 8, 2 - 10, 3 - 10, 4 - 7, ...)
- Textbook Adoption Summary for Sonoma County School Districts: Scott Foresman enVision Math.
- First Grade Semester One Curriculum Map (1stFinalCurriculumMap0809.doc): LFG - Lessons for First Grade by Stephanie Sheffield, Math Solutions Publications; Math by All Means by Jane Crawford ...
- 6th Grade Homework Hotline: Please call 495-5573, and follow the prompts for each grade level.
- In Our Catholic Schools: We live in a world that hungers for the truth of Christ. Gathering as the Body of Christ to celebrate our mission of Catholic Education in the Liturgy of the Catholic ...
- 6th Grade Inventory (Excel): enVision Math 2009, Scott Foresman-Addison Wesley, CA Student Edition 330-328-27292-2, enVision math $65.85, CA Interactive Homework ...
- Core Curriculum Textbooks Grades K-5, San José Unified School District Textbook Listing: LANGUAGE ARTS, Houghton Mifflin Company: Kindergarten, 10 Theme Packages ...
- Pearson Scott Foresman enVision Math: * Dependent upon number of participants. Registration Form for SB472 Mathematics Professional Development Institutes, June 8-12 ...
- Diocese of Lansing Textbook Review for Mathematics 2008: Series Title: Envision Math, Grade levels: ... Series Title: SRA Real Math (Second Review), Grade levels: K-6th, Publisher: SRA/McGraw ...
- enVision Math Website - To Register: 1. Go to www.pearsonsuccessnet.com 2. ... Kindergarten SFMADP 09 CAENGKB, b. 1st Grade SFMADP 09 CAENG 1B, c. 2nd Grade SFMADP 09 ...
- 2010 Spring Conference: Tangrams and Dissection Puzzles Revisited (General, Room 104, Dr. Art Stoner, A+ Compass). This is an expanded version of a previous presentation. Each participant will ...
{"url":"http://www.cawnet.org/docid/envision+math+6th+grade/","timestamp":"2014-04-21T04:34:24Z","content_type":null,"content_length":"56511","record_id":"<urn:uuid:2c977aa5-2d03-4565-abff-36f047b2560b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
Fundamentals of passive components

Engineers generally focus most attention on active components such as transistors, optoelectronic devices, ICs, and the like. In any design, however, the characteristics of passive devices significantly impact overall circuit performance. Consequently, a solid understanding of the nature of passive devices is fundamental to successful circuit design.

Passives include basic components such as resistors, capacitors, and inductors as well as transformers, antennas, and many types of transducers and sensors. Unlike active components, passive components do not require an external source of energy to function. Instead, passives impede current flow through electrical resistance, store charge through capacitance, or produce a voltage in response to changes in current through inductance.

Resistance R is a function of the fundamental resistivity ρ of a conductor, increasing with conductor length and decreasing with its cross section (see Fig. 1). An ideal resistor produces a voltage V that depends on constant current I and its resistance R in ohms according to the classical form of Ohm's law: V = IR, and devices that behave as pure resistors in this fashion are often referred to as ohmic devices.

Fig. 1: Resistance is a function of basic resistivity ρ of a conductor and its dimensions. In this illustration of a pcb copper trace, the trace resistance R = ρZ/(XY). In a typical pcb, resistance for a standard copper trace of thickness 0.0015 in. is 0.45 mΩ/square — or 18 mΩ/cm for a 0.25-mm track. Source: Analog Devices.

Any real device or even an ohmic device driven by a time-varying current exhibits complex impedance Z = R + jX — resistance R plus the imaginary (j) part, the reactance X. At a particular frequency, Ohm's law then becomes V = IZ with current and voltage shifted in phase by θ. Unlike resistors where impedance is essentially purely resistive, impedance in inductors and capacitors is essentially purely reactive. For example, an inductor exhibits a characteristic reactance X = 2πfL where f is the frequency in hertz. So an inductor's impedance becomes j2πfL. Similarly, a capacitor's reactance is 1/(2πfC) and its impedance becomes 1/(j2πfC).

Unlike resistors, which dissipate energy as heat, inductors and capacitors store energy. An inductor stores energy as a magnetic field created by current flowing through the conductor. Physically, an inductor is typically a wire wound into a coil to increase the amount of energy it can store, because inductance increases with the number of turns and cross-sectional area of the conductor. Changes in current alter the magnetic flux through the inductor, which produces a voltage output in the conductor: V = L di/dt, where L is the inductance of the inductor in henries, and di/dt is the rate of change of current.

A capacitor stores energy as an electric field held between opposing conducting plates separated by an insulating material called a dielectric. Capacitance is proportional to the area A of overlap of the conducting plates and inversely proportional to the distance d between them: C ∝ ε_r A/d, where ε_r is the dielectric constant of the material between the insulating plates. In electrical terms, capacitance C in farads is defined as C = q/V, where q is charge on the plates and V is the voltage between the plates.

The concept of an ideal passive component is an abstraction intended to approximate the component's behavior in a circuit.
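As a quick numerical illustration of the ideal-element formulas above (the component values and the frequency are arbitrary, chosen only for the example), the sketch below evaluates Z for a resistor, an inductor, and a capacitor at a single frequency:

```python
import numpy as np

f = 1e3                 # frequency in hertz (arbitrary choice)
w = 2 * np.pi * f       # angular frequency

R = 100.0               # ohms
L = 10e-3               # henries
C = 1e-6                # farads

Z_R = R + 0j            # ideal resistor: purely real
Z_L = 1j * w * L        # ideal inductor: j*2*pi*f*L
Z_C = 1 / (1j * w * C)  # ideal capacitor: 1/(j*2*pi*f*C)

for name, Z in [("R", Z_R), ("L", Z_L), ("C", Z_C)]:
    print(f"{name}: |Z| = {abs(Z):10.2f} ohm, phase = {np.degrees(np.angle(Z)):+6.1f} deg")
```

The phases come out as 0° for the resistor, +90° for the inductor, and −90° for the capacitor, which is simply the phase shift θ mentioned above.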
In fact, any actual passive component exhibits a combination of resistive and reactive characteristics. Understanding the impact of these characteristics on circuit performance is critical to design success. Even a common capacitor has a relatively complex equivalent circuit (Figure 2) where the capacitance C is degraded by parasitics including parallel resistance Rp, equivalent series resistance (ESR), equivalent series inductance (ESL), and dielectric absorption (DA) comprising RDA and CDA series elements. Rp represents insulation resistance or leakage shunts across the capacitor that discharge the capacitor at a rate determined by the RC time constant. Electrolytic-type capacitors made from tantalum or aluminum typically have high leakage currents, while poly capacitors such as polytetrafluoroethylene (Teflon), polystyrene, and polypropylene have lower leakage currents.

Fig. 2: In any real capacitor, parasitics degrade performance to an extent determined by dielectric material and operating conditions. Parallel resistance Rp, ESR, and ESL are often combined in a single datasheet specification called dissipation factor (DF) that describes the efficiency of a capacitor. Dielectric absorption (DA), modeled as series resistance and capacitance, describes residual charge storage that can cause a small voltage to appear some time after the capacitor is short-circuited. Source: Analog Devices.

ESR represents resistance of the capacitor leads and plates and thus appears as a series parasitic value. High ESR causes the capacitor to dissipate power and heat up when driven by high-frequency ac current, as would be found in RF applications or decoupling applications in power supplies with high ripple currents. Similarly, ESL represents the inductance of the capacitor leads and plates and also appears as a series parasitic value. At high frequencies, ESL can dominate performance so that the capacitor performs more as an inductor than as a capacitor.

Because these sources of parasitics can be difficult for capacitor manufacturers to measure separately, engineers will often find capacitor datasheets specifying a single value, called dissipation factor (DF), which combines Rp, ESR, and ESL. Hence, DF describes the overall efficiency of a capacitor and is the reciprocal of the capacitor figure of merit, Q.

Dielectric absorption, also called soakage, is a phenomenon where a capacitor exhibits hysteresis — holding (or "soaking up") charge and retaining that charge even after it has been short-circuited. As a result of this residual charge, a small voltage can appear across a capacitor some time after it has been discharged. Dielectric absorption can be modeled as a simple series resistance and capacitance or as parallel networks of series resistance and capacitance. Its underlying mechanisms can be complex and lead to a variety of insidious errors in circuit performance. DA values vary significantly with different dielectric materials — ranging from 0.001% in polypropylene capacitors to 0.2% in high-K monolithic ceramic capacitors.

This phenomenon can be particularly vexing in high-precision data acquisition systems. For example, DA can lead to various errors in sample-and-hold (S&H) circuitry that uses a capacitor with high DA as the storage capacitor. When the S&H circuitry suddenly switches to hold mode, the voltage on the hold capacitor can sag back toward its previous value. The size of this voltage sag depends on the sample time and hold time and is minimized by long sample times and short hold times (see Fig. 3).

Fig. 3: In sample-and-hold circuits, dielectric absorption (DA) introduces a voltage sag that increases the longer the circuit remains in hold mode, because DA in the hold capacitor causes the capacitor to sag back toward its previous value over time. The use of different dielectric materials can mitigate this effect. Source: Linear Technology.

Other passive components build on the fundamental notions of resistors, capacitors, and inductors to provide more complex behaviors. For example, a transformer relies on inductive coupling: a current passing through the primary, or input, coil of the transformer creates a magnetic field in the transformer's core, and this field induces a current in the transformer's secondary, or output, coil. By varying the ratio of windings on the primary and secondary coils, transformer designers can step up or step down the current produced at the output. Antennas also rely on coupling to produce a voltage that depends on electromagnetic waves impinging on the antenna itself. Sensors and transducers such as thermocouples and thermoelectric generators rely on physical phenomena to produce a voltage proportional to the temperature difference between separate junctions of two dissimilar metals. This phenomenon, called the Seebeck effect after its discoverer Johann Seebeck, forms the basis of temperature-sensing circuits and temperature-based energy scavenging circuits.
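To connect this back to the equivalent circuit of Fig. 2, here is a rough sketch of how ESR and ESL reshape a capacitor's impedance with frequency. The component values are invented for illustration (a real part's datasheet would supply them), and Rp and dielectric absorption are ignored for simplicity:

```python
import numpy as np

C   = 10e-6   # farads
ESR = 0.05    # ohms (assumed equivalent series resistance)
ESL = 5e-9    # henries (assumed equivalent series inductance)

def z_capacitor(f):
    """Impedance of the series C + ESR + ESL model of a real capacitor."""
    w = 2 * np.pi * f
    return ESR + 1j * w * ESL + 1 / (1j * w * C)

f_srf = 1 / (2 * np.pi * np.sqrt(ESL * C))   # self-resonant frequency of the model
print(f"self-resonance at about {f_srf/1e3:.0f} kHz")

for f in (1e3, 1e5, f_srf, 1e7, 1e8):
    print(f"f = {f:12.3e} Hz   |Z| = {abs(z_capacitor(f)):10.4f} ohm")
```

Well below the self-resonant frequency the part behaves as its capacitance suggests; at resonance the impedance bottoms out near the ESR; above it the ESL term dominates and the "capacitor" looks inductive, which is exactly the behavior described above.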
{"url":"http://www.electronicproducts.com/Passive_Components/Resistors_and_Potentiometers/Fundamentals_of_passive_components.aspx","timestamp":"2014-04-17T07:13:17Z","content_type":null,"content_length":"88053","record_id":"<urn:uuid:59f5dab9-2533-4c53-8fc8-363f173789cc>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Testing Equality of Coefficients from two regressions

From: "Kemal Turkcan" <kturkcan@akdeniz.edu.tr>
To: statalist@hsphsun2.harvard.edu
Subject: st: Testing Equality of Coefficients from two regressions
Date: Wed, 23 Apr 2008 12:53:01 +0300

Dear Statalisters,

I have the following problem but have not been able to solve it for quite some time. I have a panel data set consisting of 30 countries and 17 years. I have performed several tests (such as the Hausman test, Breusch-Pagan test, autocorrelation test, and heteroskedasticity test) that indicate XTGLS is the correct procedure in my case. But I have realized that XTGLS postestimation cannot produce e(RSS), which is necessary to test (Chow's F test) the equality of coefficients across two different groups (in my case: developed and developing countries). XTGLS postestimation can produce e(b) and e(V), which can be used to calculate the Wald test, asymptotically very similar to Chow's test. Something like this:

W = (Rb - r)' (R*V*R')^(-1) (Rb - r)

The problem is how I can create a restriction matrix "R" when I try to compare the sets of all coefficients from two regressions. The model I am estimating is:

xtgls LIIT LAVGDP LDIFGDP LAVGDPP LDIFGDPP LDIST LMFDIOUT LEXCH, panels(hetero) corr(ar1) igls force

I tried to use the xtreg, re procedure to calculate the Chow test as follows:

xtreg LIIT LAVGDP LDIFGDP LAVGDPP LDIFGDPP LDIST LMFDIOUT LEXCH if core==1, fe vce(robust)
est store core
gen residualcore=e(rss)
gen ncore=e(N)
xtreg LIIT LAVGDP LDIFGDP LAVGDPP LDIFGDPP LDIST LMFDIOUT LEXCH if periphery==1, fe vce(robust)
est store periphery
gen residualperiphery=e(rss)
gen nperiphery=e(N)
xtreg LIIT LAVGDP LDIFGDP LAVGDPP LDIFGDPP LDIST LMFDIOUT LEXCH, fe vce
est store total
gen residualtotal=e(rss)
gen k=7
gen F1=(residualtotal-(residualcore+residualperiphery))/k
gen F2=(residualcore+residualperiphery)/((ncore+nperiphery-2)*k)

But this procedure is not correct when you have autocorrelation and heteroskedasticity problems. Can I use the REG procedure to calculate FGLS with panel data? If I can, then I can easily compare coefficients using SUEST.

Any help will be appreciated,
Kemal Turkcan

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
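(A language-agnostic numerical sketch of the Wald construction described in the post, written in Python/NumPy purely for illustration. The coefficient vectors, the covariance matrices, and the assumption that the two samples are independent are all made up; in Stata the corresponding pieces would come from e(b) and e(V) after each estimation.)

```python
import numpy as np
from scipy import stats

k = 3                                    # number of coefficients being compared
b1 = np.array([0.8, -0.2, 1.5])          # estimates from group 1 (illustrative)
b2 = np.array([0.6, -0.1, 1.1])          # estimates from group 2 (illustrative)
V1 = np.diag([0.02, 0.01, 0.05])         # cov(b1), illustrative
V2 = np.diag([0.03, 0.01, 0.04])         # cov(b2), illustrative

b = np.concatenate([b1, b2])             # stacked estimates, length 2k
V = np.block([[V1, np.zeros((k, k))],    # independent samples => block-diagonal covariance
              [np.zeros((k, k)), V2]])
R = np.hstack([np.eye(k), -np.eye(k)])   # restriction matrix: R @ b = b1 - b2
r = np.zeros(k)                          # hypothesised value of R @ b

diff = R @ b - r
W = diff @ np.linalg.solve(R @ V @ R.T, diff)   # W = (Rb-r)' (R V R')^{-1} (Rb-r)
p = stats.chi2.sf(W, df=k)                      # asymptotically chi-squared, k d.o.f.
print(f"Wald = {W:.3f}, p = {p:.4f}")
```

This block-diagonal stacking is, roughly, what -suest- automates for the estimators it supports (suest also allows the two sets of estimates to be correlated rather than independent).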
{"url":"http://www.stata.com/statalist/archive/2008-04/msg01021.html","timestamp":"2014-04-18T01:26:33Z","content_type":null,"content_length":"6900","record_id":"<urn:uuid:b30fbd88-ba53-4b88-b08b-55861123e63a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Every Monday*, the Math Forum staff gets together to talk about math. We call these meetings Math Monday. That's where we vet new problems and resources, learn about new math and problem-solving strategies, look at how students have solved problems, think up new mathematical questions, etc.

This blog brings Math Monday to the general public. We'll post problem scenarios in search of questions, interesting math we encounter that we might want to use in a PoW someday, student work that inspires us, and anything else PoW-related we can think of.

Math Monday works because we all contribute. This blog will work if you contribute too. Read the posts, the other comments, and leave your thoughts in a comment as well!

*well, we aim for Mondays. Math Monday sometimes falls on a Thursday, which feels weird.

1. mohammad says:
Let A be the sigma-algebra generated by all open and dense subsets of R. Is A a proper subset of the Borel sigma-algebra?

Max says:
I asked on Twitter and Chris Lusto responded: "I think "no". The only open sets in R are intervals, and the sig-alg generated by intervals = the Borel Alg. So A is a subset, but not a proper one. I *think*. Have a real mathematician check that out."
{"url":"http://mathforum.org/blogs/pows/about/","timestamp":"2014-04-17T10:27:36Z","content_type":null,"content_length":"17801","record_id":"<urn:uuid:09ff041c-9e2e-4c76-862e-91eaa5e2068f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Biographical Note

Charles Bradfield Morrey, Jr. was born on July 23, 1907 in Columbus, Ohio. He was educated at Ohio State University and Harvard University (Ph.D., 1931). After National Research Council fellowships at Princeton University and Rice University, he joined the Department of Mathematics at the University of California, Berkeley (1933), retiring as professor emeritus in 1977. He died April 29, 1984. His research included work on the area of surfaces, the calculus of variations, and elliptic partial differential equations. His work led to the solutions of the 19th and 20th problems of Hilbert.

Source: Anon., "C.B. Morrey, Jr., 1907-1984", Notices of the American Mathematical Society, 31 (Aug. 1984): 474.
{"url":"http://www.lib.utexas.edu/taro/utcah/00228.xml","timestamp":"2014-04-25T06:45:29Z","content_type":null,"content_length":"15876","record_id":"<urn:uuid:dc382a81-82db-408e-926c-7a3b66c28420>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
KVS PGT Mathematics Written Examination Syllabus 2013-14

You will find here the syllabus for the written examination for PGT Mathematics online.

Sets: Sets and their representations. Empty set. Finite & Infinite sets. Equal sets. Subsets. Subsets of the set of real numbers. Power set. Universal set. Venn diagrams. Union and Intersection of sets. Difference of sets. Complement of a set.

Relations & Functions: Ordered pairs, Cartesian product of sets. Number of elements in the Cartesian product of two finite sets. Cartesian product of the reals with itself (up to R x R x R). Definition of relation, pictorial diagrams, domain, co-domain and range of a relation. Function as a special kind of relation from one set to another. Pictorial representation of a function, domain, co-domain & range of a function. Real valued function of the real variable, domain and range of these functions, constant, identity, polynomial, rational, modulus, signum and greatest integer functions with their graphs. Sum, difference, product and quotients of functions. Sets and their representations. Union, intersection and complements of sets, and their algebraic properties. Relations, equivalence relations, mappings, one-one, into and onto mappings, composition of mappings.

Principle of Mathematical Induction: Processes of the proof by induction. The principle of mathematical induction.

Permutations & Combinations: Fundamental principle of counting. Factorial n. Permutations and combinations, derivation of formulae and their connections, simple applications.

Complex Numbers: Complex numbers, Algebraic properties of complex numbers, Argand plane and polar representation of complex numbers, Statement of Fundamental Theorem of Algebra, solution of quadratic equations in the complex number system. Modulus and Argument of a complex number, square root of a complex number. Cube roots of unity, triangle inequality.

Linear Inequalities: Linear inequalities. Algebraic solutions of linear inequalities in one variable and their representation on the number line. Graphical solution of linear inequalities in two variables. Solution of a system of linear inequalities in two variables — graphically. Absolute value. Inequality of means, Cauchy-Schwarz Inequality, Tchebychef's Inequality.

Binomial Theorem: Statement and proof of the binomial theorem for positive integral indices. Pascal's triangle, general and middle term in binomial expansion, simple applications. Binomial Theorem for any index. Properties of Binomial Co-efficients. Simple applications for approximations.

Sequence and Series: Arithmetic, Geometric and Harmonic progressions. General terms and sum to n terms of A.P., G.P. and H.P. Arithmetic Mean (A.M.), Geometric Mean (G.M.), and Harmonic Mean (H.M.), Relation between A.M., G.M. and H.M. Insertion of Arithmetic, Geometric and Harmonic means between two given numbers. Special series, sum to n terms of the special series. Arithmetico-Geometric Series, Exponential and Logarithmic series.

Elementary Number Theory: Peano's Axioms, Principle of Induction; First Principle, Second Principle, Third Principle, Basis Representation Theorem, Greatest Integer Function, Test of Divisibility, Euclid's algorithm, The Unique Factorisation Theorem, Congruence, Sum of divisors of a number. Euler's totient function, Theorems of Fermat and Wilson.

Quadratic Equations: Quadratic equations in real and complex number system and their solutions.
Relation between roots and co-efficients, nature of roots, formation of quadratic equations with given roots. Symmetric functions of roots, equations reducible to quadratic equations – application to practical problems. Polynomial functions, Remainder & Factor Theorems and their converse, Relation between roots and coefficients, Symmetric functions of the roots of an equation. Common roots.

Matrices and Determinants: Determinants and matrices of order two and three, properties of determinants, Evaluation of determinants. Area of triangles using determinants, Addition and multiplication of matrices, adjoint and inverse of a matrix. Test of consistency and solution of simultaneous linear equations using determinants and matrices.

Two dimensional Geometry: Cartesian system of rectangular co-ordinates in a plane, distance formula, section formula, area of a triangle, condition for the collinearity of three points, centroid and in-centre of a triangle, locus and its equation, translation of axes, slope of a line, parallel and perpendicular lines, intercepts of a line on the coordinate axes.

Various forms of equations of a line, intersection of lines, angles between two lines, conditions for concurrence of three lines, distance of a point from a line, Equations of internal and external bisectors of angles between two lines, coordinates of centroid, orthocentre and circumcentre of a triangle, equation of family of lines passing through the point of intersection of two lines, homogeneous equation of second degree in x and y, angle between pair of lines through the origin, combined equation of the bisectors of the angles between a pair of lines, condition for the general second degree equation to represent a pair of lines, point of intersection and angle between two lines.

Standard form of equation of a circle, general form of the equation of a circle, its radius and centre, equation of a circle in the parametric form, equation of a circle when the end points of a diameter are given, points of intersection of a line and a circle with the centre at the origin and condition for a line to be tangent to the circle, length of the tangent, equation of the tangent, equation of a family of circles through the intersection of two circles, condition for two intersecting circles to be orthogonal.

Sections of cones, equations of conic sections (parabola, ellipse and hyperbola) in standard forms, condition for y = mx + c to be a tangent and point(s) of tangency.

Trigonometric Functions: Positive and negative angles. Measuring angles in radians & in degrees and conversion from one measure to another. Definition of trigonometric functions with the help of the unit circle. Graphs of trigonometric functions. Expressing sin (x+y) and cos (x+y) in terms of sinx, siny, cosx & cosy. Identities related to sin2x, cos2x, tan 2x, sin3x, cos3x and tan3x. Solution of trigonometric equations, Proofs and simple applications of sine and cosine formulae. Solution of triangles. Heights and Distances.

Inverse Trigonometric Functions: Definition, range, domain, principal value branches. Graphs of inverse trigonometric functions. Elementary properties of inverse trigonometric functions.

Differential Calculus: Polynomials, rational, trigonometric, logarithmic and exponential functions, Inverse functions. Graphs of simple functions. Limits, Continuity and differentiability; Derivative, Geometrical interpretation of the derivative, Derivative of sum, difference, product and quotient of functions.
Derivatives of polynomial and trigonometric functions, Derivative of composite functions; chain rule, derivatives of inverse trigonometric functions, derivative of implicit functions. Exponential and logarithmic functions and their derivatives. Logarithmic differentiation. Derivative of functions expressed in parametric forms. Second order derivatives. Rolle's and Lagrange's Mean Value Theorems and their geometric interpretations.

Applications of Derivatives: Applications of derivatives: rate of change, increasing / decreasing functions, tangents & normals, approximation, maxima and minima.

Integral Calculus: Integral as an anti-derivative. Fundamental integrals involving algebraic, trigonometric, exponential and logarithmic functions. Integration by substitution, by parts and by partial fractions. Integration using trigonometric identities. Definite integrals as a limit of a sum, Fundamental Theorem of Calculus. Basic properties of definite integrals and evaluation of definite integrals; Applications of definite integrals in finding the area under simple curves, especially lines, areas of circles / parabolas / ellipses, area between the two curves.

Differential Equations: Definition, order and degree, general and particular solutions of a differential equation. Formation of differential equation whose general solution is given. Solution of differential equations by method of separation of variables, homogeneous differential equations of first order and first degree. Solutions of linear differential equations.

Vectors: Vectors and scalars, magnitude and direction of a vector. Direction cosines / ratios of vectors. Types of vectors (equal, unit, zero, parallel and collinear vectors), position vector of a point, negative of a vector, components of a vector, addition of vectors, multiplication of a vector by a scalar, position vector of a point dividing a line segment in a given ratio. Scalar (dot) product of vectors, projection of a vector on a line. Vector (cross) product of vectors.

Three dimensional Geometry: Coordinates of a point in space, distance between two points; Section formula, Direction cosines / ratios of a line joining two points. Cartesian and vector equation of a line, coplanar and skew lines, shortest distance between two lines. Cartesian and vector equation of a plane. Angle between (i) two lines, (ii) two planes, (iii) a line and a plane. Distance of a point from a plane. Scalar and vector triple product. Application of vectors to plane geometry. Equation of a sphere, its centre and radius. Diameter form of the equation of a sphere.

Statistics: Calculation of Mean, median and mode of grouped and ungrouped data. Measures of dispersion; mean deviation, variance and standard deviation of ungrouped / grouped data. Analysis of frequency distributions with equal means but different variances.

Probability: Random experiments: outcomes, sample spaces. Events: occurrence of events, exhaustive events, mutually exclusive events, Probability of an event, probability of 'not', 'and' & 'or' events. Multiplication theorem on probability. Conditional probability, independent events, Bayes' theorem, Random variable and its probability distribution, Binomial and Poisson distributions and their properties.

Linear Algebra: Examples of vector spaces, vector spaces and subspaces, independence in vector spaces, existence of a basis, the row and column spaces of a matrix, sum and intersection of subspaces.
Linear Transformations and Matrices, Kernel, Image, and Isomorphism, change of bases, Similarity, Rank and Nullity. Inner Product spaces, orthonormal sets and the Gram-Schmidt Process, the Method of Least Squares. Basic theory of Eigenvectors and Eigenvalues, algebraic and geometric multiplicity of an eigenvalue, diagonalization of matrices, application to systems of linear differential equations. Generalized Inverses of matrices, Moore-Penrose generalized inverse. Real quadratic forms, reduction and classification of quadratic forms, index and signature, triangular reduction of a pair of forms, singular value decomposition, extrema of quadratic forms. Jordan canonical form, vector and matrix decomposition.

Monotone functions and functions of bounded variation. Real valued functions, continuous functions, Absolute continuity of functions, standard properties. Uniform continuity, sequences of functions, uniform convergence, power series and radius of convergence. Riemann-Stieltjes integration, standard properties, multiple integrals and their evaluation by repeated integration, change of variable in multiple integration. Uniform convergence in improper integrals, differentiation under the sign of the integral - Leibnitz rule. Dirichlet integral, Liouville's extension. Introduction to n-dimensional Euclidean space, open and closed intervals (rectangles), compact sets, Bolzano-Weierstrass theorem, Heine-Borel theorem. Maxima-minima of functions of several variables, constrained maxima-minima of functions. Cauchy's theorem and Cauchy's integral formula with applications, Residue and contour integration. Fourier and Laplace transforms, Mellin's inversion theorem.
{"url":"http://www.admissioncorner.com/content/kvs-pgt-mathematics-written-examination-syllabus-2013-14","timestamp":"2014-04-20T03:13:49Z","content_type":null,"content_length":"43031","record_id":"<urn:uuid:04e29621-d523-4b15-af92-6073e03cdf69>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
[IPython-dev] Integrating pandas into pylab

Carlos Córdoba ccordoba12@gmail....
Thu Oct 27 22:18:02 CDT 2011

I think EPD distributed a magic called %whoosh which lets you do full text searching in docstrings, which is roughly equivalent to lookfor. I'll try to add it to Spyder during the next year and you can read about it here:

About the flat namespace, it's a great idea, especially to use it with the new ipython notebook (maybe as a profile, at least at the beginning). I think it would be better if it could be accompanied by a comprehensive Py4Science book which described what you can get from it. Something similar to the Mathematica book, with topics like these:

1. Core Language: A Python intro from the scientific point of view
2. Data structures: Vectors/Matrices = Numpy arrays; Tabular Data = DataFrame
3. Linear algebra: dot, cross, norm, etc.
4. Symbolic Mathematics: Variables = Sympy symbols
5. Probability: Distributions = Scipy.stats; Random numbers = numpy.random
6. Graphics: 2D = mpl.plot; 3D = mayavi.mlab; 3D interactive = VPython
7. Interfaces

lookfor and a book like this one will be a great improvement for any newbie trying to get started with python.

Cheers and keep up the good flow of ideas,

On Thu, 27 Oct 2011 13:17:06 COT, Fernando Perez wrote:
> On Thu, Oct 27, 2011 at 9:28 AM, Aaron Meurer<asmeurer@gmail.com> wrote:
>> I think IPython could help on this front. Instead of relying on good
>> "See Also" sections in docstrings (though those are important too), it
>> would be useful to have an IPython magic that searched the docstring
>> of every name in the present namespace (pardon me if this already
>> exists, I didn't find anything like it in %magic).
>> That way, it would be easy for users to just import everything from
>> pylab (or whatever), and try to find functions related to whatever
>> functionality they are interested in. This would be somewhat of an
>> equivalent of the built-in help for a GUI program like Matlab or R
>> GUI. And of course, the GUI versions of IPython could (and should)
>> have more GUI oriented versions of this.
> Absolutely. Now that we have frontends capable of presenting this
> information in a richer manner, I hope someone will pitch in and
> contribute a good system for help searching/introspection. Even if
> initially we don't match the fantastic Mathematica help browser, the
> 'where do I find a function to do X' question is one of the most
> significant stumbling blocks newcomers face. Just yesterday I was
> lecturing at Berkeley (mostly grad students and postdocs, but new to
> Python) and this came up persistently.
> Cheers,
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev@scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
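(For illustration only: this is not the actual %whoosh magic or numpy.lookfor, just a minimal sketch of the kind of namespace-wide docstring search being discussed; the helper name and behaviour are made up.)

```python
def doc_search(keyword, namespace):
    """Return (name, first docstring line) for objects whose docstring mentions keyword."""
    hits = []
    for name, obj in namespace.items():
        doc = getattr(obj, "__doc__", None) or ""
        if keyword.lower() in doc.lower():
            summary = doc.strip().splitlines()[0]
            hits.append((name, summary))
    return sorted(hits)

# Example: search everything a "from pylab import *" would pull in.
# import pylab
# for name, summary in doc_search("histogram", vars(pylab)):
#     print(f"{name:25s} {summary}")
```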
{"url":"http://mail.scipy.org/pipermail/ipython-dev/2011-October/008299.html","timestamp":"2014-04-21T15:00:37Z","content_type":null,"content_length":"5946","record_id":"<urn:uuid:37701c1b-4c60-4771-90fe-ad1161e9c00f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Streamwood Math Tutor

Find a Streamwood Math Tutor

...I encourage good study skills and note-taking and love helping my students improve in those skills as well. I look forward to working with you or your student. I taught Algebra 1 my entire teaching career. I have a Bachelor of Science in Mathematics Education and a Master of Science in Applied Mathematics.
10 Subjects: including statistics, algebra 1, algebra 2, calculus

...I've helped both high school students and fellow college classmates master complex material with patience and encouragement. I plan to become an optometrist and will be beginning a Doctor of Optometry program in the Fall. Until then, I am looking forward to helping students with my favorite subjects: algebra, geometry, trigonometry, chemistry, biology, and physics.
25 Subjects: including calculus, physics, precalculus, trigonometry

...I wish to include these facts because I want to assure you that your child's education is in good hands. I am smart, engaging, easy to talk to, and I genuinely want to do a good job teaching your children. I have tutored students throughout high school and college - meaning I have over eight years of experience!
40 Subjects: including algebra 2, algebra 1, biology, geometry

...The more you know how it fits together, the easier it gets. I show a student a way to understand a math problem. I taught high school Latin over 30 years ago.
11 Subjects: including calculus, algebra 1, algebra 2, geometry

...I look forward to working with your student to help them develop confidence in learning. I have previous experience as a mathematics instructor in a variety of settings including public high schools, an alternative high school, and a juvenile detention center. I have served as an ACT Preparation ...
16 Subjects: including calculus, precalculus, trigonometry, SAT math
{"url":"http://www.purplemath.com/streamwood_il_math_tutors.php","timestamp":"2014-04-17T01:20:23Z","content_type":null,"content_length":"23879","record_id":"<urn:uuid:53b708f9-7d5f-4a82-9dc0-050cbb0e4c10>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Although theories of extra dimensions establish a connection between string theory and cosmology, the developments of the past few years have pushed the connection much further. [For reviews, see [41 , 42, 43].] The union of string theory and cosmology is barely past its honeymoon, but so far the marriage appears to be a happy one. Inflation, from its inception, was a phenomenologically very successful idea that has been in need of a fundamental theory to constrain its many variations. String theory, from its inception, has been a very well-constrained mathematical theory in need of a phenomenology to provide contact with observation. The match seems perfect, but time will be needed before we know for sure whether either marriage partner can fulfill the needs of the other. In the meantime, ideas are stirring that have the potential to radically alter our ideas about fundamental laws of physics. For many years the possibility of describing inflation in terms of string theory seemed completely intractable, because the only string vacua that were understood were highly supersymmetric ones, with many massless scalar fields, called moduli, which have potential energy functions that vanish identically to all orders of perturbation theory. When the effects of gravity are included, the energy density of such supersymmetric states is never positive. Inflation, on the other hand, requires a positive energy density, and it requires a hill in the potential energy function. Inflation, therefore, could only be contemplated in the context of nonperturbative supersymmetry-breaking effects, of which there was very little understanding. The situation changed dramatically with the realization that string theory contains not only strings, but also branes, and fluxes, which can be thought of as higher-dimensional generalizations of magnetic fields. The combination of these two ingredients makes it possible to construct string theory states that break supersymmetry, and that give nontrivial potential energy functions to all the scalar fields. One very attractive idea for incorporating inflation into string theory is to use the positions of branes to play the role of the scalar field that drives inflation. The earliest version of this theory was proposed in 1998 by Dvali and Tye [44], shortly after the possibility of large extra dimensions was proposed in [33]. In the Dvali-Tye model, the observed universe is described not by a single three-dimensional brane, but instead by a number of three-dimensional branes which in the vacuum state would sit on top of each other. If some of the branes were displaced, however, in a fourth spatial direction, then the energy would be increased. The brane separation would be a function of time and the three spatial coordinates along the branes, and so from the point of view of an observer on the brane, it would act like a scalar field that could drive inflation. At this stage, however, the authors needed to invoke unknown mechanisms to break supersymmetry and to give the moduli fields nonzero potential energy functions. In 2003, Kachru, Kallosh, Linde, and Trivedi [45] showed how to construct complicated string theory states for which all the moduli have nontrivial potentials, for which the energy density is positive, and for which the approximations that were used in the calculations appeared justifiable. These states are only metastable, but their lifetimes can be vastly longer than the 10 billion years that have elapsed since the big bang. 
There was nothing elegant about this construction - the six extra dimensions implied by string theory are curled not into circles, but into complicated manifolds with a number of internal loops that can be threaded by several different types of flux, and populated by a hodgepodge of branes. Joined by Maldacena and McAllister, this group [46] went on to construct states that can describe inflation, in which a parameter corresponding to a brane position can roll down a hill in its potential energy diagram. Generically the potential energy function is not flat enough for successful inflation, but the authors argued that the number of possible constructions was so large that there may well be a large class of states for which sufficient inflation is achieved. Iizuka and Trivedi [47] showed that successful inflation can be attained by curling the extra dimensions into a manifold that has a special kind of symmetry. A tantalizing feature of these models is that at the end of inflation, a network of strings would be produced [17]. These could be fundamental strings, or branes with one spatial dimension. The CMB data of Fig. 4 rule out the possibility that these strings are major sources of density fluctuations, but they are still allowed if they are light enough so that they don't disturb the density fluctuations from inflation. String theorists are hoping that such strings may be able to provide an observational window on string physics. A key feature of the constructions of inflating states or vacuumlike states in string theory is that they are far from unique. The number might be something like 10^500 [48, 49, 50], forming what Susskind has dubbed the "landscape of string theory." Although the rules of string theory are unique, the low-energy laws that describe the physics that we can in practice observe would depend strongly on which vacuum state our universe was built upon. Other vacuum states could give rise to different values of "fundamental" constants, or even to altogether different types of "elementary" particles, and even different numbers of large spatial dimensions! Furthermore, because inflation is generically eternal, one would expect that the resulting eternally inflating spacetime would sample every one of these states, each an infinite number of times. Because all of these states are possible, the important problem is to learn which states are probable. This problem involves comparison of one infinity with another, which is in general not a well-defined question [51]. Proposals have been made and arguments have been given to justify them [52], but no conclusive solution to this problem has been found. What, then, determined the vacuum state for our observable universe? Although many physicists (including the authors) hope that some principle can be found to understand how this choice was determined, there are no persuasive ideas about what form such a principle might take. It is possible that inflation helps to control the choice of state, because perhaps one state or a subset of states expands faster than any others. Because inflation is generically eternal, the state that inflates the fastest, along with the states that it decays into, might dominate over any others by an infinite amount. Progress in implementing this idea, however, has so far been nil, in part because we cannot identify the state that inflates the fastest, and in part because we cannot calculate probabilities in any case. 
If we could calculate the decay chain of the most rapidly inflating state, we would have no guarantee that the number of states with significant probability would be much smaller than the total number of possible states. Another possibility, now widely discussed, is that nothing determines the choice of vacuum for our universe; instead, the observable universe is viewed as a tiny speck within a multiverse that contains every possible type of vacuum. If this point of view is right, then a quantity such as the electron-to-proton mass ratio would be on the same footing as the distance between our planet and the sun. Neither is fixed by the fundamental laws, but instead both are determined by historical accidents, restricted only by the fact that if these quantities did not lie within a suitable range, we would not be here to make the observations. This idea - that the laws of physics that we observe are determined not by fundamental principles, but instead by the requirement that intelligent life can exist to observe them - is often called the anthropic principle. Although in some contexts this principle might sound patently religious, the combination of inflationary cosmology and the landscape of string theory gives the anthropic principle a scientifically viable framework. A key reason why the anthropic approach is gaining attention is the observed fact that the expansion of the universe today is accelerating, rather than slowing down under the influence of normal gravity. In the context of general relativity, this requires that the energy of the observable universe is dominated by dark energy. The identity of the dark energy is unknown, but the simplest possibility is that it is the energy density of the vacuum, which is equivalent to what Einstein called the cosmological constant. To particle physicists it is not surprising that the vacuum has nonzero energy density, because the vacuum is known to be a very complicated state, in which particle-antiparticle pairs are constantly materializing and disappearing, and fields such as the electromagnetic field are constantly undergoing wild fluctuations. From the standpoint of the particle physicist, the shocking thing is that the energy density of the vacuum is so low. No one really knows how to calculate the energy density of the vacuum, but naïve estimates lead to numbers that are about 10^120 times larger than the observational upper limit. There are both positive and negative contributions, but physicists have been trying for decades to find some reason why the positive and negative contributions should cancel, but so far to no avail. It seems even more hopeless to find a reason why the net energy density should be nonzero, but 120 orders of magnitude smaller than its expected value. However, if one adopts the anthropic point of view, it was argued as early as 1987 by Weinberg [54] that an explanation is at hand: If the multiverse contained regions with all conceivable values of the cosmological constant, galaxies and hence life could appear only in those very rare regions where the value is small, because otherwise the huge gravitational repulsion would blow matter apart without allowing it to collect into galaxies. The landscape of string theory and the evolution of the universe through the landscape are of course still not well understood, and some have argued [55] that the landscape might not even exist. 
It seems too early to draw any firm conclusions, but clearly the question of whether the laws of physics are uniquely determined, or whether they are environmental accidents, is an issue too fundamental to ignore.
{"url":"http://ned.ipac.caltech.edu/level5/March05/Guth/Guth4.html","timestamp":"2014-04-18T10:41:52Z","content_type":null,"content_length":"13130","record_id":"<urn:uuid:da611609-53ae-4802-bc51-6058606e3395>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Reply to comment December 2008 Before the world awoke to its own finiteness and began to take the need for recycling seriously, one of the quintessential images of the working mathematician was a waste paper basket full of crumpled pieces of paper. The mathematician sits behind a large desk, furrowed brow resting on one hand, the other hand holding a stalled pencil over yet another sheet of paper soon to be crumpled and discarded. Imagine now a curious twist on this scene, as in 1909 or thereabouts, the Dutch mathematician LEJ Brouwer sits, perhaps, amidst mounds of crumpled paper as he works on a theorem about, well, crumpled paper. We can imagine his brow to be even more furrowed than most, because while paper played on his 28-year-old mind, he had already begun to reject the type of proof he was building. Over the remaining 57 years of his life would create a revolution in our view of mathematics and logic. Brouwer can be called the father of constructivist mathematics, for while people have influenced his child, he has had the largest influence. To understand what this movement stands for, we first need to look at his crumpled paper. A fixed point theorem Press it flat and you'll find that one point is back in its original position. Take first a flat sheet of paper and, imagining the most fiendishly impossible maths homework or your worst enemy (if these two are different), crumple it into a ball any way you please. Now lay this crumpled mess back on the spot where the original, uncrumpled, sheet of paper lay previously, and press it down (rather than uncrumple it) until it's absolutely flat. Unbelievable as it may seem, there is a point on the crumpled sheet of paper which is back in exactly the same spot as it was before you did the crumpling. This result is known as a fixed point theorem, since it shows that when one thing is transformed into another — in this case, the flat sheet of paper is transformed into the crumpled one — there is a point which is "fixed under the transformation"; a point which does not change its position when the transformation is performed. In fact Brouwer's result was not the only fixed point theorem, but was a very powerful one. It applies to any plane area which can be continuously transformed into a disc — in other words, any closed shape in the plane in a single piece and without bits missing from it. Any such shape has a fixed point when it is continuously transformed to itself. Here a continuous transformation means that some parts can be stretched or compressed, or even folded on to themselves, but no tears or holes can be made. To be transformed to itself means that the resulting shape can be made to occupy the same space (or part of it) as the untransformed shape did. Indeed, Brouwer showed that the same is true in any dimension. This means that when a sphere is transformed onto itself, it has a fixed point: when a child takes a glob of playdough and rolls it into a snake, at least one point remains fixed. And this is true of a 27-dimensional child making a 27-dimensional snake from a 27-dimensional glob of playdough. It definitely exists...but where? Surprising this result may be, but it seems hardly likely to start a revolution. Surprising results occur surprisingly often in mathematics — a surprising result in itself, you might say. Even Brouwer's proof used a standard and unsurprising approach not considered controversial at the time and not often considered controversial today. 
However, it was this type of proof, known as an existence proof, which Brouwer and his movement reject. In this type of proof, the object you are considering — in this case, a fixed point — is shown to exist with certainty. However, the proof says nothing whatsoever about where the object exists or how to find it — the treasure is certainly buried but there is no X to mark the spot. This is like saying that if a paper aeroplane hits the teacher, she is certain that someone in the class threw it, but she has no way of finding out who is the guilty culprit.

The notion of proving only the existence of an object was soon rejected outright by Brouwer, and he started to claim that only those objects which we can actually construct should be believed. By "construct" Brouwer did not mean a model made from wire and papier-mâché, but rather a mathematical algorithm, a recipe for determining all of the features of an object. In so doing, you also get existence for free, of course, because if you construct an object then it certainly exists. A constructive proof of a fixed point theorem would tell you exactly where the fixed point is, exactly where X should mark the spot. If another theorem deals with a certain number, say the biggest element of a set of numbers, a constructivist proof would not merely say that the number exists but would rather tell you exactly what that number is, or how to calculate it.

This is usually acknowledged to be the birth of constructivism or intuitionism in mathematics — both names are used interchangeably even though neither gives the whole picture. Constructivism is a movement in the foundations of mathematics which holds that only those objects which can be constructed in the human mind from undeniably true thought processes are to be believed. In this way, it harks back to Immanuel Kant (1724-1804) and his claim that intuition alone tells us that the basics of mathematics are true. Indeed, the constructivist school has since been built on the foundation of what is known as intuitionistic logic, to distinguish it from traditional (or "classical") logic.

Did you, or didn't you?

In the real world, if a teacher confronts you and says "Did you throw this paper aeroplane or didn't you?", the correct answer is always "yes". This is because either you threw it, or you didn't throw it, so a question which asks if you did either of these things must have a positive answer. If A represents a statement such as "I threw the paper aeroplane", the only options seem to be A or "not A", the latter being the opposite of A (in our example, "not A" is "I did not throw the paper aeroplane"). If we write "not A" as ~A, then it would seem unreasonable to suppose that there is a third option beyond A and ~A. This is known in classical logic as the law of the excluded middle, or LEM for short. It says that there is no middle ground between A and ~A. What distinguishes intuitionistic logic from classical logic is the rejection of the LEM.

This deceptively simple distinction disguises a profound difference in viewpoint about truth and meaning. Someone who believes in classical logic is known as a Platonist or a realist. From the realist viewpoint, which is certainly the way you and I were taught mathematics and logic, the truth of a statement is determined by truth tables.

The truth table for the OR operator
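The truth table itself is easy to spell out; as a purely illustrative stand-in for the figure, the following few lines enumerate OR and check "A or not A" classically:

```python
for A in (True, False):
    for B in (True, False):
        print(f"{A!s:5} OR {B!s:5} = {A or B}")   # the OR truth table

for A in (True, False):
    print(f"A = {A!s:5} ->  A or (not A) = {A or (not A)}")   # always True, classically
```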
If my statement reads "A or ~A", then I look at the possible values (true or false) of the bits which compose the statement, and then at how the logical operators (like OR, AND, NOT, and so on) combine those values to give the truth or falsity of the whole statement. For example, if A is true then ~A is false, and the OR operation says that "P or Q" is true when either P or Q (or both) are true. So in this case, "A or ~A" is true. The only other option is that A is false, but then ~A is true, and again "A or ~A" is true. The statement must be true because either I threw the aeroplane or I did not.

But this argument rests on the idea that a statement is either true or false, which is known as bivalence. Intuitionists reject the validity of bivalence in mathematics — why? An intuitionist believes that a mathematical statement is made true or false by the construction of a proof. In other words, a mathematical statement has no truth table value before a proof has been constructed. It is not so much that a statement exists in some weird middle ground between true and false, but rather that truth or falsity is a quality given to a statement by the construction of a proof by a human being using ideas and techniques of mathematics which are intuitively self-evident to the human mind. As such, bivalence is rejected, and therefore so is the LEM.

Pi in the sky

Luitzen Egbertus Jan Brouwer, 1881-1966, thinking hard about crumpled paper.

As an illustration, consider Brouwer's example based on the familiar number π. There will always be simple questions we can ask about π ... But the constructivist disagrees because of her rejection of the law of the excluded middle. The constructivist argues that it is a mistake to think of the decimal expansion of π ... Things get even worse for the realist, better for the constructivist, when we bring in Brouwer's number ... To constructivists, pi only exists up to the truncated point to which it has been calculated. The problem is that if after a billion years of calculating squadribblioblions of digits of π ... A constructivist says that ... Most mathematicians quickly reject this argument and say instead that ...

We have looked here at a couple of telling examples of the problems constructivists see in the realist view of mathematics, and much more on this fascinating topic and its history can be found in the suggested reading below. But what has been the effect on modern mathematics of this revelatory and revolutionary viewpoint? Surprisingly little, which is surprising in itself (see above). Mathematics is still taught with an underpinning, explicitly acknowledged or not, of a realist, classical logic. Proof by contradiction, existence proofs, and other methods and types of proof shown to rest on shaky foundations by the constructivists are still taught as standard to undergraduates around the world. At even the earliest stages in our schools, children are taught implicitly of numbers as existing independently of human minds and awaiting discovery. Professional mathematicians tend largely to ignore the issue; the only emotion constructivist arguments elicit tending to be annoyance.

The jury is still out on whether mathematics exists independently of us, or is purely a product of the human mind, or some combination of the two. But to assume knowledge of the verdict before it has been delivered is to resort to a reliance on faith which even the realists would usually reject.

Suggested reading

About the author

Phil Wilson is a lecturer in mathematics at the University of Canterbury, New Zealand.
He applies mathematics to the biological, medical, industrial, and natural worlds.
{"url":"http://plus.maths.org/content/comment/reply/2349","timestamp":"2014-04-20T06:20:34Z","content_type":null,"content_length":"45539","record_id":"<urn:uuid:aeb0c72c-73b1-4b55-a8d4-39506d1adfed>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: T&A Boxes (was: RE: rdf as a base for other languages) From: Dave Reynolds <der@hplb.hpl.hp.com> Date: Wed, 13 Jun 2001 16:50:56 +0100 Message-ID: <3B278BE0.7ABB9E53@hplb.hpl.hp.com> To: Drew McDermott <drew.mcdermott@yale.edu> CC: www-rdf-logic@w3.org This is not my area so apologies if this is just noise ... When I recently attempted to explain description logics and T-box/A-box distinctions, a colleague likened them to the "=df" notation ("definitionally equal") in mathematical logic. He suggested that, at least as logic used to be taught over here, you distinguished notationally between three forms of equality, viz. "equal in one particular model", "provably equal in all models, bi-equivalence" and "definitionally equal". The distinction between the latter two affects how you construct your proof theory even though for a given set of models you can't distinguish between a tautology and a definition. The "not allowed to be false" nature of T-boxes seems at least analogous. Drew McDermott wrote: > [Pat Hayes] > As far as I know, there is no *mathematical* way to distinguish > definitions and assertions. > Correct me if I'm wrong, but don't logic textbook mention the case > where a definition is simply an equality or if-and-only-if? E.g., you > might write (bachelor ?x) <=> (and (male ?x) (not (married ?x))). Now > take a theory involving the term "bachelor," and you can easily > convert it to a theory that doesn't mention the term anywhere. This > two-stage process neatly captures the idea of the definition "not > being allowed to be false." By the time you catch a contradiction, > the definition is nowhere to be seen. > Of course, this won't work for recursive definitions, which may be why > people like Russell didn't trust them. My knowledge of the history of > logic is a bit shaky at this point. > -- Drew McDermott Received on Wednesday, 13 June 2001 11:51:08 GMT This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 7 December 2009 10:52:40 GMT
{"url":"http://lists.w3.org/Archives/Public/www-rdf-logic/2001Jun/0203.html","timestamp":"2014-04-20T03:47:07Z","content_type":null,"content_length":"10426","record_id":"<urn:uuid:e140aee0-6e19-419a-ba24-430c1ec01f13>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Application Report SLUA398 - October 2006

Thermistor Coefficient Calculator for TI Advanced Fuel Gauges
Doug Williams, Battery Management

ABSTRACT
TI advanced fuel-gauge battery-management ICs use a polynomial model to translate the voltage measured across the thermistor terminals into a temperature value. While the recommended Semitec AT103 is readily available, some customers prefer to use an alternate device. This report describes the use of a companion Excel® spreadsheet that automates coefficient calculation for a given thermistor.

1 Introduction
The firmware algorithm in TI advanced fuel-gauge battery-management ICs uses a polynomial model to translate voltage measured across the thermistor terminals into temperature. While the recommended Semitec AT103 is readily available in various shapes, some customers prefer to use an alternate device. This report describes the use of a companion Excel spreadsheet that automates the calculation of coefficients for a given thermistor. The Thermistor Coefficient Calculator is a Microsoft® Excel spreadsheet, which is available as a zip file in the same location as this report. It can be used for various advanced fuel-gauge ICs such as the bq2084, bq20z70, bq20z80, bq20z90, etc.

2 Theory of Operation
Solver, an add-in tool for Excel which is part of the standard installation, is used in this case to find a solution to a set of 3rd-order polynomials. Given a few points on an unknown curve, it finds the coefficients of a cubic polynomial equation that best fits the available data. The fuel-gauge device firmware uses the cubic polynomial along with the dataflash-based coefficients at 1-s intervals when converting the A/D reading from the thermistor into a temperature value. Solver's job is to minimize the value in cell B33 (see Figure 1), which is the sum of the norms for each known data point. The norms are simply the square of the difference between what you want and what you get. Solver updates the polynomial coefficients in E25 ~ E28 for the best overall fit. You can, of course, change the coefficients manually to see what happens. The values in E31 ~ E36 should be programmed into the respective fuel-gauge dataflash locations.

Excel and Microsoft are registered trademarks of Microsoft Corporation.
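For readers who want to see the curve-fitting arithmetic outside Excel, here is a rough sketch of the same step (this is not TI's spreadsheet or firmware): a least-squares cubic mapping A/D counts to temperature. The count/temperature pairs below are transcribed from the example sheet at 10°C steps; a real design should use its own thermistor table, and the real tool additionally converts the coefficients into integer data-flash constants.

```python
# A minimal sketch of the Solver step: fit temperature as a cubic in A/D count.
# Values are taken from the example spreadsheet at 10 degC steps; substitute
# the table for your own thermistor and divider network.
import numpy as np

counts = np.array([25982, 24537, 22657, 20391, 17853, 15220,
                   12668, 10344, 8328, 6648, 5283], dtype=float)
temps_c = np.array([-20, -10, 0, 10, 20, 30, 40, 50, 60, 70, 80], dtype=float)

# Least-squares cubic fit; this minimizes the same sum of squared errors
# ("norms") that the Excel Solver minimizes in cell B33.
a3, a2, a1, a0 = np.polyfit(counts, temps_c, deg=3)
print("A0..A3 (floating point):", a0, a1, a2, a3)

# Evaluate the fit at the calibration points to judge accuracy, comparing
# predicted and target temperatures as the spreadsheet's accuracy check does.
predicted = np.polyval([a3, a2, a1, a0], counts)
print("worst-case error (degC):", np.max(np.abs(predicted - temps_c)))
```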
[Figure 1. Thermistor Coefficient Calculator Spreadsheet. The screenshot shows the temperature/resistance table in columns A and B, the computed A/D counts, the best-fit temperatures from the 3rd-order polynomial, the polynomial coefficients A0-A3, the resulting data-flash thermistor constants, and a bounds check on the partial results.]

The instructions embedded in the spreadsheet are:
1. Insert temperature and resistance values in columns A and B.
2. Verify that the A/D count value in cell C2 does not exceed 27000. If necessary, raise the value of R1 in cell B24 to a higher standard resistor value so that C2 is under 27000.
3. Select Solver from the Tools menu. Use the Add-Ins menu if it is not available.
4. Set Target Cell to $B$33.
5. Choose Equal to: Value Of, and enter 0.
6. Set "By Changing Cells" to $E$25:$E$28.
7. Press the Solve button. Accept the solution, even though it is reported as not "feasible".
8. Compare columns A and D to evaluate the accuracy. A lower number in B33 indicates a better fit.
9. Linearity may be improved by changing R2 in some cases.
10. Ensure none of the values in columns J or K exceed +/- 32767.

For the example sheet shown (R1 = 8450, R2 = 61900, Vref = 3.3 V), the resulting data-flash thermistor constants are A0 (Coefficient 4) = 4032, A1 (Coefficient 3) = -7838, A2 (Coefficient 2) = 22162, A3 (Coefficient 1) = -30051. The polynomial coefficients have alternate names in some fuel gauges.

3 Thermistor Tables
Enter the data for the desired thermistor into cells B2 ~ B22, which correspond to the temperatures in column A. Some vendors include resistance tables in their catalog; others provide a calculator for you to generate them. If a given vendor only supplies a small table with multiples of 10°C, then use it as-is in the spreadsheet, but include some of the degree-resistance pairs twice to fill up the table of 21 pairs.

4 Circuit Modifications
For maximum accuracy, the input voltage to the A/D converter in the fuel gauge should be limited to around 82% of the reference voltage, which is the same as VCC in this case.
Looking at it another way, the A/D count should not exceed 27000 counts (82% of the full scale of 32767) for low-temperature readings that must be accurate. Column C displays the expected A/D count for a given temperature. Measurements between 27000 and 32767 will be degraded somewhat, but still useful. The recommended thermistor circuit, where R1 = 8.45 kΩ, R2 = 61.9 kΩ and the thermistor is 10 kΩ at 25°C, should satisfy the above requirement in most cases. However, if a 10-kΩ thermistor cannot be used, the fixed resistors should be modified in cells B24 and B25 to optimize the measurement. B25 is used to linearize the thermistor curve somewhat, enhancing the polynomial curve-fitting accuracy.
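A quick way to sanity-check the 27000-count guideline for a candidate divider is sketched below. The network equation is an assumption on my part (thermistor in parallel with R2, that combination in series with R1); it happens to reproduce the counts in the example sheet, but verify it against the actual schematic before relying on it.

```python
# A sketch of the headroom check from step 2 of the spreadsheet instructions:
# the coldest-temperature A/D count should stay below 27000 (~82% of 32767).
# The divider topology assumed here (thermistor || R2, in series with R1)
# matches the example numbers but is my reading of the circuit, not TI's text.
VREF = 3.3           # volts; also the A/D reference in this example
FULL_SCALE = 32767
LIMIT = 27000

def ad_count(r_thermistor, r1=8450.0, r2=61900.0):
    r_par = r_thermistor * r2 / (r_thermistor + r2)   # thermistor in parallel with R2
    v = VREF * r_par / (r1 + r_par)                   # divider output voltage
    return v * FULL_SCALE / VREF                      # volts -> A/D counts

cold = ad_count(67770.0)   # 67.77 kohm is the -20 degC entry in the example table
print(round(cold), "counts at -20 degC:", "OK" if cold <= LIMIT else "raise R1")
```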
{"url":"http://www.datasheetarchive.com/AT103/Datasheet-082/DASF0043109.html","timestamp":"2014-04-19T14:37:03Z","content_type":null,"content_length":"18963","record_id":"<urn:uuid:5bef2c02-af22-41ad-b607-187bfc5b2575>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Luks: Computing in quotient groups. Papers citing this work, results 1 - 10 of 16.

Bull. Amer. Math. Soc., 1992 (cited by 40). Abstract: In this paper we discuss the basic problems of algorithmic algebraic number theory. The emphasis is on aspects that are of interest from a purely mathematical point of view, and practical issues are largely disregarded. We describe what has been done and, more importantly, what remains to be done in the area. We hope to show that the study of algorithms not only increases our understanding of algebraic number fields but also stimulates our curiosity about them. The discussion is concentrated on three topics: the determination of Galois groups, the determination of the ring of integers of an algebraic number field, and the computation of the group of units and the class group of that ring of integers.

International Congress of Mathematicians, 1998. Abstract: This note describes recent research using structural properties of finite groups to devise efficient algorithms for group computation.

Proc. 36th IEEE FOCS, 1999 (cited by 11). Abstract: ... l over the generators grows as c^l for some constant c > 1 depending on G. For groups with abelian subgroups of finite index, we obtain a Las Vegas algorithm for several basic computational tasks, including membership testing and computing a presentation. This generalizes recent work of Beals and Babai, who give a Las Vegas algorithm for the case of finite groups, as well as recent work of Babai, Beals, Cai, Ivanyos, and Luks, who give a deterministic algorithm for the case of abelian groups.

In Proceedings 25th Symposium on Mathematical Foundations of Computer Science, 2000 (cited by 7). Abstract: The boolean hierarchy of k-partitions over NP for k >= 2 was introduced as a generalization of the well-known boolean hierarchy of sets. The classes of this hierarchy are exactly those classes of NP-partitions which are generated by finite labeled lattices. We extend the boolean hierarchy of NP-partitions by considering partition classes which are generated by finite labeled posets. Since we cannot prove it absolutely, we collect evidence for this extended boolean hierarchy to be strict. We give an exhaustive answer to the question of which relativizable inclusions between partition classes can occur depending on the relation between their defining posets. The study of the extended boolean hierarchy is closely related to the issue of whether one can reduce the number of solutions of NP problems. For finite cardinality types, assuming the extended boolean hierarchy of k-partitions over NP is strict, we give a complete characterization when such solution reductions are possible.

2001 (cited by 5). Abstract: We introduce a new algorithm to compute up to conjugacy the maximal subgroups of a finite permutation group. Our method uses a "hybrid group" approach.

1999 (cited by 4). Abstract: This dissertation investigates deterministic polynomial-time computation in matrix groups over finite fields. Of particular interest are matrix-group problems that resemble testing graph isomorphism. The main results are instances where the problems admit polynomial-time solutions and methods that enable such efficiency.

2002 (cited by 4). Abstract: For an integer constant d > 0, let Γ_d denote the class of finite groups all of whose nonabelian composition factors lie in S_d; in particular, Γ_d includes all solvable groups. Motivated by applications to graph-isomorphism testing, there has been extensive study of the complexity of computation for permutation groups in this class. In particular, set-stabilizers, group intersections, and centralizers have all been shown to be polynomial-time computable. The most notable gap in the theory has been the question of whether normalizers of subgroups can be found in polynomial time. We resolve this question in the affirmative. Among other new procedures, the algorithm requires instances of subspace-stabilizers for certain linear representations and therefore some polynomial-time computation in matrix groups.

Groups '93 Galway/St. Andrews, volume 212 of London Math. Soc. Lecture Note Ser., 1995 (cited by 3). Abstract: ... "Algebra" in 1967 [Lee70]. Its proceedings contain a survey of what had been tried until then [Neu70] but also some papers that lead into the Decade of discoveries (1967-1977). At the Oxford conference some of those computational methods were presented for the first time that are now, in some cases varied and improved, work horses of CGT systems: Sims' methods for handling big permutation groups [Sim70], the Knuth-Bendix method for attempting to construct a rewrite system from a presentation [KB70], variations of the Todd-Coxeter method for the determination of presentations of subgroups [Men70]. Others, like J. D. Dixon's method for the determination of the character table [Dix67], the p-Nilpotent-Quotient method of I. D. Macdonald [Mac74] and the Reidemeister-Schreier method of G. Havas [Hav74] for subgroup presentations, were published within a few years from that conference. However, at least equally important for making group theorists aware of CGT were a number of applications of ...

Quart. J. Math. (Oxford), 1997 (cited by 3). Abstract: In this note, we consider the following problem. Let G be a finite permutation group of degree d, and let N be a normal subgroup of G. Under what circumstances does G/N have a faithful permutation representation of degree at most d?

Math. Comp., posted on May 1999 (cited by 3). Abstract: The lifting of results from factor groups to the full group is a standard technique for solvable groups. This paper shows how to utilize this approach in the case of non-solvable normal subgroups to compute the conjugacy classes of a finite group.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=356687","timestamp":"2014-04-20T12:33:11Z","content_type":null,"content_length":"35528","record_id":"<urn:uuid:84d94df2-e89d-465e-9fc0-0ca35c771eaf>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Hypergeometric functions in arithmetic geometry

Title: Hypergeometric functions in arithmetic geometry
Author: Salerno, Adriana Julia, 1979-
Abstract: Hypergeometric functions seem to be ubiquitous in mathematics. In this document, we present a couple of ways in which hypergeometric functions appear in arithmetic geometry. First, we show that the number of points over a finite field [mathematical symbol] on a certain family of hypersurfaces, [mathematical symbol](λ), is a linear combination of hypergeometric functions. We use results by Koblitz and Gross to find explicit relationships, which could be useful for computing Zeta functions in the future. We then study more geometric aspects of the same families. A construction of Dwork's gives a vector bundle of de Rham cohomologies equipped with a connection. This connection gives rise to a differential equation which is known to be hypergeometric. We developed an algorithm which computes the parameters of the hypergeometric equations given the family of hypersurfaces.
Department: Mathematics
Subject: Hypergeometric functions; Arithmetical algebraic geometry; Hypersurfaces
URI: http://hdl.handle.net/2152/18410
Date: 2009-05
{"url":"http://repositories.lib.utexas.edu/handle/2152/18410","timestamp":"2014-04-21T14:43:43Z","content_type":null,"content_length":"14890","record_id":"<urn:uuid:ca43ed59-9287-46ea-aef9-68e54eb35b3b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Another linear system

April 29th 2009, 04:45 AM

I know using the substitution method it plays out like so... I have to express the answer in interval notation. How would I do that, and since the answer is 0 = 0, is there a value for x and y at all?

April 29th 2009, 05:14 AM

Hi nmound,

The second equation is a multiple of the first equation, making them the same line. There is an infinite number of solutions. We call this system "consistent and dependent".

Solution: $(- \infty, + \infty)$
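The equations themselves did not survive in the archived post, so purely as an illustration (with a made-up pair in which the second equation is twice the first), here is how the same "consistent and dependent" situation looks when solved symbolically:

```python
# Illustrative only: the original system was posted as an image and is lost,
# so this uses a stand-in dependent pair (the second equation is 2x the first).
import sympy as sp

x, y = sp.symbols('x y')
eqs = [sp.Eq(x + 2*y, 3), sp.Eq(2*x + 4*y, 6)]

print(sp.solve(eqs, [x, y], dict=True))
# -> [{x: 3 - 2*y}] : y is left free, so there are infinitely many (x, y) pairs,
#    which is why the answer in interval notation is (-oo, +oo).
```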
{"url":"http://mathhelpforum.com/algebra/86418-another-linear-system-print.html","timestamp":"2014-04-21T08:16:14Z","content_type":null,"content_length":"4746","record_id":"<urn:uuid:9c3e6683-45a5-4ac8-b39c-9b998db175fa>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Number Theoretic Function t(n) or "tao" Proof

October 13th 2011, 06:49 PM

t(n) = number of positive divisors of n. For n >= 1, prove that t(n) <= 2*sqrt(n).
Hint: If d | n, then one of d or n/d is less than or equal to sqrt(n).
I do not know where to start with this problem. This is a homework question so I just need some assistance getting started with this problem. Also, I'm not sure how to use the hint in this problem. Thanks for your help.

Re: Number Theoretic Function t(n) or "tao" Proof

If you don't know how to handle a problem, you should try to look at a few examples first (and in this case, see how the hint plays a role in the example). Let's take a look at the problem for $n=60$. We'd like to count up the number of divisors for $60$. We can just use the "elementary school" method, by looking at pairs of divisors (we get one "big" and one "small"): $1$ and $60$, $2$ and $30$, $3$ and $20$, and so on. When can we stop checking? We stop at $\lfloor \sqrt{60}\rfloor=7$, since $8$ and its corresponding "big" divisor would exceed $60$. (In other words, they would have to multiply to $64$ or more.) So with respect to the original problem, if we want to count up the divisors, we only need to find the divisors less than $\sqrt{60}$ and double them (since everything comes in pairs). In this case, there are $12$ divisors, which is less than $2 \sqrt{60}$. Can you see how to apply this for the general $n$?

Re: Number Theoretic Function t(n) or "tao" Proof

Ok, that makes sense. So here is what I'm thinking: Clearly, the result holds for prime n, since t(n) = 2 if n is prime and 2 <= 2*sqrt(n) holds for any prime n. Thus, if n is composite, then n = d*e for some integers d and e. Either d <= sqrt(n) or n/d = e <= sqrt(n), meaning that one of these divisors is less than or equal to sqrt(n). Then since divisors come in pairs, there are t(n)/2 many divisors each less than or equal to sqrt(n). But then, how would I show that t(n)/2 <= sqrt(n)? Now, this part I'm stuck on.

Re: Number Theoretic Function t(n) or "tao" Proof

You're almost there. You only have to consider the worst case scenario: maybe all of the numbers less than or equal to $\sqrt{n}$ are divisors. (By the way, can you find an example where this happens?) Then, $\tau(n)/2=\sqrt{n}$. Otherwise, the number of divisors is less than that: $\tau(n)/2<\sqrt{n}$. In summary, we can say $\tau(n)/2\leq \sqrt{n}$, which gives the result. How does that sound?

Re: Number Theoretic Function t(n) or "tao" Proof

Quote: You're almost there. You only have to consider the worst case scenario: maybe all of the numbers less than or equal to $\sqrt{n}$ are divisors. (By the way, can you find an example where this happens?) Then, $\tau(n)/2=\sqrt{n}$. Otherwise, the number of divisors is less than that: $\tau(n)/2<\sqrt{n}$. In summary, we can say $\tau(n)/2\leq \sqrt{n}$, which gives the result. How does that sound?

oh, oh i know this one! 6, right? or is it 12?

Re: Number Theoretic Function t(n) or "tao" Proof

I'm having trouble making the connection that t(n)/2 <= sqrt(n). I see that there are t(n)/2 many divisors less than sqrt(n), but I don't see how I can determine that t(n)/2 itself is less than or equal to sqrt(n).

Re: Number Theoretic Function t(n) or "tao" Proof

suppose n is a perfect square. then n has exactly one divisor that occurs as its own quotient. all the others occur in pairs, one is less than √n, and the other is greater than √n.
therefore, in this case τ(n) ≤ 2(√n - 1) + 1 = 2√n - 1 < 2√n. on the other hand, suppose that n is not a perfect square. then all divisors of n occur in pairs, one is less than √n, one is greater than √n. if k is the greatest integer less than √n, then τ(n) ≤ 2k < 2√n. the pairing we are making is pairing d with n/d, for each divisor d ≤ √n.

Re: Number Theoretic Function t(n) or "tao" Proof

I figured it out now. Thanks to all who responded to this!
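A small computational check of the statement (not part of the original thread) makes the pairing argument concrete: count divisors d <= sqrt(n) together with their partners n/d, then compare with 2*sqrt(n).

```python
# Quick numeric check of tau(n) <= 2*sqrt(n), counting divisors in pairs
# (d, n//d) with d <= sqrt(n), exactly as the hint suggests.
import math

def tau(n):
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2   # a square root divisor pairs with itself
        d += 1
    return count

assert all(tau(n) <= 2 * math.sqrt(n) for n in range(1, 10000))
print(tau(60), "divisors of 60; bound:", 2 * math.sqrt(60))   # 12 vs about 15.49
```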
{"url":"http://mathhelpforum.com/number-theory/190329-number-theoretic-function-t-n-tao-proof.html","timestamp":"2014-04-19T05:35:18Z","content_type":null,"content_length":"55751","record_id":"<urn:uuid:fb5f7578-e376-419c-adaf-6c4ebf761320>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
<h3 class="jump">-webkit-marquee-speed</h3> <p>Defines the scroll or slide speed of a marquee box.</p> <pre>-webkit-marquee-speed: <em... Defines the scroll or slide speed of a marquee box. -webkit-marquee-speed: speed -webkit-marquee-speed: distance / time The scroll or slide speed of the marquee. The distance term in the speed equation. The time term in the speed equation. Types Allowed Integers, time units, nonnegative values The marquee moves at a fast speed. The marquee moves at a normal speed. The marquee moves at a slow speed. This property can either take one speed parameter (slow, for example) or a measure of distance and a measure of time separated by a slash (/). Available in Safari 3.0 and later. (Called -khtml-marquee-speed in Safari 2.0.) Available in iOS 1.0 and later. Support Level Under development.
{"url":"http://hkitago.tumblr.com/post/17635505658","timestamp":"2014-04-20T20:55:24Z","content_type":null,"content_length":"23168","record_id":"<urn:uuid:8239d20b-4701-46f8-8dbc-c498cb67f188>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
When is one thing equal to some other thing? For those of you still looking, here's a fun introduction to Category Theory by Barry Mazur. If one already understands what categories, functors, natural transformations, and adjoints are, is there a good place to look for the category-theoretic interpretation of monads? Reading off the categorical definition and considering that the categories in question represent languages (or at least their semantics) should get you off to a good start. In practice there isn't a category-theoretic interpretation of monads, just a category-theoretic definition. Categorically, a monad is the composition of two adjoint functors. If F -| U, then U o F is a monad. IIRC, they were invented to model substitution in finitary(?) equational theories. The fact that they are useful in modelling effects in functional programs came as something of a surprise. Your best bet is to read Moggi's paper "Notions of Computation and Monads" -- you're probably better qualified to read it than I am. :) This, I think, is exactly what I was looking for. Thanks : ) After cursory reading, I think that the exposition has some serious defects (if I missed some profound insight, I apologize). 1. The 'bare bones' set theory suffers from the Russell's paradox due to unrestricted comprehension (which is sort ofrestricted at the category of sets level but not at the set-as-an-object level). 2. The 'bare bones' set theory does not have the axiom of infinity which makes the natural numbers object impossible. 3. The whole section ten does not make any obvious sense starting with its name and including the conclusion: "That is, the natural numbers, N, is an initial object of P." Taking the conclusion at at its face value, one has to arrive at an inescapable conclusion that the natural numbers object is an empty set because, as every CT fan knows, the empty set is the unique initial object in the category of sets (Set). [added later] After re-reading the article, I realized I made a mistake with respect to category P. P is actually a category whose objects are not sets but triples (X, x , s). So section ten would be OK if the author fixed (1) and (2). The article contains some dark hints that are hard to interpret: "This, of course, has shock-value: it recruits the entire apparatus of propositional verification to its particular end. " What's the above supposed to mean? The fact that, especially when taken broadly, mathematical inductin has extraordinary consequences, is amply illustrated by the ingenious work of Gentzen5." The author confuses the mathematical induction with the transfinite induction, using which Gentzen proved that PA is consistent. What's extraordinary about that ? I think that, for example, Goldblatt's book is a much better read. Are those flaws part of the text? (1) The text states: "The repeated use of the word repetoire is already a hint that something big is being hidden. It would be downright embarrasing, for example, to have replaced the words 'repetoire' int he above description by 'set,' for besides the blatant circularity, we would worry, with Russell, about what arcane resrictions we would have to make about the universal quantifier, once that is thrown into the picture". The author does not profer an axiom of comprehension, and explicitly sets aside notions of quantifiers, but is at pains (during the parts that I have read) to separate the concept of 'class' from the concept of 'set', so I do not understand how the theory he presents suffers from Russel's antimony. 
(2) Does the "bare bones" set theory need an axiom of infinity, or rather, is such a discussion truly germaine to this text? The author reminds us that the set theory being used is not as important as the concepts of category theory on which he is concentrating, and on page 10 introduces the term "B.Y.O.S.T." (bring your own set theory). (3) You already acknowledged. Being very much a beginner in category theory, I must say I've found the text, so far, to be a useful and easily absorbed contribution to a diffucult subject. How does merely using words 'class' and 'repertoire' prevents one from saying: {X | X not in X} ? The author does not profer an axiom of comprehension, and explicitly sets aside notions of quantifiers, but is at pains (during the parts that I have read) to separate the concept of 'class' from the concept of 'set', so I do not understand how the theory he presents suffers from Russel's antimony So how can one say anything about sets without some axiom of comprehension ? If the axiom is implied, {x | P(x)}, then his sets surely suffer from the paradox. Likewise, how can one say something about a set (or practically anything) without using the notion of the universal quantifier ? (2) Does the "bare bones" set theory need an axiom of infinity, or rather, is such a discussion truly germaine to this text? Because if one does not assume some sort of axiom of infinity, one cannot prove that the natural number object even exists in his category. It makes the subsequent discussion unfounded. The 'bare bone' set, as I said before, also admits arbitrary collections which leads to Russell's paradox. So, no, you cannot bring your any of own arbitrary sets, only some. 3. Right. To me it seems that if you bring a bad set theory you only have to blame your self. i.e. B.Y.O.S.T. realy means bring a set theory that you think is good. Because if one does not assume some sort of axiom of infinity, one cannot prove that the natural number object even exists in his category. It makes the subsequent discussion unfounded. Where does one draw the line, "assuming" the axiom of infinity or assuming the existence of some appropriate set theory? Of course, one needs to explore these concepts, but the paper is about category theory, not ZF set theory. Since the natural number structure in category theory is parasitic on one of Dedekind's theorems (recursive definition), the author should have explained how his 'bare' sets might have such structure. Without such explanation, the section does not make sense. The article contains some dark hints that are hard to interpret: "This, of course, has shock-value: it recruits the entire apparatus of propositional verification to its particular end. " What's the above supposed to mean? Is he not referring to the part of the definition which states "if P(n) is any proposition ... , then P(n) is true for all n in N". The definition relies on a notion of knowing what a proposition is and is not, and what a proposition means. The definition also quantifies over all propositions, which is kind of shocking, is it not? Is he not referring to the part of the definition which states "if P(n) is any proposition ... , then P(n) is true for all n in N". The definition relies on a notion of knowing what a proposition is and is not, and what a proposition means. The definition also quantifies over all propositions, which is kind of shocking, is it no The account is dubious for the following reasons: 1. It's a second order Peano postulates formulation (a strawman). 
There is no need to rely on second-order logic. 2. The induction principle can be reformulated in set terms without appealing to propositions and such: P3: "If P is a subset of N, 1 is in P, and the implication (x in P ==> successor(x) in P) holds, then S = N". Is it hard to understand ? 3. The induction principle is a *theorem* in the standard set theory. What's so shocking about it ? I guess the whole thing is shocking (to me) because whichever way you look at it you have axiom schemes that are instantiated once for each proposition in your language (i.e. a scheme of infinitely many axioms) such as the axiom of separation and the axiom of replacement in ZF set theory. It took me a little while when I learned it to understand the difference between that and higher order The definition of natural numbers (P3) in the parent post cannot effectively be applied without the axiom of separation, which is where your proto-quantifying over all propositions comes back into it: The definition merely hides the fact that this is what is happening. I think the author explicitly acknowledges that you ultimately need these devices, in the series of paragraphs culminating in: When we gauge the differences in various mathematical viewpoints [...] for ultimately they may require exactly the same things, but also to pay attention to the order in which each piece of equipment is introduced [...] I guess the whole thing is shocking (to me) because whichever way you look at it you have axiom schemes that are instantiated once for each proposition in your language (i.e. a scheme of infinitely many axioms) such as the axiom of separation and the axiom of replacement in ZF set theory. I do not understand what you are saying at all. In the first order Peano formulation, ZFC with its axioms is irrelevant. In the second order formulation, one does not need the infinite induction axiom schema, naturally. In fact, induction is a theorem in ZF. The (P3) definition you gave uses the term "if P is a subset of ...". How do you define the notion of subsets without ZF, or something like it? And in ZF, the mechanism of constructing subsets is the axiom (scheme) of separation which is a single sentence per proposition, i.e. infinitely many sentences. So the (P3) definition harnesses the machinery of propositions. How do you define the notion of subsets without ZF The notion of subset is trivial and does not require ZF or anything like that. What actually is required is paradox avoidance caused by unrestricted comprehension. It can be achieved in many various ways. ZF is one example with its axiom of separation/subset/restricted comprehension schema, NBG is another example which admits a finite axiomatization. And in ZF, the mechanism of constructing subsets is the axiom (scheme) of separation which is a single sentence per proposition, i.e. infinitely many sentences Why is it a problem ? So the (P3) definition harnesses the machinery of propositions I do not understand that at all. P3 is a simple theorem in ZF or NBG. It is not a problem that the axiom scheme of separation requires infinitely many axioms. The point however was to show that (P3) does indeed harness the notion of propositions, because you do not use (P3) over ZF as a lemma without also using separation to construct subsets. I get your point about the author's original definition being (misleadingly) a second order formulation, because it was expressed in the manner "for all propositions P ...". 
On the question of comprehension: Can you show me where the author suggested that unrestricted comprehension is on the cards? I do not see it. I do not understand what you mean by: The point however was to show that (P3) does indeed harness the notion of propositions, What's "harness the notion of propositions" supposed to mean in simple words ? because you do not use (P3) over ZF as a lemma without also using separation to construct subsets. Once again, you do no use separation to "construct" anything, you use it to *exclude* the collections that would otherwise have led to Russell's paradox. Please clarify what exactly you've meant by your statement. On the question of comprehension: Can you show me where the author suggested that unrestricted comprehension is on the cards? I do not see it He did not say explicitely that his "bare sets" allow unrestricted comprehension, but unless such restriction is clearly stated, the default is to assume the naive set theory with unrestricted comprehension. After all, there are several ways to avoid the paradox: ZF, NBG, NF, etc. He sort of hinted that the collection of objects in his category of "bare sets" are not sets (still not good enough but so be it), but he did not impose any restriction on the objects/sets themselves. I believe these quotes (elided in places) are explicit enough for the purposes of the text, is it not? [ It we were to equate [class] with "set" ... ] we would worry, with Russell, about what arcane restrictions we would have then to make regarding our universal quantifier, once that is thrown into the picture. the standard word is class and the notion behind it deserves much discussion; we will have some things to say about it in the next section. In short by a class, we mean a collection of objects, with some restrictions on which subcollections we, as mathematicians, can deem sets and thereby operate on with the resources of our set theory. believe these quotes (elided in places) are explicit enough for the purposes of the text, is it not? [ It we were to equate [class] with "set" ... ] we would worry, with Russell, about what arcane restrictions we would have then to make regarding our universal quantifier, once that is thrown into the picture. I do not understand the above. When can the universal quantifier be thrown *out* of the picture ? In short by a class, we mean a collection of objects, with some restrictions on which subcollections we, as mathematicians, can deem sets and thereby operate on with the resources of our set theory. First, what are those restrictions ? The author never quite explains what he means. His library analogy is muddled. In fact, a class *can* contain all the possible sets as its members -- that's exactly what his collection of objects in the category of sets does. Secondly, and more importantly, he uses the notion of class when he talks about the collection of objects in his category, but not when he talks about the sets-as-objects (as I said before). Thirdly, ZF for example, does not have the notion of class. So assuming he applies the notion of class to the objects as well(although there is no indications he does), would it mean that ZF is excluded as an underlying set theory for his objects ? What's his "bare bones" set theory is then ? Is it powerful enough to have a set of natural numbers ? Is it too naive to have paradoxes ? First, what are those restrictions ? The author never quite explains what he means. His library analogy is muddled. 
In fact, a class *can* contain all the possible sets as its members -- that's exactly what his collection of objects in the category of sets does. Muddled? I don't like the analogy but I don't see where it is muddled. Where does the author say a class cannot contain all the possible sets as its members? he uses the notion of class when he talks about the collection of objects in his category, but not when he talks about the sets-as-objects (as I said before). Can you explain this in more detail? I cannot understand your point. Thirdly, ZF for example, does not have the notion of class. So assuming he applies the notion of class to the objects as well(although there is no indications he does), would it mean that ZF is excluded as an underlying set theory for his objects ? What's his "bare bones" set theory is then ? Is it powerful enough to have a set of natural numbers ? Is it too naive to have paradoxes ? Isn't this a strawman since he doesn't attempt to construct categories in which the objects are classes? What's his "bare bones" set theory is then ? Is it powerful enough to have a set of natural numbers ? Is it too naive to have paradoxes ? On page 12 he uses the words "as we shall see, such an initial object exists given that the underlying bare bones set theory is not ridiculously impoverished". Overall though, you seem to object to the pedagogic style. Muddled? I don't like the analogy but I don't see where it is muddled. Where does the author say a class cannot contain all the possible sets as its members? So what do you think is a class as explained by the 'library analogy' ? Can you explain this in more detail? I cannot understand your point. I've already said several times that he applies the notion of class to the collection of objects in his category of sets but not to the object themselves. What's not clear about that ? Isn't this a strawman since he doesn't attempt to construct categories in which the objects are classes? So *what* kind of objects are his sets ? ZF-sets, NBG-sets or some other sets ? He does not give a hint. "as we shall see, such an initial object exists given that the underlying bare bones set theory is not ridiculously impoverished". And what do you think that supposed to mean ? What is "not ridiculously impoverished" ? Overall though, you seem to object to the pedagogic style I just do not understand whole passages in the exposition, like the library analogy, "the position of the natural numbers as a discrete dynamical system, among all discrete dynamical systems." , the "it recruits the entire apparatus of propositional verification to its particular end" piece, etc. Do you ? What's that supposed to mean ? How can natural numbers be "a discrete dynamical system" What's that supposed to mean ? How can natural numbers be "a discrete dynamical system" Anyone with a hint of imagination can guess what he has in mind: the elements of N are the points of the dynamical system and the successor function is the time evolution operator. the elements of N are the points of the dynamical system and the successor function is the time evolution operator That does not make any obvious sense. What are "the points of the dynamical system" in the context of natural numbers as defined in ZF (or any other set theory) or in category theory ? What is "the time evolution operator" in the same context ? That does not make any obvious sense. 
What are "the points of the dynamical system" in the context of natural numbers as defined in ZF (or any other set theory) or in category theory ? What is "the time evolution operator" in the same context ? A discrete dynamical system (I should probably throw in another adjective such as "autonomous") is a pair (X,t) consisting of a set X and a map t : X -> X. We often think of X as a kind of space, its elements as points and t as a time evolution operator that defines the evolution of these points from one time-step to the next. The objects of Mazur's Peano category can then be thought of as base-pointed discrete dynamical systems. I think this underlying definition is pretty clear from his remarks: "This strategy of defining the Natural Numbers as “an” initial object in a category of (what amounts to) discrete dynamical systems, as we have just done, is revealing, I think; it isolates, as Peano himself had done, the fundamental role of mere succession in the formulation of the natural numbers." A discrete dynamical system (I should probably throw in another adjective such as "autonomous") is a pair (X,t) consisting of a set X and a map t : X -> X. We often think of X as a kind of space, its elements as points and t as a time evolution operator that defines the evolution of these points from one time-step to the next. The objects of Mazur's Peano category can then be thought of as base-pointed discrete dynamical systems. No kidding ? A discrete dynamic system is defined as a pair (X, t) where X is a topological space and t is a continuos function t:X->X. Then, the trajectory of point x in X is defined as the sequence (x, t(x), ..., tn(x)) where n in N and tn is the composition of 'n' applications of t. It just did not occur to me, probably due to lack of imagination, that the abstruse notion of DDS can be somehow revealing when applied to natural numbers. I honestly thought that the author had some other insightful idea in his mind. Natural numbers would be a ludicrously trivial DDS instance, on par with a light switch being a DDS or virtually any thing being a DDS by taking t as the identity 'morphism', if it were not for a fatal cirularity in definition. The very word 'discrete' [in the given context] is meaningless without first having an idea what natural numbers are. Further, forgetting the sloppiness for a moment (natural numbers clearly are not the initial object, rather the 'Peano triple' as an algebra is), how does such triple 'isolates the fundamental role of mere succession' any better, or even equally well, than the original Peano axioms do ? What's the point of making something simple and easily understandable be just the opposite ? So *what* kind of objects are his sets ? ZF-sets, NBG-sets or some other sets ? He does not give a hint. He says, somewhere, that it does not matter. His very point seems to be that the whole concept of categories is independent of (or, if you want, parametric in) the choice of a particular notion of He says, somewhere, that it does not matter But it does matter for his natural numbers exposition -- the 'bare' set theory should be powerful enough to contain them and not too powerful to avoid inconsistency. As soon as you specify what powerful enough actually means, you'll discover that natural numbers description as a category-of-sets structure is just a paraphrase of set theory treatment. ... and the author acknowledges almost exactly that ("is [...] a paraphrase") in the text. Again, pedagogic style. 
and the author acknowledges almost exactly that ("is [...] a paraphrase") in the text. I am not sure what exact words you are referring to. Besides, I do not know how he can possibly paraphrase something if the original phrasing was not provided. The comments I refer to have already been quoted in one of the posts on this thread. As you wish. Once again, you do no use separation to "construct" anything, you use it to *exclude* the collections that would otherwise have led to Russell's paradox. Please clarify what exactly you've meant by your statement. Sorry: Don't understand this at all. (1) I am quite used to seeing terms like "we construct the subset { x in Blah | P(x) }", i.e. we use (an instance of) the axiom of separation to construct a set. (2) How would you "use it to *exclude* the collections that would otherwise ...": what are these collections, and from what other things are you excluding them? (1) I am quite used to seeing terms like "we construct the subset { x in Blah | P(x) }", i.e. we use (an instance of) the axiom of separation to construct a set. So when you say "read-headed people", do you imagine a process whereby some force plucks read-heads and puts them into some sort of a container ("constructs a set"), rather then just imagining those folks as possessing some feature ('read-headednes') ? If so, I guess it's an OK intuition although unusual when one talks about ZF sets. It depends on whether you consider sets as existing 'out there' or rather as created anew every time you want to use them and destroyed when the interest in such sets is lost. 2) How would you "use it to *exclude* the collections that would otherwise ...": what are these collections, and from what other things are you excluding them? OK, let me rephrase. In ZF, there are predicates that cannot define a set, such as R = {x | not( x in x)} that can be restated as R = { x in V | not (x in x)} where V is the universal set. Since it can be shown that thanks to AoS V does not exist, { x in V | not (x in x)} does not define a set in ZF. In NBG, on the other hand R defines a collection called a proper class. Please ignore this post I assume you mean "red headed". Nope, I meant exactly what I wrote. you would (for instance) have previously established that "[all] people" is a set That 'people' is a set in *any* set theory is an obvious fact by virtue of people being a finite collection. It depends on whether you consider sets as existing 'out there' or rather as created anew every time you want to use them and destroyed when the interest in such sets is lost. Sorry I dont understand this. What's unclear about that ? One can consider sets as either existing in some Platonic universe or being formed 'constructively'. The latter viewpoint if pursued consistently leads to construstive mathematics with its gains and pains. Here you would not using separation to show that R is not a set, No true, first one shows that the universal set does not exist assuming AoS and not(x in x) so it follows that { x in V | not (x in x)} does not define a set. See Zermello's original proof for you would use a simple diagnonalisation argument, which requires reductio ad absurdum, and utilises purely logical inference. That is not correct. ... rather it is one of the axioms of ZF that is used to include (or "construct") certain sets into the universe of sets. 
OK, so given {{ x in V | not (x in x)} which is an instance of such AoS construction, you construct yourself right into the paradox because you did not *prove* that V = {x | x=x} does not exist in ZF, it's a 'legitimate' set definition. So what's good is your constructive reading of AoS ? In ZF, R is an empty class due to well-foundedness, and since NBG is a conservative extension of ZF, it is an empty class in NBG too. You are terribly confused. 1. The ZF language does not have a notion of class at all. 2. R is a proper class in NBG. 3. An empty class is a set, and R is clearly not a set. In fact, in NBG, R = V. OK, so given {{ x in V | not (x in x)} which is an instance of such AoS construction, you construct yourself right into the paradox because you did not *prove* that V = {x | x=x} does not exist in ZF, it's a 'legitimate' set definition. So what's good is your constructive reading of AoS ? No it is not an instance of such an AoS construction; You have not shown that V is a set (and never will). In AoS you certainly have to prove that the thing from which you are constructing a subset is a set; Where did I state otherwise? Why do you say "... because you did not *prove* that .. does not exist"? The reason you enter a paradox is because you did not use the AoS, you used some other strawman. You are terribly confused. 1. The ZF language does not have a notion of class at all. 2. R is a proper class in NBG. 3. An empty class is a set, and R is clearly not a set. In fact, in NBG, R = V Agreed - I wasn't thinking, specifically: 2. I was thinking of R = { x in V | x in x }, which is empty. 1. I should have said "In ZF, R - as a proposition - is false" (if that had ever been the case to start with) rather than in "In ZF, R is an empty class", but you could easily have understood what I meant, there is plenty of this kind of shorthand notation around. 3. I agree, in NGB, R = V, and is a proper class. The reason you enter a paradox is because you did not use the AoS But I did. Assume: V is a set of everything. Sounds like a reasonable assumption, after all some set theories do have the universal set. By separation: R = {x in V | not(x in x) } I did not prove that such set exists, however neither did you prove that such set does not exist so your theory is open to the paradox until you eliminate V. You just do not know whether V exists or not, there are other means to 'construct' sets in ZFC other than separation. 1. I should have said "In ZF, R - as a proposition - is false" (if that had ever been the case to start with) rather than in "In ZF, R is an empty class", but you could easily have understood what I meant, there is plenty of this kind of shorthand notation around. No, I do not understand that at all. R is not a proposition but something that the Russell predicate defines, the predicate being 'not (x in x)'. So, given, S = { {1,2}, {3,4}} and T = {x in S | not (x in x)}, one can easily (hopefully) see that S = T. So how did you arrive at the bizarre conclusion that T must be empty ? The collection would be empty if not (x in x) evaluated to false for every x, but, on the contrary it evaluates to true for every x in S. So when you say "read-headed people", [ ... ] I assume you mean "red-headed people". This is not prima facie an instance of separation until you have established that "[all] people" is a set. Why did you chose that example as a follow-up? 
So to answer your question: I would have to start off assuming "readheaded people" is a class ("readheadedness"), until you proved to me that it was a set. In my example, "Blah" is a set (I suppose I should have said that :-), previously constructed, and separation by P is used to construct a subset of it.

It depends on whether you consider sets as existing 'out there' or rather as created anew every time you want to use them and destroyed when the interest in such sets is lost.

I'm sorry, I don't understand this analogy.

OK, let me rephrase. In ZF, there are predicates that cannot define a set, such as R = {x | not( x in x)}, which can be restated as R = { x in V | not (x in x) }, where V is the universal set. Since it can be shown, thanks to AoS, that V does not exist, { x in V | not (x in x) } does not define a set in ZF.

Agreed, but where are you using separation to exclude R from being a set? Rather, you use a bunch of logical inferences (reductio ad absurdum) to prove that R cannot be a set, in which I can see no ZF axioms being applied. So the question remains: how do you (in a wide context) use separation to exclude things from being sets? I repeat my earlier claim: separation is used to construct subsets (OK, it may crop up as part of some proof that some other class is proper, but that is purely incidental).

In NBG, on the other hand, R defines a collection called a proper class.

Do not agree: due to well-foundedness, R is empty in ZF, and since NBG is a conservative extension of ZF, it is empty in NBG too. Therefore it is the empty set, and not a proper class.

...and why your program doesn't work unless you override both (the appropriate) equal and hash. And why there always is more than one equal... But it was a fun read anyway. Got me thinking, though, about what a programming language would look like if you replaced the term "equal" with "up to a unique isomorphism". I even wondered if you could get some simplification in Distributed Computing with his "I can put my finger on a specific isomorphism between the group of automorphisms". My guess is, as with most things of this sort, yes, but you have to be way smarter than the average programmer, dealing with far more abstract things than the average program. Somehow I suspect the next (or subsequent) generation of template metaprogrammers will have this stuff in their bones.

For something in that general direction you might consider what can be done with algebraic specification, which seeks to generate requirements that define a model "up to isomorphism", and makes use of category theory to allow for an essentially pluggable logic system via institutions. This article provides a good basic introduction. Also see this article by the same authors for an informal introduction. They make a good point in section 5, which is that one doesn't usually need models "up to isomorphism". What really matters is behavioural equivalence. For example, I could implement a list ADT using a tree datatype - as long as that implementation respects the list axioms, it doesn't matter that trees are not isomorphic to lists. I can still go ahead and substitute them into any context where lists were expected.
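(To make the "different datatype, same list axioms" point concrete, here is a small sketch in Python - names made up, not taken from the linked articles, and using a flat tuple rather than a tree as the second representation - of two implementations that satisfy the same axioms and are therefore behaviourally interchangeable even though their representations are not isomorphic.)

class ConsList:                      # representation 1: nested pairs
    nil = None
    @staticmethod
    def cons(x, l): return (x, l)
    @staticmethod
    def head(l): return l[0]
    @staticmethod
    def tail(l): return l[1]

class FlatList:                      # representation 2: a flat tuple used as a stack
    nil = ()
    @staticmethod
    def cons(x, l): return (x,) + l
    @staticmethod
    def head(l): return l[0]
    @staticmethod
    def tail(l): return l[1:]

def satisfies_list_axioms(impl, sample=(1, 2, 3)):
    """Check head(cons(x, l)) == x and tail(cons(x, l)) == l for a few values."""
    l = impl.nil
    for x in sample:
        built = impl.cons(x, l)
        if impl.head(built) != x or impl.tail(built) != l:
            return False
        l = built
    return True

if __name__ == "__main__":
    # Both pass, so any client written against nil/cons/head/tail can use either one.
    print(satisfies_list_axioms(ConsList))   # True
    print(satisfies_list_axioms(FlatList))   # True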
You can see echoes of this stuff all the way back, from Dijkstra's rather academic admonitions to not commit to choices you aren't forced to follow, to the very practical code-bumming advice of 60's assembly programmers and their ISA-level advice: "if you won't branch, don't test" -- both of these are pretty much equal to: "code, if you must, but only up to isomorphism". A database and its log file are examples of isomorphic data structures, but with differing optimizations (reading the former requires no decisions, writing the latter requires no decisions -- redundancy and uniqueness can be duals). When your hashing is used for caching, it's a similar story, and again introducing redundancy means that we're only dealing with addresses up to isomorphism. Over a network, there are numerous ways that a sequential stream can be sent as bags of packets, but we normally don't pay any attention, as long as they are all (up to isomorphism) the same as the original sequential stream. Ditto for a disk, which can be thought of as connecting a node and itself across time, rather than two distinct nodes across space. Some laptops have decent migration tools, so that one can expect one's new laptop to behave the same as one's old one, up to isomorphism -- note that, in general, one doesn't wish for equality here; there would be very little point in upgrading to the same box. etc. usw. (the trouble with abstract nonsense is that after a while, everything starts to look the same, only different...) quite a lispy paper (as they usually are when written by Henry Baker) but really gets to the heart of the ideas of object equality in functional and non-functional languages: a very interesting read on the subjet.
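(Circling back to the equal-and-hash remark earlier in the thread: in most languages the two must be overridden together, or hash-based containers misbehave. A minimal Python illustration - the class is hypothetical, not taken from Baker's paper.)

class Point:
    """Value object: __eq__ and __hash__ are overridden together, so that
    a == b implies hash(a) == hash(b), which hash-based containers rely on."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # Must be derived from exactly the fields that __eq__ compares.
        return hash((self.x, self.y))

if __name__ == "__main__":
    # Two distinct objects that denote the same value behave as one key.
    seen = {Point(1, 2)}
    print(Point(1, 2) in seen)   # True only because eq and hash agree
    # Note: in Python, defining __eq__ without __hash__ makes instances
    # unhashable; in languages where a default identity hash survives
    # (e.g. Java's hashCode), the failure is silent lookup misses instead.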
{"url":"http://lambda-the-ultimate.org/node/1338","timestamp":"2014-04-16T04:13:44Z","content_type":null,"content_length":"67317","record_id":"<urn:uuid:82d558eb-581a-4bdd-b7bd-421389c4a1f6>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
How can I help special needs students feel included in class discussions?

Question: I want to include all students in class discussions, but some of my special education students tune out during meetings. What can I do to make them feel included?

Answer: The Accessible Math Project (NSF HRD-0090070) is working with teachers to learn more about how to successfully include students with special needs in Investigations math classrooms. One of the most common questions discussed by these teachers is how to facilitate meetings in ways that include all students, whatever the range of skills and needs. Students with special needs or those who are struggling with mathematics need time to practice doing and thinking mathematics at their own level, yet they also need to understand the tasks at hand and listen to what their classmates are thinking. Balancing the needs of the range of students and providing structures to help them learn are challenges in teaching Investigations. We offer here some of what we have learned from observing and talking with experienced Investigations teachers about facilitating inclusive mathematics meetings. Much of what we describe is appropriate for the whole class; only a few of the teachers' recommendations are specific to students with learning disabilities.

First and foremost, create classroom community
Meetings that are truly inclusive depend on the attitudes and behaviors of the students toward one another. Teachers build inclusive classroom communities from the beginning of the school year, letting students know how they expect them to support one another as learners. Comfortable classroom communities are based on respect and acceptance of differences. Students with special needs, both those who have difficulty keeping still as well as those who are hesitant to join in, feel safer in classroom communities with clear routines and expectations for behavior. These are classrooms where the teacher spends little time disciplining students, because a reminder is enough. For example, we have heard teachers say "We don't laugh at people in this classroom" or "You can lie down as long as you can pay attention," or "Make sure you listen to Michael. His question may be your question." Humor is a natural part of the classroom and can serve to make a point and acknowledge differences among people without a heavy hand. Mistakes are seen as chances to learn, and the teacher, as well as the students, is comfortable not knowing. In a class discussion, when a student remarked that the teacher had made a mistake illustrating a fraction, the teacher replied with a smile, "We all have our strengths and weaknesses; you are more organized than I am." As part of setting guidelines for how students treat one another in your class, prepare students in how they will behave during meetings so that brief reminders are all that is needed for respectful behavior.

Support all students to be actively involved
When asking questions, provide wait time twice. First, allow enough time for all students to form an answer. Write the question as well as saying it. Tell students you are waiting and will not call on the first person with their hand up. Some teachers ask students to show only a thumb up when they are ready to answer a question. Others ask students to jot down their answers for the teacher to see before discussing the problem. When you call on a student, allow enough time for the student to fully explain his or her solution. Expect other students to stay quiet without their hands up.
After you have introduced a game or posed a problem, ask two or three students to explain the game or put the story problem into their own words. Wait to ask a hesitant student until a classmate has rephrased the problem, so that the student has another chance to listen before saying it in his or her own way.

Provide extra support or an alternative activity
If you know that the introduction to a new activity will take a long time, plan with a paraprofessional or student teacher to sit near the students who are likely to need extra support. An even better way to keep students actively part of the group is to teach them the activity or game in advance outside of class time so that these students will be prepared and perhaps ready to help with the introduction to the activity. If you have no assistant, arrange a signal with students who tend to get restless or have difficulty paying attention, which you can give to suggest they move away from the group and start work. Some teachers provide children who have difficulty sitting still and making sense of what the group is doing with other work they can do while the class meets, and then give them enough of an introduction so they can join in the day's activity. These teachers use meetings for the purposes described below.

1. Introducing an activity
The challenge of introducing an activity to the whole class at one time is to make sure that all students understand the task. Present the problem in several different ways. Provide materials for students to work with when useful (such as pattern blocks or cubes or numeral cards). Write problems (and solutions) on chart paper or the blackboard, or use an overhead projector, as well as saying them aloud.

In the following example, the teacher anticipated what her second grade students might need in order to do the day's activity successfully. In this task one student makes a shape with geo-blocks, and then hides it from view. The partner must create a copy of the design by listening to the designer's directions. Ms. K explains that students will work in partners and take turns making designs with 8 or fewer blocks. Then, their partner will try to reproduce it without looking. She makes a design in front of her on the floor and places a screen so the students cannot see it. She calls on students to ask her questions about her design that can be answered with either "Yes" or "No". After they ask a few questions, she asks the students to remind her of the rules. She writes the rules on the board as students suggest them.
8 or fewer pieces
Ask questions that are answered yes/no
You can place shapes on top of each other to make a 3D design.
Ms. K also writes words that students mention during the conversation: flat, standing up, looks like, north, above, on top, almost connecting, below, left, right. Ms. K draws two shapes on the board, one above the other. She writes "above" next to the higher one and "below" next to the lower one.
Ms. K: At the end I will ask what words were helpful. Take turns. You are going to need a tub of blocks and a basket for a divider. Today you will work with your blue list partners. (The teacher has made and posted several different arrangements of partners. She tells students which list to use today.)
As children settle with their partners at tables, Ms. K uses one pair to demonstrate to the class how to sit across from your partner and how to place the divider. The paraprofessional sits near a pair of students who she thinks may need extra help. Ms. K moves around the room observing different pairs of students.
Spend only enough time introducing an activity so that most of the students understand what they are to do and can start work. Make yourself available to students who need more help getting started, by inviting them to continue to work with you. The following is an excerpt from an introduction to subtraction problems for a group of students who have been struggling in Ms. S's first grade class. She knows that these students need extra help interpreting story problems. She has set a task for her other students that they can do independently with a student teacher overseeing their work. Ms. S: I am telling you a new story today. When I went on vacation I took a bag, and I decided to put in 5 pencils. I thought I might want to write letters. But 3 pencils fell out. Who remembers what happened in the story? Vincent: You had a bag and 5 pencils were in the bag and 3 fell out. Ms. S: Who else? Tara: You had 5 pencils in the bag, and you went to the beach and you lost three. Ms. S: One more. Vanessa: You put 5 pencils in the bag and you lost 3 pencils. Ms. S: Does anyone know how many are still in the bag? Maria? Maria: Three. Another student holds up 2 fingers and says 2. Ms. S: Wiggle your fingers if you agree with 2. Tara: You lost 3 and 2 more make 5. Ms. S.: We're going to check. How many are in the bag? Students: 5. She passes out cut out pictures of pencils. Students have a worksheet with an outline of a tote bag. Ms. S: We'll solve it together. How many pencils did I have in the bag. Kids: 5. Ms. S: Everybody try it. (The students put 5 pencil pictures on their tote bag picture.) Ms. S: Then what happens? Maria: You lost 3. (She holds up 3 fingers.) Ms. S: Get rid of 3 pencils. (Students take 3 pencil pictures away.) How many do I still have? Students: 2. Ms. S: At the end of the story do I have more or less? Students: Less. Ms. S: How did you know? Ben: Because you lost 3. Ms. S: Did I put them in or did they fall out? Students: Fell out. Ms. S. knew that these students had difficulty figuring out the action of the problem and would not have been able to start working on solving word problems independently without a concrete introduction. The students in her group then tried a different pencil problem on their own. 2. Doing mathematics together Doing some mathematics during a meeting allows the teacher to see how students approach the problem and gives students an immediate way to engage with the mathematics. For some kinds of problems, teachers provide an easier and a harder problem for students to choose between; for others, they write a problem with several steps and expect that all students will be able to complete the first step and some will do more. What follows is an example of a longer meeting in which students do and discuss some multiplication before going off to work in small groups. On the previous day Ms. G's class worked on the problem 14 x 6 in groups of three, but did not discuss their work in the whole group. Some students were using repeated addition. Others were trying to use the traditional multiplication algorithm, but did not know what number to carry, after starting with 6 x 4. Today Ms. G. poses a different problem for the students to do during the whole group meeting. On the meeting rug, Ms. G has put out a whiteboard with a marker and a piece of paper towel on it for each student. The students come in from another class, choose one of the boards, and sit on the rug where the board is placed. Three students sit on chairs at the edge of the rug. Ms. 
G reminds students not to click their felt markers on the boards or whisper. She writes "17 x 4" on her white board and asks students to think of the multiplication facts they know that can help them, and to try different ways that people used the other day. Ms. G: Think what the 7 really means, what the 1 really means. Students settle right into work. Ms. G observes the students while they are working. When some students are finished and some are still working, Ms. G suggests that those who are finished try doing the problem another way. 3. Discussing students' strategies When you are planning a meeting for students to discuss their problem-solving procedures, decide what the main ideas are that you would like to focus on, or the main strategies you expect to see or would like to introduce. Instead of calling on volunteers to share strategies, it can be useful to observe students at work and pick out two or three examples of solution strategies that you would like students to discuss. Some teachers provide whiteboards, clipboards, or student sheets to write on. Students as well as the teacher can look at one another's solutions and point out what they observe. You can ask a few of the students or student groups to write their procedures on the board or on large paper and post them in front of the room before the start the meeting so that everyone can see them. In this discussion Ms. G's goal is to focus on whether students seem to be making sense of what they are doing and finding the correct answer reasonably efficiently. Because the students have written on whiteboards, they can show their work when she calls on them. She decided to call on Donald because she wanted him to show his strategy of adding the tens and then the ones. Ms. G: I'm asking Donald to go first because I've never seen that way. When I give you another problem you might want to take a risk and try Donald's way. Ms. G copies Donald's problem solution, (10 + 10 + 10 + 10 = 40, 7 + 7 + 7 + 7 = 28, 40 + 28 = 68) from his white board onto her board and asks him, "How do you know it works?" Donald: I know that 17 is 10 + 7. I broke it up into 4 tens and 4 sevens. I added the tens; 40. I added the sevens. That gave me 28. Twenty eight plus 40 is 68. Ms. G: I saw some people try your way. Why could you do 10 + 7 four times? Donald: Because 17 equals 10 plus 7. Ms. G: He knows that 10 is easy to work with. Do you have a question for Donald? Philip: That is a complicated way. Ms. G: That's okay; that won't be your way. Philip: Do you mean you add the 17's up? Donald: No, I added 10s and 7s. Ms. G: Why did he do it 4 times? Lucy: Because it said seventeen times four. Ms. G: If the problem said seventeen times six, how many times would you do it? Kids: Six. To identify certain strategies, it is helpful to give them names (such as "breaking up into tens and ones") and ask who else did the problem in the same way. Here a student volunteered that her strategy was similar. Corinne: I did the problem that way. Two tens made 20 the other two tens made 40 and 4 times 7 is 28; 40 plus 28 is 68. Whenever you think any of the students who have difficulty will be able to present their solution, ask them to show it as one of the first because it might be both more accessible and less complex than other methods you want to highlight. Or, instead of calling on the student to explain his or her method, you can summarize what you noticed and, with the student's help, write his or her procedure on chart paper. 
You might invite a child who has difficulty with the math to demonstrate the solution another child has written or explained. When Ms. G called on Michael, she had seen on his white board that he had a successful strategy. She did not know he also had then used his answer to do a "number of the day" problem. Michael often does work that is similar to what the class did a day or two previously instead of staying with the work that the class is currently doing. She asks him his solution to 4 x 17. Ms. G: Do you want to share your method Michael? Michael: I did it another way. (He reads from his whiteboard) 100 minus 33 plus 1 equals 68, 100 minus 34 plus 2 equals 68. Remember that long timesing [sic] and adding chart? That's how I did it. Ms. G: Do you mean number of the day? Michael: Yes. Ms. G: How did you get 68? Michael: I did 100 take away 33, cross out 1, make 10... Ms. G: The problem is 17 x 4. How did you know it was 68? Michael: I put one group of 4 sevens together. I added the sevens by doing the 7's tables 7, 14, 21, 28, and [then I did] 4 tens is 40. Ms. G: so you knew another way to do 4 x 17. You knew it was 17 four times. Other students: That's a good way. I did it that way. By her questioning Ms. G was able to help Michael talk about the strategy he used that connected to the task that the class is working on. Sometimes it is useful to ask a student to describe a strategy that didn't work very well and discuss what the student might do differently. Carlos: I did four 17 times. [Had written fewer than 17 fours in a column.] I knew that wasn't the answer so I added 3 more. I counted it, and I knew it wasn't the answer. Ms. G: So you did the opposite of Michael. You did seventeen 4's instead of four 17's. How did you know to add 3? Carlos: Because I got 65 and I knew it was wrong. Ms. G: I want you all to listen. When you add 4 seventeen times, what often happens? Shanna: You make a mistake. You might add 1 less or 1 more. Ms. G: Carlos, I want you to check your work, and maybe try a different strategy. I want you to find a way that you can do that is quick and efficient. Because doing the multiplication and sharing strategies took longer than usual, there is little time left for a work period. Ms. G distributes sheets with two more problems. Ms. G: Some of you shared ways. Now I want you to work in groups to work on another problem. Teach each other your ways. We don't have a lot of time. You have about 15 minutes to do the problem and 4. Whole class reporting/checking work It is not necessary to always end an activity with sharing strategies. Many days it is valuable to use the whole math period for students to work and end the work period without a concluding discussion. At other times, use a shorter form of whole class reporting, where everyone checks his or her own work. Here are some examples: "Everyone look at their multiple tower to see what the tenth number is. Read some out. Notice that it ends with 0. Does anyone have an earlier number that ends in zero? What did you multiply to get that number?" "We have discussed different ways to do subtraction problems. Who used a 100 chart today? Who counted up? Who counted backwards from the larger number? Did anyone use an open number line? Did anyone try a method that is new for them?" You might ask students who worked together to check with another group that they agree on a list of solutions (e.g. 
for problems such as all the fractions less than one half, or the percent equivalents for the eighths, or all the ways to take a total of 8 peas or carrots).

The advice teachers we work with give most consistently is to focus each meeting on learning mathematics or preparing to do a mathematics activity.

1. Keep meetings short and focused. Keep the important mathematics in the lesson in mind when planning meetings. This can focus the discussions to meet the needs of the class as a whole as well as the needs of the students who are struggling with mathematics.

2. Set specific guidelines and expectations for behavior during meetings. In many primary grade classrooms, students gather on a rug for the class meeting and then return to their places or choose places for work-time. In fourth and fifth grade classrooms, students often stay at their own desks for meetings. Some teachers vary the setting, with children staying at their desks when the meeting will be short, and gathering together when a longer time is needed. To facilitate a smooth transition to meeting on the rug, some teachers assign students places to sit on the rug, changing them every month or so. Others place circles, mats, or white boards to clearly mark the places available for students to sit. Others allow students to sit wherever they want in a circle as long as they can see the teacher and all of the other students. They might remind students to make a good choice about sitting where they will be able to pay attention. Some students will sit, others kneel, and others may find they are able to sit more quietly in a chair. The teacher might remind students about "listening behavior" during the course of a meeting.

3. Prepare materials and working groups ahead of time. Establish routines so that students can get right to work independently at the end of meetings. Then you can work with a small group instead of taking time to help the class settle in and get started. Have all materials easily accessible to students. In some classrooms, students keep generic materials such as pencils, crayons, erasers, scissors, rulers, and paste in baskets in the middle of each table or group of desks. Materials particular to one activity or subject can be prepared and placed in boxes or trays for each group of students who will work together to fetch as they begin work. If math materials are stored in a particular area of the room, it is easy for students to readily access what they need when they need it. With student help, teachers can distribute student books or papers quickly. It is useful to provide a few extra copies in case a student wants to start over, and to make some copies of additional work for students to do quietly when they have extra time. One folder might hold work to provide practice and review, and another to provide more challenging problems. Establish procedures for choosing or assigning partners in advance so that you don't take time away from doing mathematics. Some teachers keep students with the same partner for a period of time; some alternate the way they pair students, according to the type of activity; others let students choose partners with some adult "guidance" about what makes a suitable partner.

4. Provide Extra Help
Students with learning disabilities need to be included in the regular class meetings and activities in ways that allow them to be successful. They also need time to work in a small group with a teacher. In an example above, we saw how Ms. S. worked with a group of first grade students during class time.
Teachers of older students may let students decide to work with her to get started on an activity, limiting this help to 3 to 5 students each day so that it becomes a privilege. In the best of circumstances, a special education teacher or paraprofessional skilled in working with Investigations math can work with the teacher during classes and also offer extra help to students outside of class. Teachers can use these extra sessions to • make the mathematics more explicit to the students; • provide guided practice and suggest practice students can do at home; • introduce an activity or game before it is introduced to the whole class; • rehearse with students how they can write out and explain their methods during class meetings. Because class discussions are an important part of building a mathematical community, the range of learners in a classroom needs to be taken into account in planning these meetings and sharing sessions, and encouraging participation. However, students who have difficulty listening to instruction in a large group are not likely to learn new mathematics through these discussions. Often students who are struggling with mathematics need extra time to practice mathematics instead of participating in whole group activities. Keeping the discussions brief and focused and the students actively engaged will result in more time spent on building mathematical understanding and developing efficient ways to solve problems for all learners. Cornelia Tierney and Judy Storeygard, Accessible Mathematics Project, TERC March 2005 This information was reprinted with permission of CESAME, Northeastern Univ., and the Educational Alliance, Brown University.
{"url":"http://investigations.terc.edu/library/implementing/qa-1ed/special_needs_class_disc.cfm","timestamp":"2014-04-16T22:38:05Z","content_type":null,"content_length":"34577","record_id":"<urn:uuid:c2a6a7c1-85e6-404b-a17c-8f228f69bed0>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Tutors - Chandler Heights, AZ 85142
Master Calculus/Statistics/Math/SAT/ACT with a Certified Math Teacher
...I am a highly qualified and state-certified math teacher with 14 years of classroom teaching experience. In that time, I have taught algebra, geometry, trigonometry, honors pre-calculus, and AP calculus. As a tutor, I have helped high school, college, and adult... Offering 10+ subjects including calculus
{"url":"http://www.wyzant.com/Apache_Junction_Calculus_tutors.aspx","timestamp":"2014-04-16T08:49:53Z","content_type":null,"content_length":"61215","record_id":"<urn:uuid:ec5c7353-270c-40a3-9694-dbc5487ee687>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
Professor Layton and the Curious Village [Nintendo DS] - Complete Walkthrough/FAQ
This walkthrough for Professor Layton and the Curious Village [Nintendo DS] was posted on 14 Mar 2010 by francis.medina under the title "Complete Walkthrough/FAQ".

Version 1.4 11/13/09
Professor Layton and the Curious Village Walkthrough by The Lost Gamer (ilovecartoonssomuch@yahoo.com) Copyright 2009
To see if there is a newer version of this guide, please check

Table of Contents:
001. General information
002. Video Walkthrough
003. Walkthrough
003a. Chapter 1: Reinhold Manor Awaits
003b. Chapter 2: The Fugitive Feline
003c. Chapter 3: The Missing Servant
003d. Chapter 4: Night Falls
003e. Chapter 5: The Hunt Begins
003f. Chapter 6: The Elusive Tower
003g. Chapter 7: The Abandoned Park
003h. Chapter 8: The Shadowy Intruder
003i. Chapter 9: The Tower's Secret
004. Gizmos
005. Furniture
006. Painting Pieces
007. The Golden Apple's House
008. The Puzzle Master's House
009. UK Exclusive Puzzles
010. Credits

001-General Information
This is a walkthrough for the Nintendo DS game called Professor Layton and the Curious Village. It's a puzzle/mystery game containing a lot of brainteasers. Please note that this guide does not cover the Wi-Fi puzzles you can get with this game, because I am currently unable to download the Wi-Fi puzzles for some reason. It worked when I first purchased this game, but it doesn't work now. Dang. You can contact me at ilovecartoonssomuch@yahoo.com if you have any questions, suggestions or that sort of thing.

002-Video Walkthrough
I made a video walkthrough for this game, so if you would rather watch me solve various puzzles rather than read about them, you can see them all at: I will put the links to individual videos in the guide as well.

Video #1: http://www.youtube.com/watch?v=xCDPzFH944A

The game begins with Professor Layton and his apprentice Luke driving to St. Mystere, where the Professor has been asked to help solve an inheritance dispute. It seems that Baron Augustus Reinhold left behind a will stating that his entire estate will go to the one who finds the Golden Apple. This is quite a mysterious will, given that no one had ever heard of the Golden Apple before the will's reading. Could it be a treasure of some sort? Lady Dahlia, Baron Reinhold's wife, has sent for Professor Layton to help solve the mystery. They just need to go to the town and...wait, what's this?

Puzzle #001: Where's the Town?
Lady Dahlia included a map, along with the instructions "My village is on a road that leads to no other towns." The instructions are on the top screen of the DS, and the map is on the bottom screen. As you can see, there are four roads and five towns. Using your stylus, draw a circle around the correct town. Which town is that? Why, it's the town in the upper/left, the one with red roofs. Circle it and submit it as your answer.

With the puzzle solved, Professor Layton and Luke make it to town, where they stare at the very odd tower at the north part of town.

Video #2: http://www.youtube.com/watch?v=lnflujF2vzs

The drawbridge to town is closed. Tap on the man who runs the drawbridge, Franco, to talk to him.
Puzzle #002: The Crank and the Slot Location: Drawbridge Character: Franco Summary: To solve this puzzle, figure out which slot fits the crank. Is it slot 1, 2 or 3? You can tap the crank to change the viewing angle to help you out. Solution: See the two square parts sticking out of the crank? Slots 2 and 3 show that the side between the two of them is flat. However, the crank shows that there is actually a sharp angle between the two of them. Therefore, Slot 1 is the correct Franco lowers the drawbridge and lets you into town. Now it's off to find Lady Dahlia! But first...let's explain the game's control scheme! Click on the shoe in the bottom/right corner. Yellow arrows will appear. Click on the arrow to travel in that direction. A white glove appears in front of doors that you can enter through. You can also click on a person to talk to him or her. Try it out now by tapping on the woman on the left. Puzzle #003: Strange Hats Location: Entrance Character: Ingrid Summary: You have four hats pictures here. Which hat is as wide as it is tall? Solution: Hat A is the correct hat. You can talk to the man here. He tells you to tap on the barrel for a hint coin. You can find hint coins hidden all over this game, and you can use them to get three hints on every puzzle. You're reading a guide with the answers to all the puzzles, so you shouldn't need them, and so, I do not list where all the hint coins are. Head north, and the professor explains about the game's save system. Tap the briefcase in the upper/right corner to be taken to a screen where you can save your game, look up an index of all the puzzles you've seen, read a journal to get an overview of the plot, and read about the various mysteries in the game. You can also solve the painting, gizmo and furniture meta- puzzles here, but you don't learn about them quite yet. Basically, sometimes when you solve puzzles in this game, you will get a painting piece, gizmo, or piece of furniture as a reward. Collect them all, and at the end of the game, you can solve each of the meta-puzzles for bonus puzzles. And now that you've gotten an idea of the control scheme, it's time for... 003a-Chapter One: Reinhold Manor Awaits The goal of this chapter is to get to Reinhold Manor, which is in the northeast part of town. Of course, along the way, you'll want to talk to everyone you meet in order to get a puzzle. In this game, pretty much everyone you talk to has a puzzle for you to solve. For example, that man there! Percy! The town author! Tap on him to get... Puzzle #004: Where's My House? Location: Plaza Character: Percy Summary: Leave the front door of Percy's house and turn left. Take a right at the first intersection you meet, and a right at the next intersection you meet. You end up facing the morning sun. Which of the seven houses in the picture is the correct Solution: The morning sun is in the east, so you can work backwards from there to figure it out. Alternately, you can use guess and check on every house--pick a house, follow the instructions and see if it results in facing east. You can also rule out a few houses that are facing the same direction, because taking the same route from those houses would result in facing the same direction, and there's only one solution, so when two houses both lead you to the same direction, they can be ruled out, right? Right. Anyway, the answer is the house with the blue roof in the middle of the screen, the one that faces north. 
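If working backwards from "facing the morning sun" feels slippery, the turns in Puzzle #004 can also be checked mechanically. A quick sketch (not part of the original guide, and assuming the streets form a simple grid) that tries each possible starting direction and applies the left-right-right route:

# Headings in degrees: 0 = north, 90 = east, 180 = south, 270 = west.
# Route from the front door: turn left, then right, then right.
TURNS = (-90, +90, +90)

def final_heading(start):
    heading = start
    for turn in TURNS:
        heading = (heading + turn) % 360
    return heading

if __name__ == "__main__":
    names = {0: "north", 90: "east", 180: "south", 270: "west"}
    for start in (0, 90, 180, 270):
        if final_heading(start) == 90:   # 90 = east, toward the morning sun
            print("The house must face", names[start])   # prints: north

Only a house that starts out facing north ends the route facing east, which matches the blue-roofed house identified above.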
Video #3: http://www.youtube.com/watch?v=2Z0ZpTSUhnI Ah, but it's not just talking to people that gives you puzzles! You can also find puzzles by examining things! For example, examine that clock tower there to get a puzzle. Puzzle #005: Digital Digits Location: Plaza Examine: Clock Tower Summary: Occasionally, a digital clock will show the same digit three or more times in a row, such as when it's 3:33. How many times does a digital clock do this over the course of one day? Solution: There's not much to do to solve this puzzle besides make a list of all the times during the day when three or more digits appear in a row. Here's a list of all those times: 1:11 AM 2:22 AM 3:33 AM 4:44 AM 5:55 AM 10:00 AM 11:10 AM 11:11 AM 11:12 AM 11:13 AM 11:14 AM 11:15 AM 11:16 AM 11:17 AM 11:18 AM 11:19 AM 12:22 AM 1:11 PM 2:22 PM 3:33 PM 4:44 PM 5:55 PM 10:00 PM 11:10 PM 11:11 PM 11:12 PM 11:13 PM 11:14 PM 11:15 PM 11:16 PM 11:17 PM 11:18 PM 11:19 PM 12:22 PM The list is made up of 34 different times, so the answer is 34. Those are both the puzzles you can do on this screen, meaning there's not much left to do besides go right to the next screen. Do so, and the Professor and Luke see a man blocking the road. Tap on this man with your stylus to talk to him. His name is Marco, and he gives you a puzzle to solve. Puzzle #006: Light Weight Location: Manor Road Character: Marco Summary: You have seven weights here that are all the same weight. You also have a weight that is rather light, but it looks exactly like the other weights. Can you figure out which weight is the light one by using a scale only twice? Solution: The number of the light weight is randomly determined. You're going to have to use logic here. These instructions work: Place weights 1, 2 and 3 on the left-hand scale. Then place weights 4, 5 and 6 on the right-hand scale. Press the red button to weigh them. If the left-hand scale goes up, you know the light weight is either 1, 2 or 3. Then, weigh 1 and 2. If 1 and 2 are the same weight, 3 is the light weight. If 1 and 2 are not the same weight, choose the one that is lighter. If the right-hand scale goes up, you know the light weight is either 4, 5 or 6. Then, weigh 5 and 6. If 5 and 6 are the same weight, 4 is the light weight. If 5 and 6 are not the same weight, choose the one that is lighter. If the left-hand scale is equal to the right-hand scale, then you know the light weight must be 7 or 8. Weigh the two of them to determine which is lighter. Marco lets you pass. Now you can walk north towards the manor. Your path here is blocked by Ramon, a servant at the manor. He thinks Layton could be someone in disguise, so he asks Layton to solve a puzzle and thereby prove his identity. In case you haven't figured it out yet, pretty much every character in this game has a puzzle for you to solve. Puzzle #007: Wolves and Chicks Location: Manor Border Character: Ramon Summary: You want to take three wolves and three chicks across a river. You have to take one or two animals with you when you cross the river, and if the wolves ever outnumber the chicks on one bank, they will eat the chicks. Can you get the animals across the river without allowing the wolves to eat the chicks? Solution: There are multiple solutions, and you get as many tries as you need to solve the puzzle. My strategy is to start by getting the wolves on the right side of the river. 1. Take two wolves right. 2. Take one wolf left. 3. Take two wolves right. Then you want to start moving all the chicks over.
This will require having an equal amount of wolves and chicks on both 4. Take one wolf left. 5. Take two chicks right. 6. Take a chick and a wolf left. Now move two chicks right so all the chicks are on the right hand side. 7. Take two chicks right. Now that all the chicks are on the right, you want just bring all the wolves right. 8. Take one wolf left. 9. Take two wolves right. 10. Take one wolf left. 11. Take two wolves right. Ramon is pleased when you solve the puzzle, and he leaves. You can now walk towards the manor, but before that, tap on the flowers on the other side of the riverbank to get a puzzle. Puzzle #008: Farm Work Location: Manor Border Examine: Flowers Summary: Alfred and Roland are being paid $100 to plow and sow a 10-acre plot of land. Each boy will plow and sow half of the land. Alfred plows 20 minutes per acre, and Roland plows 40 minutes per acre. Roland sows three times as fast as Alfred. So, how much of the $100 should Roland receive? Solution: Ha, this is a tricky puzzle. It tries to confuse you by giving you a bunch of unnecessary mathematical facts. To put it simply, Alfred works on half the land, and Roland does the other half. Since each of them does half the work, each of them should receive half the money. The answer is $50. Now head forward towards the manor. Luke is impressed by how large it is, and thus ends Chapter One. 003b-Chapter Two: The Fugitive Feline Video #4: http://www.youtube.com/watch?v=ZbKyoZZvunI Professor Layton and Luke have arrived at Reinhold Manor. They go inside and meet Matthew, the butler. He says that everyone is waiting upstairs, but before you can join them, you must solve a matchstick puzzle. Matthew then explains the control scheme for matchstick puzzles. Puzzle #009: One Poor Pooch Location: Manor Entrance Character: Matthew Summary: Here's a picture of a dog. After the picture, the dog is hit by a car. Move two matchsticks to show what the dog looks like after it is hit by the car. Solution: The game says not to assume you'll be looking at the dog from the side when the puzzle is done. That's your hint that tells you you'll be looking at the dog's flattened body from Change the picture from /| |__ /\ /\ /| |__ / \ Examine the bookshelf for another puzzle. Puzzle #010: Alphabet Location: Manor Foyer Examine: Books Summary: What's the last letter of the alphabet? Hint: It's not Solution: Tricky puzzle! It wants you to answer "t", which is the last letter in the word "alphabet". While in the manor foyer, you can look at the paintings on the wall. One is of the late Baron Augustus Reinhold, and the other is of his daughter Flora. When you look at the paintings (if you don't look at them now, later on, Layton will look at them automatically), Matthew gives you a painting frame. Throughout this game, when you solve certain puzzles, you get pieces of the painting that belongs in this frame. When you collect all twenty of them, you can put the painting together and get three bonus puzzles. Go upstairs now to meet Baron Reinhold's wife, Lady Dahlia. A cutscene plays, where Professor Layton talks to Lady Dahlia. There is a mysterious crashing sound, and her cat escapes from the house. Lady Dahlia becomes upset and demands that you search for her cat, Claudia by name. This will be the goal of Chapter Two--to find Claudia the Cat. You'll have to search throughout the entire town to find her, and so, you will get a basic idea of the town's layout while searching for Claudia. You can talk to Simon, the man with glasses, for a puzzle. 
Puzzle #011: Arc and Line Location: Manor Parlor Character: Simon Summary: You have a mathematical puzzle, which requires some basic geometry skills to solve. How long is line AC? Solution: Line AC is the same length as line BD, because ABCD is a rectangle, and the two diagonals of a rectangle are always the same length. Now, you may notice that BD is a radius of the circle. The radius of the circle is ten inches (5 + 5), so BD must be 10 inches as well. And since AC = BD, AC is 10 inches. The answer, therefore, is 10 inches. Tap on the chandelier to get a hidden puzzle. Puzzle #110: The Vanishing Cube Location: Manor Parlor Examine: Chandelier Summary: You have a bunch of matchsticks here that form four cubes. Move one matchstick so you have only three cubes. Solution: This requires three-dimensional thinking. See the cube in the middle/back? Only one side of this cube is showing. That means this is the cube that is going to "disappear". Take the matchstick that forms the side of this cube, and rotate it so it is facing straight up and down. Then place it so it forms a side on the uppermost cube. It will connect the red end of two matches if you do this properly. Now that you're done with the puzzles here, go outside. Video #5: http://www.youtube.com/watch?v=3feXjYIaOrU Hey, Claudia the Cat is here! Tap on her, and she runs off. Then, there's a minor scene in which a mysterious villain crash- lands in town and declares his intent to get the treasure of Baron Reinhold and defeat Professor Layton! Uh oh! The villain won't try to kill Layton for a while, though, so just forget about him for now and look for Claudia. Move back to the screen with Ramon, and you can click on the boat for a Puzzle #013: Sinking Ship Location: Manor Parlor Examine: Boat Summary: Fifteen people are on a sinking ship. There is a five person raft onboard, and an island four and a half minutes away. How many people can make it to the island if the boat sinks in twenty minutes? Solution: The trick to this puzzle is realizing that you'll need at least one person on the raft at all times. Minute 1: Five people get on the raft. Minute 4 1/2: Four people land on the island. One takes the raft back to the ship. Minute 9: Four people get on the raft. Minute 13 1/2: Four people land on the island, making eight people on the island in total. One takes the raft back to the Minute 18: Four people get on the raft. Minute 20: Ship sinks. The eight people on the island and the five people on the raft are safe. Therefore, 13 people lived. Go down a screen, to the screen with Marco. Here, you can go inside the building on the left by tapping on the door. Inside this abandoned store are two puzzles. Puzzle #014: Which Chair? Location: General Store Examine: Chair Summary: Which of the five chairs is good for an auditorium? Solution: Chair E, because it is the only chair that is stackable. Humbug, I say. That's not much of a puzzle. Puzzle #015: How Many Are Left? Location: General Store Examine: Candle Summary: Ten candles are lit. Some wind blows out three of the candles. How many candles are left in the end? Solution: This puzzle is phrased in an attempt to confuse you. What it means by "how many candles are left in the end?" is "How many candles will NOT melt?". The answer is three--the ones that were blown out. That's it for the general store. Leave, then go left to the clock tower area. Franco shows up and says someone stole the crank to the town, so no one can get in or out of town now. 
I bet that's the work of the mysterious villain! Franco also found a weird gizmo, which he gives to you. From now on, people will give you gizmos when you solve certain puzzles. Collect all the gizmo pieces, open up the "Gizmo Puzzle" in Layton's trunk, then click on all the gizmo pieces you collected to get a robot dog. Claudia the Cat is here. Tap on her, and Luke tries to calm her down. She attacks Luke. What a vicious cat! There are a few buildings here at the Clock Tower, but the only one you can enter now is the Town Hall, which has a blue door. Inside is Rodney. Talk to him for a puzzle, and when you solve the puzzle, he gives you a gizmo. So, do that to see how getting gizmos works. Puzzle #016: Triangles and Ink Location: Town Hall Character: Rodney Summary: There's a big triangle made up of little triangles. With one dip in ink, you can make four little triangles. How many dips of ink does it take to draw the big triangle? Solution: Seven. Tap on the window on the right for a hidden puzzle. Puzzle #118: Red and Black Cards Location: Town Hall Examine: Windows Summary: A deck of cards is shuffled, then cut in half. If you do this 1000 times, how many times will the number of black cards in Pile A equal to the number of red cards in Pile B? Solution: Math! Fun math, too. Let's say Pile B has x black cards. That means Pile A, then, must have 26-x black cards (because there are 26 black cards in total). As for red cards, Pile B has 26-x red cards (because there are 26 cards in Pile B in total). So, the number of red cards in Pile B = the number of black cards in Pile A. Aha! The number of red cards in one pile always equals the number of black cards in the other pile! So, if you do this trick 1000 times, the conditions will be met 1000 times. Cool! Video #6: http://www.youtube.com/watch?v=c3nOTQfjTN4 Leave the town hall. If you talk to Deke, he tells you Claudia went left. However, Professor Layton suggests taking a detour to the south to get a room at the inn. So, go south. Talk to Stachenscarfen here for a puzzle. Puzzle #017: Five-Card Shuffle Location: Entrance Character: Stachenscarfen Summary: Which one of the four pictures is different than the Solution: D is the one that is different. Now, go inside the Inn, which is the building with the blue door. Talk to the woman there, Beatrice, to get the Furniture Puzzle. It's sort of like the Gizmo puzzle. From now on, whenever you solve certain puzzles, you get pieces of furniture. Get all the pieces of furniture and solve a furniture puzzle to get three bonus puzzles! You won't be able to do that for quite some time, however. Beatrice also has a puzzle for you. Puzzle #018: Of Dust and Dustpan Location: Inn Character: Beatrice Summary: Move two matches to make it so the dustpan is holding the dust. Solution: You have to turn the dustpan upside-down to solve this one. Move the rightmost match to the left of the dust. Then, move the middle match (the only match that is horizontal) to the left a little to connect the two lower matches. See how the dustpan is now upside-down? There is a hidden puzzle in the picture behind Beatrice. Puzzle #112: My Beloved Location: Inn Examine: Picture Summary: Arrange the pieces of this picture so they form a picture of "My Beloved". Solution: If you think of the pieces of the picture like this: A B C D Switch C and B, then C and A. C A B D Rotate A twice, then rotate D twice. The black outline sort of looks like a woman, and that's the solution. Leave the Inn and go south. 
If you tap on the Laytonmobile, you get a puzzle. Puzzle #019: Parking Lot Gridlock Location: Drawbridge Examine: Laytonmobile Summary: Slide the cars out of the way so the Laytonmobile can 1. Move the second-from-the-top car right. 2. Move the car right of the Laytonmobile up. 3. Move the car right of the Laytonmobile up. 4. Move the Laytonmobile right. 5. Move the bottom/left car up. 6. Move the bottommost car left. 7. Move the second-from-the-bottom car left. 8. Move the second-from-the-left car down. 9. Move the Laytonmobile left. 10. Move the topmost car down. 11. Move the second-from-the top car left. 12. Move the topmost car left. 13. Move the rightmost car up. 14. Move the car right of the Laytonmobile up. 15. Move the Laytonmobile right. Okay, let's get back to searching for Claudia! Go north twice, then left. The woman here, Agnes, knows how to get on Claudia's good side: offer her some food. But, of course, Agnes won't give you the food unless you solve a Puzzle #022: Pigpen Partitions Location: Park Road Character: Agnes Summary: Draw three lines to separate the seven pigs. Solution: Okay, see the pig in the bottom middle and the pig in the top middle? One line starts from the peg left of the pig in the bottom middle, and stops on the peg right of the pig on the top middle. Another line starts from the peg right of the pig in the bottom middle, and stops on the peg two left of the pig on the top The third line starts on the peg below the pig in the upper/left, and stops on the peg below the pig on the Agnes gives you fish bones for Claudia. You can now go north to give the fish bones to Claudia and end the chapter if you want, but there are a few puzzles left in this chapter. Three of them are on this screen. Video #7: http://www.youtube.com/watch?v=D9XKH7_cS44 Puzzle #020: Unfriendly Neighbors Location: Park Road Character: Pauly Summary: Draw four lines between the four sets of numbers. Just make sure that none of the lines overlap. Solution: The line between the two B's is a straight up/down line. The D line goes left from the left D, up to the top of the screen and to the right. The line from the right D goes left once, up to the top of the screen and left. The two will The C line goes right from the left C, and it goes all the way right, then up, then right and goes into the C. The A line goes down, right, down, left, down, right and up--basically, it goes along the only empty spot left for it. Puzzle #021: Pill Prescription Location: Park Road Character: Pauly Summary: A dude man has to take ten pills, and he has to label them in the order he will take them. What's the least number of pills he has to label? Solution: If he labels all the pills except one, he can always identify it as the unlabeled one. For example, if he decides not to label Pill 5, he will always know which pill is Pill 5--it's the only one without a label. So the solution is that he can get away without labeling one pill, but the puzzle assumes that he doesn't have to label the first pill either, because he'll swallow it before doing any labeling. I don't know HOW he knows which pill is Pill 1 if he doesn't do any labeling, but that's the puzzle solution anyway. Puzzle #111: Mystery Item francis.medi ------------------------- na Location: Park Road Examine: Poster Summary: What is...something that you need to live, something that people keep in their houses AND something that decreases in quantity the longer it's kept? Solution: Food. 
Move a matchstick on the leftmost box to spell out the word "Food" and submit it as your answer. Go inside the blue door building to the right, which is the local restaurant. Both the characters here have puzzles. Puzzle #025: Equilateral Triangle Location: Restaurant Character: Flick Summary: Can you move this triangle: X X X X X X X X X Upside down by only moving three X's? Solution: Yes! Move coins 1, 2 and 3: X X X X X 2 X X 3 To these spots: 2 X X 3 X X X X X Puzzle #023: Juice Pitchers Location: Restaurant Character: Crouton Summary: Crouton has an eight-quart container of juice, a five- quart container and a three-quart container. Can you move the juice between the three containers until you get four quarts in one jar? Solution: Crouton has a few puzzles that involve separating liquids in uneven containers. The solution to these puzzles is generally these four steps: 1. Big jar to middle jar 2. Middle jar to small jar 3. Small jar to big jar 4. Middle jar to small jar Just repeat the four steps until you get the desired amount of liquid. In this case, it takes seven turns. Once you solve the puzzles here, go outside and go north. Claudia the Cat is here, and she'll come with you if you got fish bones from Agnes (puzzle #22). Before you do that, check out the garbage. Professor Layton will be reminded of a puzzle, because pretty much _everything_ reminds the Professor of a Puzzle #026: Bottle Full of Germs Location: Park Gate Examine: Garbage Summary: You have a breed of germ that divides in half every minute. If you start with 1 germ, the jar becomes full in 60 minutes. If you start with 2 germs, the jar becomes full in how many minutes? Solution: The 1 germ becomes 2 germs after 1 minute...so starting with 2 germs only shaves 1 minute off the total. 60 � 1 = 59, so the answer is 59 minutes. Good, once you're done with the puzzle, NOW you can pick up Claudia and end the chapter. 003c-Chapter Three: The Missing Servant The game tells you that if you missed any puzzles in the last chapter, they will now appear in Granny Riddleton's Shack near the clock tower. Video #8: http://www.youtube.com/watch?v=lHH3PkMzmtY Professor Layton and Luke return to the mansion with Claudia, but there's no time for celebrating. Something really bad has happened--Simon (the man with glasses) has just been killed! Fortunately, Inspector Chelmey from London is on the case! Now, talk to Matthew and Gordon for puzzles and plot advancement. Puzzle #027: Bickering Brothers Location: Manor Foyer Character: Gordon Summary: Six brothers are sitting at a table. Brothers 3 and 5 can't sit next to each other, and no brother sits next to someone a year older or younger than he is. Solution: Place the brothers like this: Puzzle #028: Find the Dot Location: Manor Parlor Character: Matthew Summary: You have a strange shape with a red dot and black dot on it. When you turn it around, where will the black dot be? Solution: The lower/right corner. Matthew tells you that he found a mysterious small cog on the ground at the scene of the crime. Chelmey returns after you talk to Gordon and Matthew. He becomes concerned that Ramon isn't here, and so he puts Ramon at the top of his suspect list. The Professor can then go into Lady Dahlia's room. She gives you the mission of finding Ramon while Chelmey inspects the crime scene. Chapter Three officially begins here. Inside Lady Dahlia's room is a picture of her holding a baby. Leave the room and ask her about it to learn that she's never had a baby in her life. How odd. 
Video #9: http://www.youtube.com/watch?v=OlQcYomeFJo

Inspector Chelmey here has a puzzle for you.

Puzzle #029: Five Suspects
Location: Manor Parlor
Character: Chelmey
Summary: Five people were arrested, and each gave a statement.
A: One of us is lying.
B: Two of us are lying.
C: Three of us are lying.
D: Four of us are lying.
E: All five of us are lying.
How many people are telling the truth?
Solution: Only one of them is telling the truth: Person D. All five statements contradict each other, so no more than one person can be telling the truth. And if exactly one person tells the truth, then exactly four are lying--which is precisely what D says.

Head downstairs and talk to Matthew. He says that the picture of the woman with the baby is really Lady Violet with Flora Reinhold. How curious that she looks so much like Lady Dahlia! Head outside, and our heroes see Claudia with Simon's glasses. That could be an important clue! They inform Inspector Chelmey about the glasses, but he just gets mad at the Professor for interrupting his investigation. As Luke points out, Inspector Chelmey is kind of rude. Oh well. It's best to forget about him for now, and just start looking for Ramon. Outside of the manor, Agnes is waiting with a puzzle.

Puzzle #030: One-Line Puzzle 1
Location: Manor Border
Character: Agnes
Summary: Which of the four pictures can't be drawn with only a single stroke of the pen?
Solution: The cottage.

Leave the manor grounds now, and talk to Marco. He tells you to avoid the mysterious tower. If you talk to him again, he gives you a puzzle.

Puzzle #031: Racetrack Riddle
Location: Manor Road
Character: Marco
Summary: We have three horses here. Horse A runs two laps in a minute. Horse B runs three laps in a minute. Horse C runs four laps in a minute. If you line up all three horses and set them running on the same racetrack at the same time, how long will it be before they're all lined up at the starting line again?
Solution: One minute. Each horse runs a whole number of laps per minute, so after one minute all three are back at the starting line together.

Head inside the abandoned general store now. This will be the last time you visit the general store in this game. Examine the candle and the candy jar for puzzles.

Puzzle #033: Light Which One?
Location: General Store
Examine: Candle
Summary: You have one match. You want to light an oil lamp and a fireplace, and heat your bathwater. In order to do this successfully, what do you light first?
Solution: The match!

Puzzle #032: Candy Jars
Location: General Store
Examine: Candy Jar
Summary: You have 10 jars with 50 pieces of candy each. You pour out the candy into bags. Now, you have 20 bags full of candy. You want to have an average of 25 pieces of candy per bag. What is the probability, as a percentage, that this happens?
Solution: 100% -- this happens 100% of the time. The 10 jars hold 10 x 50 = 500 pieces in total, and 500 pieces split among 20 bags is an average of exactly 25 per bag, no matter how the candy is divided. Basically, you need to know the mathematical concept of "average", or else you probably won't be able to solve this puzzle.

Video #10: http://www.youtube.com/watch?v=TxuqjXl5glE

Leave the General Store (forever), then go to the left to the plaza where the clock tower is. Percy warns the Professor and Luke to stay away from the mysterious tower. He also gives you a puzzle.

Puzzle #034: How Many Sheets?
Location: Plaza
Character: Percy
Summary: There's a picture of several sheets of paper. How many sheets of paper are overlapping at the place where the most sheets overlap?
Solution: Five.

Tap on the building right of Percy to get a puzzle.

Puzzle #058: Get the Ball Out 1
Location: Plaza
Examine: Door
Summary: Can you move all these various blocks in order to get the red ball to the bottom of the lock?
Solution: My strategy (which isn't the fastest) involves filling the right-hand side of the puzzle with blocks and not moving them at all. That way, you have fewer blocks to worry about dealing with.
1. Move the left blue block down.
2. Move the left green block down.
3. Move the right blue block up.
4. Move the upper purple block up and left.
5. Move the lower purple block up twice.
6. Move the bottom yellow block right.
7. Move the upper yellow block down.
8. Move the left blue block right.
9. Move the left green block down.
10. Move the upper yellow block into the upper/left corner.
11. Move the red ball down three times.
12. Move the upper yellow block right.
13. Move the left green block up.
14. Move the red ball into the lower/right corner.
15. Move the left blue block up.
16. Move the red ball into the finishing position.

If you go to the left, a loud noise sounds. Pauly says it comes from the mysterious tower. Professor Layton figures that because people have been mentioning the tower a lot lately, it must be important. Go inside the restaurant here and talk to Crouton for a puzzle.

Puzzle #024: Milk Pitchers
Location: Restaurant
Character: Crouton
Summary: Crouton has a ten-quart container of milk, a seven-quart container and a three-quart container. Can you move the milk between the three containers until you get five quarts in two jars?
Solution: Move the milk like this:
1. Big to middle
2. Middle to small
3. Small to big
4. Middle to small
5. Small to big
6. Middle to small
7. Big to middle
8. Middle to small
9. Small to big

Leave the restaurant and go right to the plaza with the clock tower. Talk to Deke to gain access to the northern part of town, where the tower is. He doesn't let you pass until you've solved twelve puzzles, as well as puzzle #35.

Puzzle #035: Strange Dots
Location: Plaza
Character: Deke
Summary: You have a row of dice, each with a number next to it. What number goes with the fourth die?
Solution: Three! See, the dice are supposed to resemble clock hands. Deke must have thought up this puzzle because he's standing in front of a clock tower.

Deke then steps aside and lets our heroes into the northern part of town.

Video #11: http://www.youtube.com/watch?v=YfUSRM6DWBM

You can get a number of puzzles here. One comes from Lucy, one comes from the cat and mouse, and one comes from the doorway to the left.

Puzzle #036: Too Many Mice
Location: Clock Tower
Character: Cat and Mouse
Summary: A certain mouse species gives birth once a month, with twelve babies being born at a time. A mouse that is two months old can give birth. You buy one of these mice. How many mice will you have in ten months?
Solution: One mouse, the one you bought. (It has no mate, so it never gives birth.)

Puzzle #037: Brother and Sister
Location: Clock Tower
Character: Lucy
Summary: A boy has an older sister. If you take two years away from the boy's age and give them to the sister, she would be twice his age. If you take three years away from the boy's age and give them to the sister, she would be three times his age. How old are they?
Solution: Math time! If B is the boy's age and G is the girl's age, the premises of the puzzle give us two equations:

2(B - 2) = G + 2
3(B - 3) = G + 3

The first equation simplifies into...

2B - 4 = G + 2
2B - 6 = G

The second equation simplifies into...

3B - 9 = G + 3
3B - 12 = G

Now we know that both 2B - 6 and 3B - 12 equal G, so...

2B - 6 = 3B - 12
2B + 6 = 3B
6 = B

The boy is 6. Now, plug that into 2B - 6 = G...

2(6) - 6 = G
12 - 6 = G
6 = G

The girl is 6 as well. (Check: 6 + 2 = 8 = 2(6 - 2), and 6 + 3 = 9 = 3(6 - 3), so both conditions hold.)
Alternately, you could have just guessed ages at random for one character and fit that age into both equations to see if it works.

Puzzle #107: A Worm's Dream
Location: Clock Tower
Examine: Doorway on the Left
Summary: We have a worm in an apple! Slide all the pieces of the puzzle into place.
Solution: The trick to the puzzle is that the upper/left and bottom/right pieces are identical. The bottom/right piece goes in the upper/left corner, and the upper/left piece goes in the bottom/right corner. So don't let the two identical pieces fool you while solving the puzzle.

Go north now. Both of the characters here have puzzles.

Puzzle #038: Island Hopping
Location: Fork in the Road
Character: Zappone
Summary: You have a series of islands and bridges. You want to visit every island once and cross over every bridge once. Sadly, this is impossible. It IS possible if another bridge existed. Where should this bridge go?
Solution: The bridge goes from the purple house (in the middle of the diagram) to the lighthouse which is down/left of it.

Puzzle #039: One-Line Puzzle 2
Location: Fork in the Road
Character: Agnes
Summary: Agnes has four more pictures for you. Three of them can be drawn with one stroke of the pen, but one can't. Which one is the one that doesn't belong?
Solution: Professor Layton's top hat. Sorry, Professor!

Take the left fork to the north to reach the market, where there are two puzzles. One comes from Archibald, and one comes from the wall above Archibald.

Puzzle #040: How Old is Dad?
Location: Market
Character: Archibald
Summary: A 22-year-old boy doesn't know how old his father is. The father figures his son is kind of a dolt, and tries to get him to think by saying he is as old as his son's age plus half of his own age. How old is the father?
Solution: The father's age is the son's age plus half the father's own age, so the son's age must equal the other half--that is, half of the father's age. The son is 22, so the father must be 44.

Puzzle #101: Splitting It Up
Location: Market
Examine: Wall
Summary: You take a cube and paint all six sides red. Then, you cut the cube into 27 identical-sized pieces. How many of those pieces will have exactly one red side?
Solution: Six--the piece at the center of each face of the larger cube. (Of the 27 pieces, the 8 corners have three red sides, the 12 edge pieces have two, the 6 face centers have one, and the 1 piece in the middle has none.)

That's all there is to the left fork. Go back south, then take the right fork north.

Video #12: http://www.youtube.com/watch?v=r264v-kGf50

A timid-looking fellow named Gerard is standing here. Talk to him twice for two puzzles. You might have to go away and come back to hear the second puzzle, though.

Puzzle #041: Spare Change
Location: Northern Path
Character: Gerard
Summary: You have a picture of coins and a rope. If the rope is pulled tight, how many coins will be above it?
Solution: Nine.

Puzzle #042: The Camera and Case
Location: Northern Path
Character: Gerard
Summary: A person is selling a camera and a case for $310. He says the camera costs $300 more than the case. You buy the case with a $100 bill. How much change should you get back?
Solution: We know that the camera and the case cost $310, and that the camera is $300 more than the case. The only way this works is if the camera is $305, and the case is $5. So, if you pay $100 for a $5 case, you should get $95 in change.

Good news! This is the point in the game where you should have collected all the gizmo parts! Put them together to get a robot dog that hunts out hint coins! When you go north from here, you meet Jarvis. He tells you that Zappone should know where Ramon is. Go south two screens, then talk to Zappone.
Zappone tells you to visit Crouton, the man who runs the restaurant. Go south two screens. Before visiting the restaurant to the left, you should go inside the town hall building to get a puzzle from Rodney, and you should go north to get a puzzle from Lucy.

Puzzle #043: Three Umbrellas
Location: Clock Tower
Character: Lucy
Summary: There are three girls. Each person has an umbrella. But say they take their umbrellas from an umbrella stand without looking. What are the odds that only two girls will walk off with their own umbrellas?
Solution: The odds are 0. It's impossible for only two girls to have their own umbrellas. If two girls have their own umbrellas, the third girl has hers, too, which means _three_ girls have their own umbrellas.

Puzzle #044: Stamp Stumper
Location: Town Hall
Character: Rodney
Summary: Divide the block of stamps into seven groups, each group with a total of 100. Just to make things harder, each group has to be a different shape.

With those puzzles out of the way, it's high time to put an end to this chapter of the game. Go to the restaurant and talk to Crouton. He mentions a rumor about an old man who kidnaps people. Crouton tells you to visit the cafe, and the chapter ends.

003d-Chapter Four: Night Falls

Video #13: http://www.youtube.com/watch?v=nFM8oU-Urjs

It looks like the Professor and Luke get to do even MORE wandering around town. Adrea here has a puzzle.

Puzzle #045: Puzzled Aliens
Location: Park Road
Character: Adrea
Summary: Some aliens are confused about an Earth device, and give a very strange description of it. What are they talking about?
Solution: The answer is a compass. Bleh...

When you go to the right, Gerard interrupts the investigation by forcing you to find his glasses. Fortunately, they're not far. Go left, then up to the park gate. Deke knows where the glasses are.

Puzzle #046: The Biggest Star
Location: Park Gate
Character: Deke
Summary: Draw a line between four stars to make a big star.
Solution: The trick is that the top of the pine tree forms one of the points of the big star.

Deke tells you to go to the inn for the glasses. So, go to the inn. Outside of the inn, click on the sign for a hidden puzzle.

Puzzle #113: The Pet Hotel
Location: Entrance
Examine: Inn Sign
Summary: Rearrange the matches to spell the name of an animal.
Solution: CAT. Fortunately, not too many three-letter animal names exist, so guessing the correct answer is pretty easy.

Go inside the inn, and ask Beatrice for the glasses. She gives them to you. Then, go to Gerard and give him his glasses. He will now step aside and let you explore the north part of town. He will also give you a puzzle.

Puzzle #047: On the Run
Location: Plaza
Character: Gerard
Summary: There's a map with many exits. A burglar starts at the red arrow, and whenever he reaches a junction, he turns left or right. He never retraces his steps, either. Which exit can he not go through?
Solution: Exit B! No matter which direction he approaches it from, he can't go through it.

Video #14: http://www.youtube.com/watch?v=R19K8VP1f9Q

Go to the north part of town now. Lucy has a puzzle, and the cat and mouse have a puzzle, too.

Puzzle #048: Cats and Mice
Location: Clock Tower
Character: Cat and Mouse
Summary: 5 cats catch 5 mice in 5 minutes. How many cats can catch 100 mice in 100 minutes?
Solution: 5 cats. Hey, if they catch mice at a rate of 5 mice per 5 minutes, that means in 100 minutes, they catch 100 mice.

Puzzle #049: 1,000 Times
Location: Clock Tower
Character: Lucy
Summary: _ is 1,000 times _ _.
Which letter fits in all three blanks?
Solution: The letter "m". In the metric system, one meter (m) equals 1,000 millimeters (mm).

Go north and talk to Marco for a puzzle.

Puzzle #050: OTTF?
Location: Fork in the Road
Character: Marco
Summary: There is a series of letters: O T T F _ S S E N T. Which letter goes in the empty spot?
Solution: The letters are the initials of the numbers One Two Three Four Five Six Seven Eight Nine Ten. The answer is "F" for "Five".

Head inside the cafe. The barman agrees to help if you've solved 30 puzzles. If you haven't done that yet, go to Granny Riddleton's shack and solve some puzzles. Once you've solved enough puzzles, Crumm the barman tells you to see Prosciutto. To get to his house, you leave the cafe and take the right fork. However, Professor Layton is going to take the long way to Prosciutto's house, in order to make sure he finds all the puzzles in the north part of town. Two puzzles are here at the cafe. Zappone has one, and the other is a hidden puzzle behind the bottles.

Puzzle #051: The Town Barbers
Location: Crumm's Cafe
Character: Zappone
Summary: The town has two barbers. Barber A has a horrible haircut, and Barber B has a nice haircut. Who is the better barber?
Solution: Barber A. The assumption is that the barbers cut each other's hair, instead of cutting their own hair. If that's the case, Barber A gives the better haircut.

Puzzle #106: How Many Glasses?
Location: Crumm's Cafe
Examine: Bottles
Summary: A man wants to move the glasses of juice in the top row so they look like the glasses of juice in the bottom row. What is the least number of glasses he has to touch?
Solution: One. Presumably, he pours from one glass into another without touching the second glass.

Leave the cafe forever now (you will never have to return) and take the left fork. Agnes and Giuseppe here both have puzzles.

Puzzle #053: Fish Thief
Location: Market
Character: Agnes
Summary: Brothers A, B and C are accused of eating a fish. They each make a statement.
A: I ate the fish.
B: I saw A eat the fish.
C: B and I did not eat the fish.
One of the brothers is lying, but which one?
Solution: C is the liar. Apparently, C and A both ate the fish.

Puzzle #054: Monster!
Location: Market
Character: Giuseppe
Summary: Oh no! A monster is attacking the town! Stab it in the eye!
Solution: The monster is the sky, so stab the moon.

Head to the right to see Pauly, who's upset. He always seems upset about something, I've noticed.

Video #15: http://www.youtube.com/watch?v=i0dFELmUeg8

Pauly's upset because he can't solve a puzzle! This looks like a job for Professor Layton!

Puzzle #052: Find a Star
Location: Northern Hill
Character: Pauly
Summary: Find a star in the picture.
Solution: It's in the upper-left corner.

Pauly gets mad at the Professor for solving the puzzle. That's gratitude for you! Go south and into a house to the left. This is Prosciutto's house. He's got a big ham hanging from his ceiling, and those of you who speak Italian will be amused at the fact that "Prosciutto" means "ham" in Italian. Talk to him, and he tells you about the strange kidnapper. Then, tap on the ham for a hidden puzzle.

Puzzle #114: Tetrahedron Trial
Location: Prosciutto's
Examine: Ham
Summary: You are given a picture of a tetrahedron that has been laid flat. Which of the triangles goes in the empty spot so that all the various lines will connect when the tetrahedron is folded up?
2-Equivariance and “Weak Pullback”

Posted by urs

Here is a math question related to strings on orbifolds which I have just submitted to sci.math.research. Replies of all kinds are very welcome.

Consider equivariant principal bundles with connection in functorial language as follows.

Let $E \to M$ be a principal $G$-bundle over a base manifold $M$. Let there be a finite group $K$ acting freely (for simplicity) on $M$ by diffeomorphisms. Let $P(M)$ be the groupoid of thin-homotopy classes of paths in $M$. Let $\mathrm{Trans}(E)$ be the transport groupoid of $E$, with objects the fibers of $E$ (which are $G$-torsors) and morphisms the torsor morphisms between them. A connection on $E$ is a smooth functor

$$\mathrm{trans} \colon P(M) \to \mathrm{Trans}(E) \,.$$

The action of a group element $k \in K$ on $M$ gives rise to an obvious functor

$$k \colon P(M) \to P(M) \,.$$

The pullback under $k$ of the bundle $E$ with connection $\mathrm{trans}$ has the transport functor

$$k^*\mathrm{trans} \;=\; \Big( P(M) \xrightarrow{\;k\;} P(M) \xrightarrow{\;\mathrm{trans}\;} \mathrm{Trans}(E) \Big) \,,$$

given simply by composing $\mathrm{trans}$ with $k$.

Next consider the quotient $M/K$ and the projection $\pi \colon M \to M/K$. Let there be a bundle $E'$ with connection $\mathrm{trans}'$ on $M/K$. We can pull it back to $M$ by

$$\mathrm{trans} = \mathrm{trans}' \circ \pi \,.$$

The bundle with connection $\mathrm{trans} = \mathrm{trans}' \circ \pi$ is invariant under $K$. What would we have to do to get something equivariant under $K$ instead?

The answer is the following: instead of pulling back $\mathrm{trans}'$ globally, we only assume that $\mathrm{trans}'$ is locally naturally isomorphic to $\mathrm{trans}$. My question is (at last): what is the general abstract nonsense notion for this idea of “weak pullback of transport functors”?

I'll make this more precise. The construction I have in mind is the following. Given $\mathrm{trans}'$ on $M/K$, choose a good covering of $M/K$ by open sets $\{U_i\}$ together with sections $s_i \colon U_i \to M$ such that

$$M = \cup_i \, s_i(U_i) \,.$$

Choose on each $s_i(U_i) \subset M$ a transport functor

$$\mathrm{trans}_i \colon P(s_i(U_i)) \to \mathrm{Trans}(E')$$

such that it is naturally isomorphic to $\mathrm{trans}'$ restricted to $U_i$, with the natural isomorphism called $L_i$ (“L”ift):

$$L_i \colon \mathrm{trans}'|_{U_i} \to \mathrm{trans}_i \,.$$

By composing the inverse of $L_i$ with $L_j$ over $U_i \cap U_j$, we get a natural isomorphism upstairs

$$L_j \circ (L_i)^{-1} \colon \mathrm{trans}_i \to \mathrm{trans}_j \quad (\text{on } U_{ij}) \,.$$

By construction, $L_j \circ (L_i)^{-1}$ is associated to an element $k \in K$.
Suppose it is possible to glue all the $\mathrm{trans}_i$ such that there is a single $\mathrm{trans}$ which restricts to them strictly:

$$\mathrm{trans}|_{s_i(U_i)} = \mathrm{trans}_i \,.$$

If this $\mathrm{trans}$ exists, it will automatically be $K$-equivariant in the following sense:

a) For each $k \in K$ there is a natural isomorphism

$$O_k \colon \mathrm{trans} \to k^*\mathrm{trans} \,.$$

b) Moreover, these natural isomorphisms automatically satisfy a triangle ‘coherence law’

$$\xrightarrow{\;O_{k_1}\;} \xrightarrow{\;O_{k_2}\;} \;=\; \xrightarrow{\;O_{k_2 k_1}\;}$$

saying that the group product is respected.

I am doing the analogous construction for equivariant 2-bundles with 2-transport in order to describe strings on orbifolds and on orientifolds. It turns out to indeed reproduce known constructions when specialized appropriately. Hence it looks like the ‘right’ thing to do. But I have the feeling that the concept of ‘weak pullback’ of $p$-functors, or whatever it should be called, which is used here, is much more general than the application to equivariant $p$-bundles might suggest. Does it have an established name? Can anyone point me to references where this is discussed more generally?

P.S. In case anyone is wondering: I am aware that in applications one is interested in the case where $K$ does not act freely. The point of the above is to derive the right notion of (2-)equivariance from the case where it does act freely, and then impose that notion on the non-freely acting setup.

Posted at November 7, 2005 5:41 PM UTC

Re: 2-Equivariance and “Weak Pullback”

There is a vast literature on weak limits (which is what a pullback is, remember) but the best I can do, I think, is refer you to the original master work “Formal Category Theory: Adjointness for 2-categories”, LNM 391, by John W. Gray. On page 217 he defines Cartesian quasi-limits as follows: Let $F \colon C \to \mathrm{Cat}$ be a 2-functor and let $P \colon [1,F] \to C$ (where the bracket thing means comma category) be the canonical projection. The so-called ‘category of sections’ of $[1,F]$ is the pullback in $2\mathrm{Cat}$ of $P^C$ and $1_C$. This defines an object $\Gamma(F)$, which is the said quasi-limit. Of course one needs to sort out the structure of 2-cats and comma cats for this to make sense, which Gray does, so it's not a ‘trivial’ categorification.

Posted by: Kea on November 7, 2005 7:59 PM | Permalink | Reply to this

Re: 2-Equivariance and “Weak Pullback”

Oh dear. Let's not forget Basic Concepts of Enriched Category Theory, London Math. Soc. Lecture Notes 64 (1982), by M. Kelly, which I think is a reprint of the original LNM of the same name, but I'm not sure.

Posted by: Kea on November 7, 2005 8:17 PM | Permalink | Reply to this

Re: 2-Equivariance and “Weak Pullback”

Thanks a lot. So you say what I am looking for is called a quasi-limit? Maybe my use of ‘pullback’ is a red herring, I wonder. Actually, in what I wrote I don't really use a pullback construction explicitly. I note that composition of transport functors with diffeos induces pullbacks on the bundles involved - but implicitly.

Hm, I gotta run now. Have to learn a little about sigma models on stacks before going to bed, in order to prepare for Eric Sharpe, who is visiting us tomorrow.
Posted by: Urs on November 7, 2005 8:25 PM | Permalink | Reply to this

Re: 2-Equivariance and “Weak Pullback”

Actually, Kelly's book talks about indexed limits, which I think were later called weighted limits. It's one of my dreams to one day understand all these concepts, but at the rate I'm going that's going to take a LONG time!

Posted by: Kea on November 7, 2005 9:24 PM | Permalink | Reply to this

Re: 2-Equivariance and “Weak Pullback”

A pity we cannot draw diagrams here. Kelly points out (page 119) that Kan extensions could replace indexed limits as the ‘proper’ index notion. My favourite illustration of this is the following: recall the triangle defining transposes and exponential objects in a Cartesian closed category. This diagram involves a total of 3 arrows and 4 objects. Now draw the (co)Kan diagram with 4 arrows and fill the spaces in with 3 2-arrows. This is precisely (if one chooses the functors properly) the categorified triangle!

Posted by: Kea on November 7, 2005 10:17 PM | Permalink | Reply to this

Re: 2-Equivariance and “Weak Pullback”

Of course I might mention that the word ‘Cartesian’ in the above is like a sledgehammer being used to open a can of worms. Once one has 2-categories floating about and one wishes to consider their products, the universal one is NOT Cartesian, but rather the Gray product.

Posted by: Kea on November 7, 2005 10:59 PM | Permalink | Reply to this

Re: 2-Equivariance and “Weak Pullback”

Hi Marni, many thanks indeed for all these comments. I do appreciate it. I feel, however, a little lost! :-) Maybe with your assistance I'll be able to see more clearly how what you have in mind applies to my problem. Maybe I should reformulate it as follows, more abstractly.

Say I have a functor $F \colon S \to T$. Also assume I have another functor $p \colon S' \to S$. In the example that I talked about, it happened to be the case that the mere composition of these functors,

$$S' \xrightarrow{\;p\;} S \xrightarrow{\;F\;} T \,,$$

induced a pullback on some of the data defining these functors. This led me, maybe unwisely, to mention the term pullback, even though there is no pullback cone involved here. Not directly, at least.

I was interested furthermore in the situation where there are invertible automorphisms $k \colon S' \to S'$ acting on $S'$ such that

$$S' \xrightarrow{\;k\;} S' \xrightarrow{\;p\;} S \;=\; S' \xrightarrow{\;p\;} S \,.$$

This implies that the gadget that I, maybe unwisely, called the pullback of $F$, namely

$$S' \xrightarrow{\;p\;} S \xrightarrow{\;F\;} T \,,$$

is also invariant under composition with these automorphisms.

I am interested in something weaker than that. I want to cook up from $F$ a functor $\Phi(F)$,

$$\Phi(F) \colon S' \to T \,,$$

which is not plain invariant, but something I dared to call equivariant (which is possibly also not the best terminology, even though it amounts, in the special example I considered, to standard equivariance). Namely, I want there to be a natural isomorphism $O_k$,

$$O_k \colon \Phi(F) \circ k \to \Phi(F) \,.$$

I indicated a way to construct such a $\Phi(F)$ in the special example that I am interested in. What I am trying to find out is whether that construction is known in more general terms.
Right now I cannot say if what you hinted at is indeed secretly an approach to an answer to that question. If it is, you need to help me see how! :-)

Posted by: Urs on November 8, 2005 9:38 PM | Permalink | Reply to this

Re: 2-Equivariance and “Weak Pullback”

Hi Urs. Sorry, I guess I got a little carried away with the weak pullback idea. I'm just hoping that you'll start putting all that creative energy into true tricategorical constructions rather than String-specific ones. Anyway, putting aside the equivariance question for a second: assuming you are really dealing with pseudofunctors, then the diagram for $O_k$ lives in 2Cat proper and is thus a pseudonatural transformation, which is automatically invertible.

A definition of 2-equivariance? This seems like a good question for a mathematician! But as far as I can tell it MUST involve limits. The usual $EG \times M/G$ (homotopically the same as $M/G$ for a free action) is defined using pullbacks. I googled on this and did actually come up with some stuff about G-sheaves and Kan extensions, but I'm afraid it all looked rather mysterious to me. I'll keep thinking about it.

Posted by: Kea on November 8, 2005 11:29 PM | Permalink | Reply to this

Re: 2-Equivariance and “Weak Pullback”

“assuming you are really dealing with pseudofunctors then the diagram for $O_k$ lives in 2Cat proper”

Wait, in what I wrote about equivariance of bundles, the $O_k$ are natural transformations between parallel transport 1-functors. So the triangle diagram satisfied by them lives in the functor category, which, for the case I discussed, is a 1-category. But what I am of course really interested in is the categorification of that. This yields transport 2-functors, and the $O_k$ are then pseudonatural transformations between these, satisfying a ‘solid’ triangle in the corresponding 2-functor 2-category. The triangle is filled with a modification of pseudonatural transformations, which itself then makes a tetrahedron diagram 2-commute.

“which is automatically invertible”

True. In fact, all natural or pseudonatural transformations or modifications thereof in my setup are invertible, since the transport (1-, 2-)functors they act between all take values in groupoids.

“A definition of 2-equivariance? This seems like a good question for a mathematician!”

Yes, maybe. With all due modesty, I am claiming to have a good notion of 2-equivariance. It is ‘good’ in the sense that it is the right thing to handle gerbes with connection and curving on orbifolds and orientifolds. It's the rather obvious generalization of the equivariance condition in the way I stated it in the above entry for bundles. With that 2-equivariance definition in hand, I was now wondering if maybe I was reinventing the wheel. If maybe this is just a special case of some well-known general abstract nonsense.

“The usual $EG \times M/G$ (homotopically same as $M/G$ for free action) is defined using pullbacks.”

Hm, I am not sure I see what you are talking about. I assume $EG$ is supposed to denote the universal $G$-bundle. Now what precisely is defined using pullbacks? Sorry.

Posted by: Urs on November 9, 2005 7:55 PM | Permalink | Reply to this

Re: 2-Equivariance and “Weak Pullback”

You can find Kelly's Enriched book, which you mention, from the London Math. Soc. Lecture Note Series, retyped recently at the TAC archive as link number 8 on the page.

Posted by: Zoran Skoda on November 24, 2005 9:44 PM | Permalink | Reply to this
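Spelling out the triangle law from the original post in formulas -- using the post's convention $O_k \colon \mathrm{trans} \to k^*\mathrm{trans}$ and the standard contravariance $(k_2 k_1)^* = k_1^* \circ k_2^*$; the precise bookkeeping is an assumption, since the post only draws composable arrows:

$$O_{k_2 k_1} \;=\; k_1^*\!\left(O_{k_2}\right) \circ O_{k_1} \;\colon\; \mathrm{trans} \longrightarrow k_1^* k_2^* \,\mathrm{trans} \;=\; (k_2 k_1)^* \,\mathrm{trans} \,.$$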
Minimizing Area word problem...Help?

A 24-in piece of string is cut into two pieces. One piece is used to form a circle while the other is used to form a square. How should the string be cut so that the sum of the areas is a minimum?

I'm having a hard time setting this one up. Would anyone be willing to help me? Thank you in advance.

Re: Minimizing Area word problem...Help?

j0hnx777 wrote: I'm having a hard time setting this one up.

What have you tried? What variable did you start with, etc?

j0hnx777 wrote: A 24-in piece of string is cut into two pieces. One piece is used to form a circle while the other is used to form a square. How should the string be cut so that the sum of the areas is a minimum?

If one of the cut pieces has length "s", what expression gives the other length? (Use the "how much is left" idea they describe here.)

Let's say you use the s-inch piece for the square. Then the sides are s/4, so what is the area?

The other piece is for the circle, and its length is the circumference. Plug this into the formula C = 2(pi)r to get r in terms of s. Then plug this into the formula A = (pi)r^2.

Once you have the two areas, you can add them and then find the vertex for the minimum.

If you get stuck, please write back showing how far you got in doing the steps I've listed. Thanks!
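Carrying the outline above through to the end (a sketch of the algebra, using the tutor's variable s; nothing here beyond standard calculus/vertex work): let s be the piece bent into the square, so each side is s/4 and the square's area is s^2/16. The remaining 24 - s is the circle's circumference, so 2(pi)r = 24 - s gives r = (24 - s)/(2 pi), and the circle's area is (24 - s)^2/(4 pi). The total area is

\[ A(s) = \frac{s^2}{16} + \frac{(24-s)^2}{4\pi}, \qquad 0 \le s \le 24. \]

Setting $A'(s) = \frac{s}{8} - \frac{24-s}{2\pi} = 0$ (equivalently, taking the vertex of this upward-opening quadratic) gives

\[ s = \frac{96}{\pi+4} \approx 13.4 \text{ in for the square}, \qquad 24 - s = \frac{24\pi}{\pi+4} \approx 10.6 \text{ in for the circle}. \]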
Complete Sets of Unifiers and Matchers in Equational Theories

Results 1 - 10 of 13

- ACM Computing Surveys, 1989. Cited by 103 (0 self).
"The unification problem and several variants are presented. Various algorithms and data structures are discussed. Research on unification arising in several areas of computer science is surveyed; these areas include theorem proving, logic programming, and natural language processing. Sections of the paper include examples that highlight particular uses."

- 1990. Cited by 25 (1 self).
"... unification problems. Then, in this framework, we develop a new unification algorithm for a λ-calculus with dependent function (Π) types. This algorithm is especially useful as it provides for mechanization in the very expressive Logical Framework (LF). The general idea is to use a λ-calculus as a meta-language for representing various other languages (object-languages). The rich structure of a typed λ-calculus, as opposed to traditional first-order abstract syntax trees, allows us to express rules, e.g., program transformation and inference rules. We can then use unification in the meta-language to mechanize application of these rules. The development involves significant complications not arising in Huet's corresponding algorithm for the simply typed λ-calculus, primarily because it must deal with ill-typed terms. We then extend this algorithm, first for dependent product (Π) types, and second for implicit polymorphism. In the latter case, the algorithm is incomplete, though still quite useful in practice. The last part of the dissertation provides examples of the usefulness of the algorithms."

- Proceedings of the 10th International Conference on Automated Deduction, volume 449 of LNAI, 1992. Cited by 24 (1 self).
"In functional programming environments, one can use types as search keys in program libraries, if one disregards trivial differences in argument order or currying. A way to do this is to identify types that are isomorphic in every Cartesian closed category; simpler put, types should be identified if they are equal under an arithmetic interpretation, with Cartesian product as multiplication and function space as exponentiation. When the type system is polymorphic, one may also want to retrieve identifiers of types more general than the query. This paper describes a method to do both, that is, an algorithm for pattern matching modulo canonical CCC-isomorphism. The algorithm returns a finite complete set of matchers. An implementation shows that satisfactory speed can be achieved for library search."
Contents: 1 Introduction; 2 Unification/Matching in Equational Theories; 3 Comparison with Previous Work; 4 An Algorithm for Γ-matching; 5 Practical Experience of Library Search; 6 ...

- Proceedings of 1998 Joint International Conference and Symposium on Logic Programming, 1998. Cited by 11 (6 self).
"We review and compare the main techniques considered to represent finite sets in logic languages. We present a technique that combines the benefits of the previous techniques, avoiding their drawbacks. We show how to verify satisfiability of any conjunction of (positive and negative) literals based on =, ⊆, ∈, and ∪, ∩, \, and ||, viewed as predicate symbols, in a (hybrid) universe of finite sets. We also show that ∪ and || (i.e., disjointness of two sets) are sufficient to represent all the above mentioned operations."

- 1997. Cited by 10 (7 self).
"A unification algorithm is said to be minimal for a unification problem if it generates exactly a (minimal) complete set of most-general unifiers, without instances, and without repetitions. The aim of this paper is to present a combinatorial minimality study for a significant collection of sample problems that can be used as benchmarks for testing any set-unification algorithm. Based on this combinatorial study, a new Set-Unification Algorithm (named SUA) is also described and proved to be minimal for all the analyzed problems. Furthermore, an existing naive set-unification algorithm has also been tested to show its bad behavior for most of the sample problems."

- In Proc. 7th Conf. Functional Programming Languages and Computer Architecture, 1995. Cited by 9 (1 self).
"Numeric types can be given polymorphic dimension parameters, in order to avoid dimension errors and unit errors. The most general dimensions can be inferred automatically. It has been observed that polymorphic recursion is more important for the dimensions than for the proper types. We show that, under polymorphic recursion, type inference amounts to syntactic semi-unification of proper types, followed by equational semi-unification of dimensions. Syntactic semi-unification is unfortunately undecidable, although there are procedures that work well in practice, and proper types given by the programmer can be checked. However, the dimensions form a vector space (provided that their exponents are rational numbers). We give a polynomial-time algorithm that decides if a semi-unification problem in a vector space can be solved and, if so, returns a most general semi-unifier."
1 Introduction. We will combine three good things as far as possible: dimension types, polymorphic recursion, and aut...

- 1999. Cited by 9 (8 self).
"In this paper we show how to extend a set unification algorithm -- i.e., an extended unification algorithm incorporating the axioms of a simple theory of sets -- to hyperset unification, that is, to sets in which, roughly speaking, membership can form cycles. This is obtained by enlarging the domain from that of terms (hence, trees) to that of graphs involving free as well as interpreted function symbols (namely, the set element insertion and the empty set), which can be regarded as a convenient denotation of hypersets. We present a hyperset unification algorithm which (non-deterministically) computes, for each given unification problem, a finite collection of systems of equations in solvable form whose solutions represent a complete set of solutions for the given unification problem. The crucial issue of termination of the algorithm is addressed and solved by the addition of simple non-membership constraints. Finally, the hyperset unification problem dealt with is proved to be NP-comp..."

- In Proceedings of the 13th International Conference on Automated Deduction, M.A. McRobbie and J.K. Slaney (Eds.), Springer LNAI 1104, 1996. Cited by 4 (0 self).
"We establish that there is no polynomial-time general combination algorithm for unification in finitary equational theories, unless the complexity class #P of counting problems is contained in the class FP of function problems solvable in polynomial time. The prevalent view in complexity theory is that such a collapse is extremely unlikely for a number of reasons, including the fact that the containment of #P in FP implies that P = NP. Our main result is obtained by establishing the intractability of the counting problem for general AG-unification, where AG is the equational theory of Abelian groups. Specifically, we show that computing the cardinality of a minimal complete set of unifiers for general AG-unification is a #P-hard problem. In contrast, AG-unification with constants is solvable in polynomial time. Since an algorithm for general AG-unification can be obtained as a combination of a polynomial-time algorithm for AG-unification with constants and a polynomial-time ..."

- 1994. Cited by 3 (2 self).
"A substitution σ AG-semi-unifies the inequation s ≤_AG t iff there is another substitution ρ such that ρ(σ(s)) =_AG σ(t), where =_AG is equality in Abelian groups.
I give an algorithm that decides if an inequation has an AG-semi-unifier and, if so, returns a most general one. This is a first step towards type derivation in programming languages with dimension types and polymorphic recursion. Key words: algorithms; Abelian groups; equational semi-unification; programming languages; compilers; dimension types; polymorphic recursion.

1 Introduction. This article describes an algebraic algorithm, which is related to unification. Such an algorithm seems to be necessary (but not sufficient) for type derivation in some programming languages that have dimension types as well as polymorphic recursion. I will only sketch the kind of type system I am aiming at; the technical contribution is the algebraic algorithm.

1.1 Dimension types. Most type systems allow programmers to make dimensi..."
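All of the entries above build on the basic notion of a (most-general) unifier, so here is a minimal first-order syntactic unification sketch for orientation. It is written in Python purely for illustration; it is not the set-, AG-, or higher-order unification of the papers above, and the term encoding (variables as strings, compound terms as tuples) is this sketch's own convention.

    # Minimal first-order syntactic unification (Robinson-style).
    # Terms: variables are plain strings (e.g. '?x'); compound terms and
    # constants are tuples ('functor', arg1, ..., argn) -- a constant is
    # a 1-tuple like ('a',).

    def walk(t, subst):
        # Follow variable bindings until a non-variable or an unbound var.
        while isinstance(t, str) and t in subst:
            t = subst[t]
        return t

    def occurs(v, t, subst):
        # Occurs check: does variable v appear inside term t?
        t = walk(t, subst)
        if t == v:
            return True
        if isinstance(t, tuple):
            return any(occurs(v, a, subst) for a in t[1:])
        return False

    def unify(s, t, subst=None):
        # Returns a most-general unifier as a dict, or None on failure.
        if subst is None:
            subst = {}
        s, t = walk(s, subst), walk(t, subst)
        if s == t:
            return subst
        if isinstance(s, str):                  # s is an unbound variable
            return None if occurs(s, t, subst) else {**subst, s: t}
        if isinstance(t, str):                  # symmetric case
            return unify(t, s, subst)
        if s[0] != t[0] or len(s) != len(t):    # functor/arity clash
            return None
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst

For example, unify(('f', '?x', ('g', '?x')), ('f', ('a',), ('g', ('a',)))) returns {'?x': ('a',)}, the most general unifier of f(x, g(x)) and f(a, g(a)).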
The PARTICLE_TRACE procedure traces the path of a massless particle through a vector field, given a set of starting points (or seed points). Particles are tracked by treating the vector field as a velocity field and integrating. Each particle is tracked from the seed point until the path leaves the input volume or a maximum number of iterations is reached. The vertices generated along the paths are returned packed into a single array, along with a polyline connectivity array. The polyline connectivity array organizes the vertices into separate paths (one per seed). Each path has an orientation. The initial orientation may be set using the SEED_NORMAL keyword. As a path is tracked, the change in the normal is also computed and may be returned to the user as the optional Normals argument. Path output can be passed directly to an IDLgrPolyline object or passed to the STREAMLINE procedure for generation of ribbons. Control over aspects of the integration (e.g., method or step size) is also provided.

Example

The following procedure calculates the path of three particles through a vector field representing wind and plots it against a background displaying the vector field itself.

PRO ex_particle_trace2
   ; Restore u, v, x, and y variables containing wind data.
   ; (The SUBDIRECTORY argument below is the standard location of the
   ; IDL example data; it was cut off in the source text.)
   RESTORE, FILEPATH('globalwinds.dat', $
      SUBDIRECTORY=['examples', 'data'])
   ; Build a data array from the wind data.
   data = FLTARR(2, 128, 64)
   data[0, *, *] = u
   data[1, *, *] = v
   ; Define starting points for the streamlines.
   seeds = [[32, 32], [64, 32], [96, 32]]
   ; Calculate the vertex and connectivity data for the
   ; streamline paths.
   PARTICLE_TRACE, data, seeds, verts, conn, MAX_ITERATIONS=30
   ; Plot the underlying vector field.
   VELOVECT, u, v, x, y, COLOR='AAAAAA'x
   ; Overplot the streamlines.
   i = 0
   sz = SIZE(verts, /STRUCTURE)
   WHILE (i LT sz.dimensions[1]) DO BEGIN
      nverts = conn[i]
      ; OLD METHOD: Uses 'x' and 'y' as lookup tables, losing the fractional
      ; part of the result, ending up only using data grid points.
      PLOTS, x[verts[0, conn[i+1:i+nverts]]], $
         y[verts[1, conn[i+1:i+nverts]]], $
         COLOR='0000FF'x, THICK=2
      ; NEW METHOD: Find 'index' locations of desired points, then scale them
      ; correctly to exact degrees. Plotting both to show improvement.
      xIndices = verts[0, conn[i+1:i+nverts]]
      yIndices = verts[1, conn[i+1:i+nverts]]
      xDeg = (xIndices / 128) * 360 - 180
      yDeg = (yIndices / 64) * 180 - 90
      PLOTS, xDeg, yDeg, COLOR='0000FF'x, THICK=2
      i += nverts + 1
   ENDWHILE
END

Syntax

PARTICLE_TRACE, Data, Seeds, Verts, Conn [, Normals] [, MAX_ITERATIONS=value] [, ANISOTROPY=array] [, INTEGRATION={0 | 1}] [, SEED_NORMAL=vector] [, TOLERANCE=value] [, MAX_STEPSIZE=value] [, /UNIFORM]

Arguments

Data

A three- or four-dimensional array that defines the vector field. Data can be of dimensions [2, dx, dy] for a two-dimensional vector field or [3, dx, dy, dz] for a three-dimensional vector field, where:
• Data[0,*,*] or Data[0,*,*,*] contains the X components of the two- or three-dimensional vector field (commonly referred to as U).
• Data[1,*,*] or Data[1,*,*,*] contains the Y components of the two- or three-dimensional vector field (commonly referred to as V).
• Data[2,*,*,*] contains the Z components of the three-dimensional vector field (commonly referred to as W).

Seeds

An array of two- or three-element vectors ([2, n] or [3, n]) specifying the indices of the n points in the Data array at which the tracing operation is to begin. The result will be n output paths.

Verts

A named variable that will contain the output path vertices as a [2, n] or [3, n] array of floating-point values.
Conn

A named variable that will contain the output path connectivity array in IDLgrPolyline POLYLINES keyword format. There is one set of line segments in this array for each input seed point.

Normals

A named variable that will contain the output normal estimate at each output vertex (a [3, n] array of floats).

Keywords

ANISOTROPY

Set this input keyword to a two- or three-element array describing the distance between grid points in each dimension. The default value is [1.0, 1.0, 1.0] for three-dimensional data and [1.0, 1.0] for two-dimensional data.

INTEGRATION

Set this keyword to one of the following values to select the integration method:
• 0 = 2nd-order Runge-Kutta (the default)
• 1 = 4th-order Runge-Kutta

SEED_NORMAL

Set this keyword to a three-element vector which selects the initial normal for the paths. The default value is [0.0, 0.0, 1.0]. This keyword is ignored for 2-D data.

TOLERANCE

This keyword is used with adaptive step-size control in the 4th-order Runge-Kutta integration scheme. It is ignored if the UNIFORM keyword is set or the 2nd-order Runge-Kutta scheme is selected.

MAX_ITERATIONS

This keyword specifies the maximum number of line segments to return for each path. The default value is 200.

MAX_STEPSIZE

This keyword specifies the maximum path step size. The default value is 1.0.

UNIFORM

If this keyword is set, the step size will be set to a fixed value, set via the MAX_STEPSIZE keyword. If this keyword is not specified, and TOLERANCE is either unspecified or inapplicable, then the step size is computed based on the velocity at the current point on the path, according to the formula:

stepsize = MIN(MaxStepSize, MaxStepSize/MAX(ABS(U), ABS(V), ABS(W)))

where (U, V, W) is the local velocity vector.
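To make the INTEGRATION=0 choice concrete, here is a hedged sketch of 2nd-order (midpoint) Runge-Kutta tracing through a 2-D field, written in Python/NumPy for illustration only. It is NOT IDL's implementation, and the nearest-neighbor velocity lookup is a simplification (a real tracer would interpolate):

    import numpy as np

    def inside(p, shape):
        # shape = (2, ny, nx); require the point to stay in the grid
        return 0.0 <= p[0] <= shape[2] - 1 and 0.0 <= p[1] <= shape[1] - 1

    def trace_rk2(uv, seed, h=1.0, max_iter=200):
        # uv: (2, ny, nx) array of velocities; seed: (x, y) in index coords
        def vel(p):
            ix, iy = int(round(p[0])), int(round(p[1]))
            return np.array([uv[0, iy, ix], uv[1, iy, ix]])
        path = [np.asarray(seed, dtype=float)]
        for _ in range(max_iter):
            p = path[-1]
            mid = p + 0.5 * h * vel(p)      # half step with local velocity
            if not inside(mid, uv.shape):
                break                        # path left the volume
            p_next = p + h * vel(mid)        # full step with midpoint velocity
            if not inside(p_next, uv.shape):
                break
            path.append(p_next)
        return np.array(path)

A uniform step h plays the role of MAX_STEPSIZE with /UNIFORM set; the adaptive rule quoted under UNIFORM above would instead shrink the step wherever the local velocity is large.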
Here's the question you clicked on:

expand the logarithmic expression logb square root 57/74

Best Response:

If you mean: \[\log_{b} \sqrt{\frac{57}{74}} = \log_{b}\left(\frac{57}{74}\right)^{\frac{1}{2}}\] you could use two rules: \[\log_{b} x^a = a \cdot \log_{b} x\] and: \[\log_{b}\frac{p}{q} = \log_{b} p - \log_{b} q\]

(In that order, btw...)
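Applying the two rules in that order completes the expansion (standard log algebra, finishing the answer above):

\[ \log_b \sqrt{\frac{57}{74}} = \frac{1}{2}\left(\log_b 57 - \log_b 74\right). \]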
Midland Park Math Tutors

• "I have taught Mathematics in courses that include basic skills (arithmetic and algebra), probability and statistics, and the full calculus sequence. My passion for mathematics and teaching has allowed me to develop a highly intuitive and flexible approach to instruction, which has typically garnere..."
7 Subjects: including algebra 1, algebra 2, calculus, geometry

• "...And that you really learn something to meet your goals. Grades are important, but the challenge of learning a new discipline is more rewarding after all is said and done. And you will feel great when you have made your dreams and your goals come true."
30 Subjects: including algebra 1, biology, chemistry, sewing

• "Having worked with a number of students with learning differences, I have developed strategies to help these alternative learners more effectively access confusing concepts, in turn bolstering their academic confidence. I always look forward to meeting new students and helping each to meet his/h..."
33 Subjects: including prealgebra, algebra 1, English, Spanish

• "It has been my experience that a student can learn best when he/she understands how he/she can best learn. I first began to tutor at an after-school center. There I was able to work with students who had challenging learning needs."
27 Subjects: including algebra 1, elementary (k-6th), grammar, writing

• "I love words, and I think it helps students that I'm able to define the words we encounter in a fun and relatable way, without the aid of a dictionary. I also teach a number of memory techniques to assist students in building their vocabularies and to aid in the memorization they do for other su..."
36 Subjects: including probability, calculus, algebra 1, ACT Math
[R] ploting an ellipse keeps giving errors

Martin Maechler  maechler at stat.math.ethz.ch
Thu Oct 28 11:18:34 CEST 2004

>>>>> "Sun" == Sun <sun at cae.wisc.edu>
>>>>>     on Wed, 27 Oct 2004 04:25:00 -0500 writes:

  Sun> Thank you. I found there are two ellipses:
  Sun> 1. R 2.0, library(car)
  Sun> 2. R 1.9 and R 2.0, library(ellipse)
  Sun> And they are different! I can't run 1.
  Sun> But 2. is kind of specialized for t-distribution confidence and so on.
  Sun> I need to find a general ellipse for an ellipse equation like
  Sun>     (x-x0)^2/a + (y-y0)^2/b = 1,
  Sun> since I used the chi-square percentile, not t. I am trying to obtain
  Sun> the large-sample 95% simultaneous confidence ellipse for two
  Sun> population means (say, weight and height). The inputs are the two
  Sun> sample means and their covariances.
  Sun> Maybe I have to make my own ellipse function.

maybe not. There is more (than you mentioned above) available:

1) The recommended (i.e., you don't have to install it) package "cluster" has

   a. ellipsoidhull() {which is not what you need directly}, which returns
      an (S3) object of class 'ellipsoid', and there are methods for such
      objects: predict(<ellipsoid>) computes points you can draw. Its help
      page tells you how an "ellipsoid" object must look (they *are* defined
      in terms of a cov()-matrix, a center and a "radius^2"), and it also
      has examples.

   b. ellipsoidPoints() is the function you can really directly use. A
      version of the following example will be in the next version of
      cluster:

      library(MASS)  ## for cov.rob()
      ## Robust vs. L.S. covariance matrix
      x <- rt(200, df=3)
      y <- 3*x + rt(200, df=2)
      plot(x,y, main="non-normal data (N=200)")
      X <- cbind(x,y)
      C.ls <- cov(X) ; m.ls <- colMeans(X)
      Cxy <- cov.rob(cbind(x,y))
      lines(ellipsoidPoints(C.ls, d2 = 2, loc=m.ls), col="green")
      lines(ellipsoidPoints(Cxy$cov, d2 = 2, loc=Cxy$center), col="red")

2) The 'sfsmisc' package has a complementary, useful ellipsePoints()
   function for ellipses ``given by geometry'', the help of which starts
   with:

   >> Compute Radially Equispaced Points on Ellipse
   >> Description:
   >>   Compute points on (the boundary of) an ellipse which is given by
   >>   elementary geometric parameters.
   >> Usage:
   >>   ellipsePoints(a, b, alpha = 0, loc = c(0, 0), n = 201)
   >> Arguments:
   >>     a,b: length of half axes in (x,y) direction.
   >>   alpha: angle (in degrees) giving the orientation of the ellipse,
   >>          i.e., the original (x,y)-axis ellipse is rotated by 'angle'.
   >>     loc: center (LOCation) of the ellipse.
   >>       n: number of points to generate.

   and it has a "nice" example of ellipse drawing, even a movie of a
   rotating ellipse...

Martin Maechler, ETH Zurich

More information about the R-help mailing list
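For reference, the geometry behind the two functions above (standard formulas; that the packages compute them exactly this way is an assumption, but it matches their help pages): ellipsePoints(a, b, alpha, loc) describes the curve

\[ \begin{pmatrix} x(\theta) \\ y(\theta) \end{pmatrix} = \mathrm{loc} + \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix} \begin{pmatrix} a\cos\theta \\ b\sin\theta \end{pmatrix}, \qquad \theta \in [0, 2\pi), \]

while ellipsoidPoints(C, d2, loc) works from a covariance matrix $C$, center $\mu = \mathrm{loc}$ and squared radius $d^2$: the drawn boundary is the set $\{\,x : (x-\mu)^{\top} C^{-1} (x-\mu) = d^2\,\}$, which is exactly the shape needed for a chi-square-based confidence ellipse like the original poster's.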
{"url":"https://stat.ethz.ch/pipermail/r-help/2004-October/059856.html","timestamp":"2014-04-19T19:36:02Z","content_type":null,"content_length":"5847","record_id":"<urn:uuid:377630c6-8591-48a7-9452-93d82b6db2da>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Valid example of proof by contradiction?

May 9th 2011, 11:30 PM — Valid example of proof by contradiction?
I'm studying "How To Prove It" (Velleman) and I'm on exercise 3.7. The solution I gave is a "proof by contradiction" and I would like to verify that the method of proof and result are valid.

$\mbox{Suppose that }a\mbox{ is a real number. Prove that if }a^3 > a\mbox{ then }a^5 > a.\\ \mbox{Suppose }a^5 \leq a\mbox{. Then }a^5 - a \leq 0\mbox{ and }a^5 - a = (a^3 - a)(a^2 + 1) \leq 0.\\ \mbox{However, we know that }a^3 > a\mbox{, so the first factor }a^3 - a > 0.\\ \mbox{The second factor is also positive since }x \ne 0 \rightarrow x^2 > 0\mbox{ for all }x \in \mathbb{R},\\ \mbox{which implies that }a^2 + 1 > 0\mbox{ as well.}\\ \mbox{This leads to a contradiction, however; therefore }a^3 > a \rightarrow a^5 > a.$

Is this a well-formed proof? How explicit do I have to be? For example, is it OK for me to leave out the fact that for a, b in R, a > 0 and b > 0 implies that ab > 0? If so, then what else could I have left out? Maybe these things will be more clear once I'm working on more complicated proofs?

May 9th 2011, 11:58 PM
Yes, it is.

> How explicit do I have to be? For example, is it OK for me to leave out the fact that for a, b in R, a > 0 and b > 0 implies that ab > 0? If so, then what else could I have left out?

Depends on the context. For example, I suppose you are allowed to use all the properties appearing in your proof.

May 10th 2011, 01:01 AM
Thanks for the feedback! Could you give me an example of a situation where you can't use certain properties (besides the obvious ones of: you are proving that particular property, or using a property that is an extension of the one you are proving)?

May 10th 2011, 01:10 AM
I agree that your proof is fine. To be extremely picky, I would only make some stylistic changes.

Suppose that a is a real number. Prove that if a^3 > a then a^5 > a.
Suppose that a^3 > a, but a^5 <= a. Then a^5 - a <= 0 and a^5 - a = (a^3 - a)(a^2 + 1) <= 0. However, we know that a^3 > a, so the first factor a^3 - a > 0. The second factor is also positive since x^2 >= 0 for all x in R, which implies that a^2 + 1 > 0 as well. This leads to a contradiction with the fact that (a^3 - a)(a^2 + 1) <= 0, however; therefore, a^3 > a -> a^5 > a.

In particular, one feature of a good proof is uniform complexity, when the amount of reasoning required to go from one statement to the next is about the same throughout the proof. I hate it when one particular step in some textbook proofs is much more complicated than the others; it has to be broken into several steps. Here the "proof speed" is supposed to be pretty low, i.e., even rather simple steps need to be explained. I found the following sentence:

> The second factor is also positive since x != 0 -> x^2 > 0 for all x in R, which implies that a^2 + 1 > 0 as well.

to be more complicated than the others. First, it was not stated that a != 0 (though it is obvious from the fact that a^3 - a > 0) and, second, a^2 + 1 > 0 even when a = 0.

May 10th 2011, 01:24 AM
Thanks for the excellent feedback! It gives me a much better idea of what's involved in a proof. My original conception was that a proof must follow some very specific steps, otherwise it isn't a valid proof (maybe this idea comes from my background in programming). However, it seems that as long as there are no errors in logic, simply transforming the premise into the conclusion with a well-worded explanation is fine.
May 10th 2011, 01:54 AM an alternate, and much (in my opinion) more direct contradiction would be: ....then (a^3 - a)(a^2 + 1) ≤ 0. since a^2 + 1 > 0 for all a, we have that a^3 - a ≤ 0, that is, a^3 ≤ a, contradicting that a^3 > a. (assuming, of course, that one has proved already that a^2 ≥ 0, so that a^2 + 1 > a^2 ≥ 0). May 10th 2011, 02:26 AM My original conception was that a proof must follow some very specific steps, otherwise it isn't a valid proof (maybe this idea comes from my background in programming). Ultimately, this is correct. In fact, you'll be happy to know that proofs are programs. However, when proofs are intended to be read by people, some steps are omitted and proofs are presented as you say.
{"url":"http://mathhelpforum.com/discrete-math/180070-valid-example-proof-contradiction-print.html","timestamp":"2014-04-17T05:28:44Z","content_type":null,"content_length":"11398","record_id":"<urn:uuid:df447a23-efb6-403d-a393-e9209d3028b2>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: Bessel [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: RE: Bessel From Jhilbe@aol.com To statalist@hsphsun2.harvard.edu Subject st: RE: Bessel Date Sun, 2 Oct 2005 13:53:31 EDT Thanks for the feedback. I'll check with Bobby. For those who don't know my purpose in asking, I'm interested in programming a PIG (or Holla) model in Stata, which is a Poisson-Inverse Gaussian mixture model. I did it using MathStatica 1.5, which is a statistical add-on package to Mathematica 5.2. But since a type 2 Bessel function is involved in both the PMF and LL functions, it may be difficult to program it using Stata facilities. Hence my query about any good approximations - or even a programmed Bessel type 2 function itself. Since the Bessel is based on the Gamma, I thought perhaps someone might have found a shortcut to the Bessel using Stata's gamma - or a related - function. I found an approximation to the Bessel Type 2 in an old journal article. Since it is an approximation anyhow, I further simplified it by calculating pi/2 and pi/4 to their numeric values (to 4 decimal points), which are 1.5708 and 0.7854 respectively. The actual approximation I found was given as: J_n(x) ~ sqrt(2/(pi*x)) * sin(x-(2*n+1)*(pi/4). Kit Baum provided me with a reference to Fortran source code for the function. I'll take a look at that as well. It just may be that there will be too much work involved to program the PIG at this time. However, it is has useful properties for modeling the response as the values of the counts slowly decrease with their increasing value. Joe Hilbe Date: Sat, 1 Oct 2005 17:43:21 +0100 From: "Nick Cox" <n.j.cox@durham.ac.uk> Subject: st: RE: RE: Bessel Bobby Gutierrez needed some Bessel function for some purpose and wrote an undocumented helper program. I can't remember which and can't find the code. I once wrote a rather specific Stata program for I_0, not what you specify here. 1.5708 means _pi/2 here. Odds are you need to transliterate formulae from Abramowitz and Stegun's handbook to Stata or Mata. > To be more specific, has anyone made a Bessel of the 2nd > kind function in > Stata? An approximation is sqrt(1.5708/x) * > sin(x-0.7854(2y+1)) for values > closer to the max values and > -[ (2^n)(n-1)! /pi ] * x^(-n) for values closer to 0. If > someone has a > better approximation, or the "exact" function in Stata, I'd > love to see it. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
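A quick numerical check of the approximation quoted above, added by the editor (this sketch is not part of the original Statalist thread; it uses Python with SciPy rather than Stata, and assumes the ordinary Bessel function of the second kind Y_n is the intended target, since sqrt(2/(pi*x)) * sin(x - (2n+1)*pi/4) is the standard large-x asymptotic of Y_n):

import numpy as np
from scipy.special import yn  # Bessel function of the second kind, integer order

def y_approx(n, x):
    # Large-x approximation quoted in the thread:
    # Y_n(x) ~ sqrt(2/(pi*x)) * sin(x - (2n + 1)*pi/4)
    return np.sqrt(2.0 / (np.pi * x)) * np.sin(x - (2 * n + 1) * np.pi / 4.0)

for n in (0, 1, 2):
    for x in (5.0, 10.0, 50.0):
        exact = yn(n, x)
        approx = y_approx(n, x)
        print(f"n={n}  x={x:5.1f}  Y_n={exact:+.6f}  approx={approx:+.6f}  "
              f"abs err={abs(exact - approx):.2e}")

The error shrinks as x grows, which is why the thread treats this form as suitable only away from zero; near x = 0 the separate small-x expression quoted in the original message would be needed instead.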
{"url":"http://www.stata.com/statalist/archive/2005-10/msg00021.html","timestamp":"2014-04-20T09:28:20Z","content_type":null,"content_length":"6884","record_id":"<urn:uuid:e21245da-deed-4526-958c-94ff7d13abff>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Proposition 11

To draw a straight line perpendicular to a given plane from a given elevated point.

Let A be the given elevated point, and the plane of reference the given plane. It is required to draw from the point A a straight line perpendicular to the plane of reference.

Draw any straight line BC at random in the plane of reference, and draw AD from the point A perpendicular to BC. Then if AD is also perpendicular to the plane of reference, then that which was proposed is done. But if not, draw DE from the point D at right angles to BC and in the plane of reference, draw AF from A perpendicular to DE, and draw GH through the point F parallel to BC.

Now, since BC is at right angles to each of the straight lines DA and DE, therefore BC is also at right angles to the plane through ED and DA. And GH is parallel to it, but if two straight lines are parallel, and one of them is at right angles to any plane, then the remaining one is also at right angles to the same plane, therefore GH is also at right angles to the plane through ED and DA. Therefore GH is also at right angles to all the straight lines which meet it and are in the plane through ED and DA. But AF meets it and lies in the plane through ED and DA, therefore GH is at right angles to FA, so that FA is also at right angles to GH. But AF is also at right angles to DE, therefore AF is at right angles to each of the straight lines GH and DE. But if a straight line is set up at right angles to two straight lines which cut one another at their intersection point, then it also is at right angles to the plane through them. Therefore FA is at right angles to the plane through ED and GH. But the plane through ED and GH is the plane of reference, therefore AF is at right angles to the plane of reference.

Therefore from the given elevated point A the straight line AF has been drawn perpendicular to the plane of reference.
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookXI/propXI11.html","timestamp":"2014-04-17T21:39:22Z","content_type":null,"content_length":"6128","record_id":"<urn:uuid:3ab5d719-3db6-419a-b324-14169875ee67>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Evaluation of Allele Frequency Estimation Using Pooled Sequencing Data Simulation The Scientific World Journal Volume 2013 (2013), Article ID 895496, 9 pages Research Article Evaluation of Allele Frequency Estimation Using Pooled Sequencing Data Simulation ^1Vanderbilt Ingram Cancer Center, Center for Quantitative Sciences, Nashville, TN, USA ^2Center for Human Genetics Research, Vanderbilt University Medical Center, Nashville, TN, USA ^3VANTAGE, Vanderbilt University, Nashville, TN, USA Received 15 November 2012; Accepted 30 December 2012 Academic Editors: L. Han, X. Li, and Z. Su Copyright © 2013 Yan Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Next-generation sequencing (NGS) technology has provided researchers with opportunities to study the genome in unprecedented detail. In particular, NGS is applied to disease association studies. Unlike genotyping chips, NGS is not limited to a fixed set of SNPs. Prices for NGS are now comparable to the SNP chip, although for large studies the cost can be substantial. Pooling techniques are often used to reduce the overall cost of large-scale studies. In this study, we designed a rigorous simulation model to test the practicability of estimating allele frequency from pooled sequencing data. We took crucial factors into consideration, including pool size, overall depth, average depth per sample, pooling variation, and sampling variation. We used real data to demonstrate and measure reference allele preference in DNAseq data and implemented this bias in our simulation model. We found that pooled sequencing data can introduce high levels of relative error rate (defined as error rate divided by targeted allele frequency) and that the error rate is more severe for low minor allele frequency SNPs than for high minor allele frequency SNPs. In order to overcome the error introduced by pooling, we recommend a large pool size and high average depth per sample. 1. Introduction Over the last decade, large-scale genome-wide association studies (GWAS) based on genotyping arrays have helped researchers to identify hundreds of loci harboring common variants that are associated with complex traits. However, multiple disadvantages have limited genotyping arrays’ ability for disease association detection. A major disadvantage of genotyping arrays is the limited power for detecting rare disease variance. Rare variants with minor allele frequency (MAF) less than 1% are not sufficiently captured by GWAS [1]. Such low MAF variants may have substantial effect sizes without showing Mendelian segregation. The lack of a functional link between the majority of the putative risk variants and the disease phenotypes is another major drawback for genotyping array-based GWAS [2]. The most popular genotyping chip, the Affymetrix 6.0 array, contains nearly 1 million SNPs, yet only one-third of these SNPs resides in the coding regions. Even though many GWAS-identified statistically significant SNPs lie in the intron or intergenic regions, [3–5] their biological function remains difficult to explain. Another limitation of genotyping arrays is that, because the SNPs are predetermined on the array, no finding of novel SNPs is possible. Most of the above limitations can be overcome by using high throughput NGS technology [6]. NGS can target a specific region of interest, such as the exome or the mitochondria. 
Often, the functions of variants identified in coding regions of interest are much easier to explain than those of variants identified in the intron or intergenic regions. Also, by targeting the exome, we can effectively examine nearly 30 million base pairs in the coding region rather than just 0.3 million SNPs on the Affymetrix 6.0 array. Sequencing technology has been used to detect rare variants in many studies [7 –10], with rare variants defined as 1%–5% frequency. Due to the large sample size needed to detect such low frequency variants, detection of rare variants less than 1% can still pose a significant challenge for NGS technology. One way to overcome this limitation is by doing a massive genotyping catalogue such as the 1000 Genomes Project [11]. Researchers are often too limited financially to conduct a genotyping study on such a large scale. DNA pooling is a strategy often used to reduce the financial burden in such cases. The concept of pooling in genetic studies began in 1985 with the first genetic study to apply a pooling strategy [12]. Since then, pooling has been extensively applied in linkage studies in plants [ 13], allele frequency measurements of microsatellite markers and single nucleotide polymorphisms (SNPs) [10, 14–18], homozygosity mapping of recessive diseases in inbred populations [19–22], and mutation detection [23]. Even though pooling has also been used widely with NGS technology [24–26], the effectiveness of the pooling strategy has long been debated. On the one hand, several studies have claimed that data generated from pooling studies are accurate and reliable. For example, Huang et al. claimed that the minor allele odds ratio estimated from pooled DNA agreed fairly well with the minor allele odds ratio estimated from individual genotyping [27]. Docherty et al. demonstrated that pooling can be effectively applied to the genome-wide Affymetrix GeneChip Mapping 500K Array [28]. Some studies have even found that pooling designs have an advantage in the detection of rare alleles and mutations, such as the study by Amos et al., which suggested that mutations in individuals could be more efficiently detected using pools [23]. On the other hand, several studies have argued that, when compared with individual sequencing, pooled sequencing can generate variant calls with high false-positive rates [29]. Other studies also found that the ability to accurately estimate the allele frequency from pooled sequencing is limited [30, 31]. Usually two different kinds of pooling paradigms are involved. The first is multiplexing (also known as barcoding). On an Illumina HiSeq 2000 sequencer, one lane can generate, on average, from 100 to 150 million reads per run. For exome sequencing, from 30 to 40 million reads per sample are needed to generate reliable coverage in the exome for variant detection. Thus, the common practice is to multiplex from 3 to 4 samples per lane to reduce cost. Using multiplexing with barcode technology, we are able to identify each read’s origination. The disadvantage of multiplexing with barcoding is the extra cost of barcoding and labor. The cheaper alternative to pooling with multiplexing is pooling without multiplexing, which prevents us from identifying the origin of each read. In this study, we focused on pooling without multiplexing. By using comprehensive and thorough simulations, we tried to determine the effectiveness of estimating allele frequency from pooled sequencing data. 
In our simulation model we considered important factors of pooled sequencing, including overall depth, the average depth per sample, pooling variation, sampling variation, and targeted minor allele frequency (MAF). Another important issue we addressed in our simulation is the reference allele preferential bias, which is a phenomenon during alignment when there is preference toward the reference allele. We used real data to show the effect of reference allele bias and adjusted our simulation model accordingly. We describe our simulation model in detail and present the results from the simulation.

2. Materials and Methods

We designed a thorough simulation model to closely reflect the real-world pooled sequencing situation. Our simulation model includes notations which we have defined as follows: let $\hat{p}$ be the allele frequency estimated from pooled sequencing data, and let $p$ be the true allele frequency in the pool. Under the ideal assumption, all samples' contributions to the pool are equal. However, in practice, factors such as human error and library preparation variation can affect a sample's contribution to the pool. Very likely, each time a sample is added to the pool, an error is introduced. We let $e_i$ denote the error of sample $i$ during the pooling process; $e_i$ should follow a normal distribution $N(0, \sigma^2)$, where $\sigma^2$ denotes the variance of error in the pool. We assume that the amount of DNA added to the pool for each sample follows a normal distribution $N(c, \sigma^2)$, where $c$ is a constant and denotes the ideal constant contribution to the pool by each sample. The probability that a read is contributed by sample $i$ can be represented as $p_i = (c + e_i)/\sum_{j=1}^{k}(c + e_j)$, where $k$ denotes the number of samples in the pool. The contribution of each sample in the pool to a SNP can be modeled as a multinomial distribution $\mathrm{Multinomial}(D; p_1, \ldots, p_k)$, where $D$ equals the depth at this SNP and $r_i$ represents the reads contributed by sample $i$ for this SNP. The depth $D$ follows a Poisson distribution $\mathrm{Poisson}(\lambda)$, where $\lambda$ equals the average depth for the exome regions. For sequencing data, the reads at heterozygous SNPs should have an allele balance of 50%, meaning 50% of the reads should support the reference allele while the other 50% should support the alternative allele. Thus the reads that support the alternative allele should follow a binomial distribution $\mathrm{Binomial}(r_i, 0.5)$.

In our study we estimated the average depth $\lambda$ for the exome regions as follows: in general, the read output for 1 lane on an Illumina HiSeq 2000 sequencer is around 120 million reads. The most popular exome capture kits, including Illumina TruSeq, Agilent SureSelect, and NimbleGen SeqCap EZ, capture almost 100 percent of all known exons (about 30 million base pairs). Most capture kits claim a capture efficiency of at least 70 percent, but, in practice, it has been shown that the capture efficiency of all these capture kits is only around 50 percent [32], which implies that if a sample is sequenced for 120 million reads, only around 60 million reads will be aligned to exome regions. After filtering for mapping quality, the number of reads aligned to exome regions will be even smaller. However, to simplify, we ignored the reads that failed the mapping quality filter. There are about 180,000 exons [33]. Based on these figures, for exome sequencing on 1 Illumina HiSeq lane, the average depth is expected to be roughly 400.
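To make the sampling model just described concrete, here is a small simulation sketch (an editor's illustration in Python/NumPy, not code from the paper; the symbols c, sigma, k and lambda mirror the description above, the 0.483 allele-balance mean is taken from the paper's measurement of real data, and the way genotypes are assigned to samples is my own assumption, since the extracted text does not spell that step out):

import numpy as np

rng = np.random.default_rng(0)

def simulate_pooled_estimate(p, k=200, c=10.0, sigma=1.0, lam=400,
                             balance_mean=0.483, balance_sd=0.01):
    # One simulated measurement of a SNP whose pooled allele frequency is p.
    # k: pool size; c: ideal per-sample DNA contribution, actual ~ N(c, sigma^2);
    # lam: average exome depth, depth at this SNP ~ Poisson(lam);
    # balance_mean/sd: reference-allele bias at heterozygous sites.
    geno = rng.binomial(2, p, size=k)                 # alt-allele count per sample (assumption)
    contrib = np.clip(rng.normal(c, sigma, size=k), 0.0, None)
    probs = contrib / contrib.sum()                   # chance a read comes from sample i
    depth = rng.poisson(lam)
    reads = rng.multinomial(depth, probs)             # reads contributed by each sample
    alt = 0
    for g, r in zip(geno, reads):
        if g == 0 or r == 0:
            continue
        elif g == 2:
            alt += r                                  # homozygous alternative
        else:
            b = np.clip(rng.normal(balance_mean, balance_sd), 0.0, 1.0)
            alt += rng.binomial(r, b)                 # heterozygous, biased balance
    return alt / depth if depth > 0 else np.nan

p_true = 0.05
est = np.array([simulate_pooled_estimate(p_true) for _ in range(2000)])
rel_rmse = np.sqrt(np.nanmean((est - p_true) ** 2)) / p_true
print(f"mean estimate {np.nanmean(est):.4f}, relative RMSE {rel_rmse:.3f}")

The pool size, depth and number of repetitions used here are arbitrary; the paper's own simulations vary these systematically, as described next.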
To measure the accuracy of the allele frequency estimated from pooled sequencing data, we computed the relative root mean square error (RMSE) as follows:

relative RMSE = $\frac{1}{p}\sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{p}_i - p)^2}$,

where $n$ is the number of simulations we performed to estimate the target allele frequency $p$. In our simulations, we set $n = 10{,}000$. Unlike the traditional RMSE, we divided it by the target allele frequency to make the result relative to the allele frequency we were simulating, so we could compare RMSE for allele frequencies as small as 0.5% and as large as 50%.

Reference allele preferential bias is a phenomenon during alignment when there is preference toward the reference allele. Degner et al. described such bias in RNA-seq data [34]. To examine whether this bias also exists in DNAseq data, we measured allele balance (defined as reads that support the alternative allele divided by total reads) of three independent DNA sequencing datasets. The three datasets were sequenced at different facilities (Broad Institute, HudsonAlpha, Illumina), at different time points, and using different capture methods (Agilent SureSelect, Illumina TruSeq, and Array Based Capture). The theoretical allele balance for heterozygous SNPs should be around 50%. In real data, we observed that the mean allele balance for all heterozygous SNPs for all samples is 0.483 (range: 0.447–0.499) (Table 1). Thus, we modified our previously defined read distribution at heterozygous sites to $\mathrm{Binomial}(r_i, b)$, where the allele balance $b$ follows a normal distribution whose mean and variance are estimated from the empirical allele balance we observed in real data.

Three simulations were conducted to evaluate the accuracy of allele frequency estimation from pooled sequencing data. The detailed descriptions of the three simulations are as follows.

Simulation 1. The goal of Simulation 1 was to study the relationship between different levels of the pooling-error standard deviation $\sigma$ and relative RMSE under different pool sizes and different MAFs (MAF = 0.5%, 1%, 5%, 10%, 20%, 30%, 40%, and 50%). Each sample's DNA contribution to the pool follows a normal distribution $N(c, \sigma^2)$. For simulation purposes, we set an arbitrary value of $c = 10$ units; the actual value of $c$ does not affect the outcome of the simulations, because the simulation merely scales around it. To best represent the scenario in practice, we used several different standard deviation values for the distribution of sample contribution to the pool. For the ideal situation, we set $\sigma$ to a very small number; then, we increased $\sigma$ to 1, 2, and 4 (10%, 20%, and 40% of $c$) to see the effect of larger error variance on the accuracy of allele frequency estimation using pooled sequencing data. Each allele frequency was simulated 10,000 times.

Simulation 2. The goal of Simulation 2 was to study the relationship between depth and relative RMSE. The average depth of exome coverage can be estimated using the number of lanes. Instead of looking directly at average depth in the exome regions, we looked at average depth per sample (i.e., average depth divided by pool size). If the average depth of exome regions for the pool of 200 people is 600x, then the depth per sample is 3x. In this simulation, we varied the average depth per sample and the pool size, with MAF = 0.5%, 1%, 5%, 10%, 20%, 30%, 40%, and 50%. Each allele frequency was simulated 10,000 times.

Simulation 3. The goal of Simulation 3 was to determine the overall performance of a pooled exome sequencing study. In practice, we cannot measure a SNP 10,000 times and then compute the average allele frequency as we did in Simulations 1 and 2. We are limited to one measurement only at a given SNP.
It is important that we look at the overall performance too, rather than just at a single SNP. Based on the released data of the 1000 Genomes Project, we built an empirical MAF distribution. This distribution should represent an overall picture of the MAF distribution in the population. A typical exome study will yield 10,000–100,000 SNPs after filtering, with the number of SNPs heavily dependent on the number of samples sequenced in the study. Following the empirical distribution of the MAF, we randomly drew 10,000 SNPs from this distribution to simulate an exome sequencing dataset and computed an overall error rate, defined as the absolute difference between the estimated and the targeted allele frequency. We further repeated this simulation 1000 times and computed the median error rates.

3. Results

Simulation 1. We assume that each sample's DNA contribution to the pool follows a normal distribution $N(c, \sigma^2)$. In an ideal situation, $\sigma$ is small, and if we fix overall depth, the pool size does not make a significant difference for the RMSE. For example, in an ideal situation, for MAF = 50%, the relative RMSEs for the four pool sizes equal 0.023, 0.024, 0.024, and 0.020, respectively. However, if we increase $\sigma$, pools of greater size tend to have lower RMSEs. For example, when $\sigma$ is increased to 4, for MAF = 50%, the relative RMSEs for the four pool sizes equal 0.028, 0.025, 0.023, and 0.022, respectively. Increasing $\sigma$ clearly also increased the relative RMSE for all MAFs and for all pool sizes. For example, for a fixed pool size and MAF = 50%, across the four values of $\sigma$ the relative RMSEs are 0.020, 0.021, 0.023, and 0.028, respectively. Also, lower MAFs tended to have higher relative RMSEs than higher MAFs. For example, in an ideal situation, for a given pool size, the relative RMSEs for MAF = 0.5%, 1%, 5%, 10%, 20%, 30%, 40%, and 50% are equal to 0.289, 0.202, 0.088, 0.061, 0.041, 0.031, 0.030, and 0.020, respectively. The results of Simulation 1 can be viewed in Figure 1.

Simulation 2. In this simulation, the goal was to examine the relationship between average depth per sample and pool size. We found that, with the same average depth per sample, higher pool sizes will generate lower relative RMSEs. Also, as the MAF increases, the relative RMSE decreases. For example, for MAF = 50% at a fixed average depth per sample, the relative RMSEs for the four pool sizes are 0.071, 0.050, 0.036, and 0.025, respectively. If we could infinitely increase pool size or average depth per sample while fixing the other, the RMSE would reach zero. The result of Simulation 2 can be viewed in Figure 2. In our study, we performed simulations at each MAF 10,000 times. However, in practice, we do not have the resources to measure a SNP 10,000 times and then take the average. In real exome sequencing, each SNP is only measured one time. Table 2 shows the quantile information for simulating MAF = 0.5%, 1%, 5%, 10%, 20%, 30%, 40%, and 50% 10,000 times. The mean and median of the estimated MAF are very close to the targeted MAF value. When MAF increases, the variance also increases. If we account for relative RMSE, the simulations still produced more accurate results for larger MAFs.

Simulation 3. In this simulation, we simulated the scenario of pooled exome sequencing. Using data from the 1000 Genomes Project, which contains genotyping data from 1092 individuals, as prior information, we built an empirical distribution of MAF (Figure 3). Based on this empirical distribution, we simulated the pooled exome sequencing at a fixed pool size 1000 times and computed the median error rate for each simulation (Figure 4).
The results clearly indicate that higher depth is required to produce an acceptable error rate (>5%). For standard exome sequencing, pooled DNA from 1000 subjects will require roughly 16 Illumina HiSeq lanes to produce results with an acceptable error rate. Financial Implication. The ultimate goal of pooling is to ease the financial burden on large association studies. Based on the most up-to-date pricing information on NGS, we compared the total cost of conducting association studies using pooling at different pool sizes with individual sequencing using Illumina HighSeq 2000 sequencer, which contains 2 flow cells, and each flow cell contains 16 lanes. Table 3 shows the price difference between pooling and individual sequencing. The savings using pooling is more substantial when pool sample size is large. When using all 16 lanes, the savings for 200 samples is roughly 500% over individual sequencing and, for 1000 samples, a 2300% saving. 4. Discussion Our simulation showed that there are several important factors to consider when designing a pooling study. Those factors include sample size, targeted MAF, and, most importantly, the depth. The sample size directly affects the ability to detect rare SNPs. Larger pool size will increase the accuracy of MAF estimation with the same per sample depth but will not have much effect with the same overall depth. Similarly, with the same pool size, increasing depth will decrease relative RMSE. Our simulation also showed that pooled sequencing is not ideal for estimating the MAF of rare SNPs. The relative RMSE is much higher for SNPs with MAF < 1% compared to SNPs with MAF > 5% (Figure 1). Sequencing pooled DNA will ease financial burdens and make large association possible. At the same time, however, pooling introduces additional errors. A majority of the errors are caused by the unequal representation of each sample’s DNA in the pool. This unequal representation could be due to human or machine error, which we have considered in our simulation. There are other factors which can also cause the unequal representation, such as a sample’s DNA quality and variation introduced in the PCR/amplification stage. Unfortunately, we can only minimize such errors and variation using more sophisticated lab techniques. Even if every sample is equally represented in the pool, the sequenced data still do not truly reflect the equality due to sampling variance. Based on our simulation results, when designing a pooling study, we recommend the following: larger pool size is better, and higher depth is better. More elaborately, it is better to keep balance between pool size and depth. We recommend keeping the average depth per sample at 10 minimum if rare SNPs are not of interest; otherwise, average depth per sample at 20 minimum is highly recommended. The authors would like to thank Peggy Schuyler and Margot Bjoring for their editorial support. 1. M. I. McCarthy and J. N. Hirschhorn, “Genome-wide association studies: potential next steps on a genetic journey,” Human Molecular Genetics, vol. 17, no. 2, pp. R156–R165, 2008. View at Publisher · View at Google Scholar · View at Scopus 2. J. McClellan and M. C. King, “Genetic heterogeneity in human disease,” Cell, vol. 141, no. 2, pp. 210–217, 2010. View at Publisher · View at Google Scholar · View at Scopus 3. D. B. Hancock, I. Romieu, M. Shi et al., “Genome-wide association study implicates chromosome 9q21.31 as a susceptibility locus for asthma in Mexican children,” PLoS Genetics, vol. 5, no. 8, Article ID e1000623, 2009. 
View at Publisher · View at Google Scholar · View at Scopus 4. F. A. Wright, L. J. Strug, V. K. Doshi et al., “Genome-wide association and linkage identify modifier loci of lung disease severity in cystic fibrosis at 11p13 and 20q13.2,” Nature Genetics, vol. 43, no. 6, pp. 539–546, 2011. View at Publisher · View at Google Scholar · View at Scopus 5. E. Einarsdottir, M. R. Bevova, A. Zhernakova et al., “Multiple independent variants in 6q21-22 associated with susceptibility to celiac disease in the Dutch, Finnish and Hungarian populations,” European Journal of Human Genetics, vol. 19, no. 6, pp. 682–686, 2011. View at Publisher · View at Google Scholar · View at Scopus 6. T. A. Manolio, F. S. Collins, N. J. Cox et al., “Finding the missing heritability of complex diseases,” Nature, vol. 461, no. 7265, pp. 747–753, 2009. View at Publisher · View at Google Scholar · View at Scopus 7. W. Ji, J. N. Foo, B. J. O'Roak et al., “Rare independent mutations in renal salt handling genes contribute to blood pressure variation,” Nature Genetics, vol. 40, no. 5, pp. 592–599, 2008. View at Publisher · View at Google Scholar · View at Scopus 8. S. Nejentsev, N. Walker, D. Riches, M. Egholm, and J. A. Todd, “Rare variants of IFIH1, a gene implicated in antiviral responses, protect against type 1 diabetes,” Science, vol. 324, no. 5925, pp. 387–389, 2009. View at Publisher · View at Google Scholar · View at Scopus 9. S. Romeo, W. Yin, J. Kozlitina et al., “Rare loss-of-function mutations in ANGPTL family members contribute to plasma triglyceride levels in humans,” The Journal of Clinical Investigation, vol. 119, no. 1, pp. 70–79, 2009. View at Publisher · View at Google Scholar · View at Scopus 10. M. A. Rivas, M. Beaudoin, A. Gardet, et al., “Deep resequencing of GWAS loci identifies independent rare variants associated with inflammatory bowel disease,” Nature Genetics, vol. 43, pp. 1066–1073, 2011. 11. R. M. Durbin, D. L. Altshuler, G. R. Abecasis, et al., “A map of human genome variation from population-scale sequencing,” Nature, vol. 467, pp. 1061–1073, 2010. 12. N. Arnheim, C. Strange, and H. Erlich, “Use of pooled DNA samples to detect linkage disequilibrium of polymorphic restriction fragments and human disease: studies of the HLA class II loci,” Proceedings of the National Academy of Sciences of the United States of America, vol. 82, no. 20, pp. 6970–6974, 1985. View at Scopus 13. R. W. Michelmore, I. Paran, and R. V. Kesseli, “Identification of markers linked to disease-resistance genes by bulked segregant analysis: a rapid method to detect markers in specific genomic regions by using segregating populations,” Proceedings of the National Academy of Sciences of the United States of America, vol. 88, no. 21, pp. 9828–9832, 1991. View at Scopus 14. P. Pacek, A. Sajantila, and A. C. Syvanen, “Determination of allele frequencies at loci with length polymorphism by quantitative analysis of DNA amplified from pooled samples,” PCR Methods and Applications, vol. 2, no. 4, pp. 313–317, 1993. View at Scopus 15. L. F. Barcellos, W. Klitz, L. L. Field et al., “Association mapping of disease loci, by use of a pooled DNA genomic screen,” American Journal of Human Genetics, vol. 61, no. 3, pp. 734–747, 1997. View at Scopus 16. J. Daniels, P. Holmans, N. Williams et al., “A simple method for analyzing microsatellite allele image patterns generated from DNA pools and its application to allelic association studies,” American Journal of Human Genetics, vol. 62, no. 5, pp. 1189–1197, 1998. 
View at Publisher · View at Google Scholar · View at Scopus 17. S. H. Shaw, M. M. Carrasquillo, C. Kashuk, E. G. Puffenberger, and A. Chakravarti, “Allele frequency distributions in pooled DNA samples: applications to mapping complex disease genes,” Genome Research, vol. 8, no. 2, pp. 111–123, 1998. View at Scopus 18. M. Krumbiegel, F. Pasutto, U. Schlötzer-Schrehardt et al., “Genome-wide association study with DNA pooling identifies variants at CNTNAP2 associated with pseudoexfoliation syndrome,” European Journal of Human Genetics, vol. 19, no. 2, pp. 186–193, 2011. View at Publisher · View at Google Scholar · View at Scopus 19. V. C. Sheffield, R. Carmi, A. Kwitek-Black et al., “Identification of a Bardet—Biedl syndrome locus on chromosome 3 and evaluation of an efficient approach to homozygosity mapping,” Human Molecular Genetics, vol. 3, no. 8, pp. 1331–1335, 1994. View at Scopus 20. R. Carmi, T. Rokhlina, A. E. Kwitek-Black et al., “Use of a DNA pooling strategy to identify a human obesity syndrome locus on chromosome 15,” Human Molecular Genetics, vol. 4, no. 1, pp. 9–13, 1995. View at Scopus 21. A. Nystuen, P. J. Benke, J. Merren, E. M. Stone, and V. C. Sheffield, “A cerebellar ataxia locus identified by DNA pooling to search for linkage disequilibrium in an isolated population from the Cayman Islands,” Human Molecular Genetics, vol. 5, no. 4, pp. 525–531, 1996. View at Publisher · View at Google Scholar · View at Scopus 22. D. A. Scott, R. Carmi, K. Elbedour, S. Yosefsberg, E. M. Stone, and V. C. Sheffield, “An autosomal recessive nonsyndromic-hearing-loss locus identified by DNA pooling using two inbred bedouin kindreds,” American Journal of Human Genetics, vol. 59, no. 2, pp. 385–391, 1996. View at Scopus 23. C. I. Amos, M. L. Frazier, and W. Wang, “DNA pooling in mutation detection with reference to sequence analysis,” American Journal of Human Genetics, vol. 66, no. 5, pp. 1689–1692, 2000. View at Publisher · View at Google Scholar · View at Scopus 24. P. Benaglio, T. L. Mcgee, L. P. Capelli, S. Harper, E. L. Berson, and C. Rivolta, “Next generation sequencing of pooled samples reveals new SNRNP200 mutations associated with retinitis pigmentosa,” Human Mutation, vol. 32, no. 6, pp. E2246–E2258, 2011. View at Publisher · View at Google Scholar · View at Scopus 25. A. A. Out, I. J. H. M. van Minderhout, J. J. Goeman et al., “Deep sequencing to reveal new variants in pooled DNA samples,” Human Mutation, vol. 30, no. 12, pp. 1703–1712, 2009. View at Publisher · View at Google Scholar · View at Scopus 26. M. A. Rivas, M. Beaudoin, A. Gardet, et al., “Deep resequencing of GWAS loci identifies independent rare variants associated with inflammatory bowel disease,” Nature Genetics, vol. 43, pp. 1066–1073, 2011. 27. Y. Huang, D. A. Hinds, L. Qi, and R. L. Prentice, “Pooled versus individual genotyping in a breast cancer genome-wide association study,” Genetic Epidemiology, vol. 34, no. 6, pp. 603–612, 2010. View at Publisher · View at Google Scholar · View at Scopus 28. S. J. Docherty, L. M. Butcher, L. C. Schalkwyk, and R. Plomin, “Applicability of DNA pools on 500 K SNP microarrays for cost-effective initial screens in genomewide association studies,” BMC Genomics, vol. 8, article 214, 2007. View at Publisher · View at Google Scholar · View at Scopus 29. M. Harakalova, I. J. Nijman, J. 
Medic et al., “Genomic DNA pooling strategy for next-generation sequencing-based rare variant discovery in abdominal aortic aneurysm regions of interest—challenges and limitations,” Journal of Cardiovascular Translational Research, vol. 4, no. 3, pp. 271–280, 2011. View at Publisher · View at Google Scholar · View at Scopus 30. A. G. Day-Williams, K. McLay, E. Drury et al., “An evaluation of different target enrichment methods in pooled sequencing designs for complex disease association studies,” PLoS One, vol. 6, Article ID e26279, 2011. 31. X. Chen, J. B. Listman, F. J. Slack, J. Gelernter, and H. Zhao, “Biases and errors on Allele frequency estimation and disease association tests of next-generation sequencing of Pooled samples,” Genetic Epidemiology, vol. 36, no. 6, pp. 549–560, 2012. 32. Y. Guo, J. Long, J. He, et al., “Exome sequencing generates high quality data in non-target regions,” BMC Genomics, vol. 13, article 194, 2012. 33. S. B. Ng, E. H. Turner, P. D. Robertson et al., “Targeted capture and massively parallel sequencing of 12 human exomes,” Nature, vol. 461, no. 7261, pp. 272–276, 2009. View at Publisher · View at Google Scholar · View at Scopus 34. J. F. Degner, J. C. Marioni, A. A. Pai et al., “Effect of read-mapping biases on detecting allele-specific expression from RNA-sequencing data,” Bioinformatics, vol. 25, no. 24, pp. 3207–3212, 2009. View at Publisher · View at Google Scholar · View at Scopus
{"url":"http://www.hindawi.com/journals/tswj/2013/895496/","timestamp":"2014-04-17T17:59:19Z","content_type":null,"content_length":"143177","record_id":"<urn:uuid:a0f54c04-39dc-4a5b-a4e7-ac4372c2eb77>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Ally Bank's New Online CD Ladder Tool - CD Ladders Still Useful? Ally Bank launched an online tool yesterday which is intended to help customers plan a CD ladder. As described at the Ally Bank blog, the tool "helps you estimate the savings created by your CD ladder by guiding you through the projected timeline of your ladder and giving step-by-step instructions for when you need to take action." I gave Ally's CD ladder tool a try, and it works well in helping you plan a basic CD ladder with Ally Bank CDs. CD Ladders Still Make Sense? In May I asked the question if CD ladders still made sense in today's low interest rate world. For those who responded in the poll, 42% said they are still using CD ladders. However, 22% said they have made changes to their CD ladders. I thought it would be interesting to compare two other savings strategies with a standard CD ladder as described by Ally Bank. Five 5-Year CDs Instead of a 5-year CD Ladder One strategy is to just put all of your money into Ally Bank 5-year CDs. Instead of starting with a 1-, 2-, 3-, 4- and 5-year CDs, you would start with 5 equal 5-year CDs. The benefit of this is that the 5-year CDs have higher rates. So you'll maximize your returns. The downside as compared to a standard CD ladder is that you will have to worry about early withdrawal penalties at the annual intervals. As we know, the current Ally Bank EWP is small, so this can be a better deal even with the penalties. One potential problem with using all 5-year CDs is that if you don't break the CDs early, all of them will mature at the same time in 5 years. One benefit of a CD ladder is that the CDs renew in regular intervals (like every year). That eliminates the need to guess about future rates. If rates go up, you slowly benefit with higher rates as you renew each CD. How much more interest will you have after 10 years if you just invest in 5-year CDs instead of a CD ladder with 1-, 2-, 3-, 4- and 5-year CDs? If you assume a total $10,000 deposit and the 5-year CD rate remains the same for the next 9 years, the final balances are: • A) Total from 5-yr CD ladder after 10 yrs: $11,737 • B) Total from 5-yr CDs after 10 yrs (early closures to mimic ladder): $11,799 • C) Total from 5-yr CDs after 10 yrs (no early closures): $11,824 As expected, the CD ladder (A) results in the smallest total due to lower rates of the short-term CDs. Using all 5-year CDs and mimicking short-term CDs by closing those 5-year CDs after 1, 2, 3 and 4 years (B) results in a slightly higher total return. Finally, using all 5-year CDs and keeping them until maturity and renewing them (C) results in the highest total return. However, the differences between these 3 strategies are not much. So if you want to keep things simple, you may want to consider a basic CD ladder. Keeping Everything in a Savings Account Another simple strategy is to just keep everything in a savings or money market account. This gives you maximum liquidity. It also gives you more flexibility if hot deals become available. The downside is that may result in the lowest return. If you assume today's Ally Bank savings account yield of 0.95% remains the same for the next 10 years, the total return for an initial $10,000 deposit is $10,992. My Take on CD Ladders It might seem unwise to renew CDs that mature today into new long-term CDs which have such low rates. However, many had those same concerns two years ago. I'm sure many readers who opened long-term CDs two years ago are glad about their choices. 
The problem is that no one knows how interest rates will change. The CD ladder provides a strategy that doesn't require you to guess about future rates. CDs don't have to make up one's entire portfolio. For the fixed-income part of your portfolio that you want to keep safe, CDs and CD ladders can still make sense. For example, Allan Roth, who writes for CBS News, has written that he has "roughly" 70% of his "fixed-income portfolio in high-paying CDs that have easy early withdrawal penalties."

CD Ladder Overview, Strategies and Tips

We have more CD ladder info in our CD ladder overview with an infographic. There are ways you can tweak a CD ladder. I described some of these ways in my post on Alternatives to CD Ladders. There are issues that can mess up your CD ladder and reduce your CD earnings. I reviewed some of these in my posts Issues to Consider for Your CD Ladder and 10 Gotchas to Avoid for Bank CD Investors.

6 comments.

Comment #1 by Anonymous posted on
It looks to me like trying to build a ladder out of a maturing Ally CD won't work. The bank currently offers a 25 basis point "loyalty reward" for CD renewals but, if I correctly interpret past e-mails I've received from it, the bonus applies only to the account number of the maturing CD. So, if I try to take a $100k maturing CD and break it into a ladder of 5 $20k CDs, I'll only get the loyalty reward added to the $20k CD having the same account number as the original CD. Does anyone have a different take based on experience with Ally?

Comment #2 by Kaight posted on
Dunno about that "buy all five year CDs" strategy. I'm afraid that would only work in very limited circumstances . . . for example . . . if the Federal Reserve promised "near zero" short rates ad

Comment #3 by Bozo posted on
Building a 5-year CD ladder from scratch is something I posited over at the Bogleheads forum. It is not terribly complicated, and involves shifting bond fund assets to 5-year CDs over a 5 year period. As low as CD rates are, the rates on 5-year CDs still tend to outpace the SEC yields on similar bond funds. 2% is pretty much the "going rate" these days. As in, "you're not paying 2%, I'm going."

Comment #4 by Anonymous posted on
IMO CD ladders still work. I have one running through January 2021 and it is currently paying over 5%. It is composed of some very unequal rungs and maturities, which has been because deals on CDs have not been around as much. A big help is that we never cash in early and basically stick with Pentagon FCU and Navy FCU. Just purchased $8K from NFCU paying 4% for one year, and then it will mature into a 3% (current rate) one year CD - so in this environment it is not too bad to get an $8K CD at a 2-year blended APY of 3.5%.

Comment #5 by lou posted on
Anonymous #4, I wasn't aware that Navy Federal was currently paying 3% for one year CDs. Can you show me any evidence of this?

Comment #6 by lou posted on
Anonymous 4, are you talking about the following: "Limit one Special EasyStart Certificate per member. This offer, including the stated APY, is effective March 28, 2011. $3,000 maximum balance. Certificate owner(s) age 18 and older must have Direct Deposit of Net Pay (minimum $300 per Direct Deposit)" So, I am not sure why you think you will get 3% one year from now for $8,000 of certificates. According to this language, it would only be available for a $3,000 certificate, and you would only qualify if this offer is still in effect one year from now. You also have to do a $300 direct deposit into a Navy Fed checking account.
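To make the ladder-versus-lump arithmetic in the post above concrete, here is a small compound-interest sketch (an editor's addition in Python, not from the original post; the post does not list Ally's term-by-term CD rates, so the 5-year APY below is simply back-solved from the post's $11,824 figure, and the savings APY is the quoted 0.95%):

def future_value(principal, apy, years):
    # Compound once per year at a fixed APY.
    return principal * (1 + apy) ** years

five_year_apy = 0.0169   # assumption: back-solved from the post's $11,824 example
savings_apy = 0.0095     # the post's Ally savings yield

print(f"All 5-year CDs, 10 years:  ${future_value(10_000, five_year_apy, 10):,.0f}")   # ~ $11,824
print(f"Savings account, 10 years: ${future_value(10_000, savings_apy, 10):,.0f}")     # ~ $10,992

The post's two ladder figures ($11,737 and $11,799) additionally depend on the unstated 1- to 4-year CD rates and on Ally's early withdrawal penalty, so they are not reproduced here.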
{"url":"http://www.depositaccounts.com/blog/2012/10/ally-banks-new-online-cd-ladder-tool-cd-ladders-still-useful.html","timestamp":"2014-04-18T20:45:12Z","content_type":null,"content_length":"30918","record_id":"<urn:uuid:370f1cb3-a6f6-4b6c-9d72-ea0569facb26>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
right triangle Re: right triangle You use SOHCAHTOA. The right angle is opposite the hypotenuse. Last edited by bobbym (2013-03-14 15:02:49) In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=257298","timestamp":"2014-04-19T14:37:33Z","content_type":null,"content_length":"9583","record_id":"<urn:uuid:2e16515d-4d35-4229-9c89-ede69d0e53e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
Power Factor Calculations

Power factor is the ratio between the KW and the KVA drawn by an electrical load, where the KW is the actual load power and the KVA is the apparent load power. It is a measure of how effectively the current is being converted into useful work output, and more particularly is a good indicator of the effect of the load current on the efficiency of the supply system. There are two types of power factor, displacement power factor and distortion power factor. Only displacement power factor can be corrected by the addition of capacitors.

Displacement Power Factor. The line current comprises two components of current: a real component indicating work current, and a reactive component which is 90 degrees out of phase. The reactive current indicates either inductive or capacitive current and does not do any work. The real current, or in-phase current, generates power (KW) in the load; the reactive current does not generate power in the load. The effect of the reactive current is measured in KVARs. The composite line current is measured in KVA. The vectors can be represented as two equivalent triangles, one triangle being the real current, the reactive current and the composite (line) current. The cosine of the angle between the line current phasor and the real current represents the power factor. The second, identical triangle is made up of the KW, KVA and KVAR vectors. For a given power factor and KVA (line current), the KVAR (reactive current) can be calculated as the KVA times the sine of the angle between the KVA and KW.

Three phase calculations:
KVA = Line Current x Line Voltage x sqrt(3) / 1000 = I x V x 1.732 / 1000
KW = True Power
pf = Power Factor = Cos(Ø)
KW = KVA x pf = V x I x sqrt(3) x pf / 1000
KVAR = KVA x Sine(Ø) = KVA x sqrt(1 - pf x pf)

Single phase calculations:
KVA = Line Current x Phase Voltage / 1000 = I x V / 1000
KW = True Power
pf = Power Factor = Cos(Ø)
KW = KVA x pf = V x I x pf / 1000
KVAR = KVA x Sine(Ø) = KVA x sqrt(1 - pf x pf)

To calculate the correction to correct a load to unity, measure the KVA and the displacement power factor, calculate the KVAR as above, and you have the required correction. To calculate the correction from a known pf to a target pf, first calculate the KVAR in the load at the known power factor, then calculate the KVAR in the load for the target power factor; the required correction is the difference between the two. i.e.

Measured Load Conditions: KVA = 560, pf = 0.55, Target pf = 0.95
(1) KVAR = KVA x sqrt(1 - pf x pf) = 560 x sqrt(1 - 0.55 x 0.55) = 560 x 0.835 = 467.7 KVAR
(2) KVAR = KVA x sqrt(1 - pf x pf) = 560 x sqrt(1 - 0.95 x 0.95) = 560 x 0.3122 = 174.86 KVAR
(3) Correction required to correct from 0.55 to 0.95 is (1) - (2) = 292.8 KVAR (= 300 KVAR)

To calculate the reduction in line current or KVA by the addition of power factor correction for a known initial KVA, power factor and target power factor, we first calculate the KW from the known KVA and power factor. From that KW and the target power factor, we can calculate the new KVA (or line current). i.e.

Initial KVA = 560, Initial pf = 0.55, Target pf = 0.95
(1) KW = KVA x pf = 560 x 0.55 = 308 KW
(2) KVA = KW / pf = 308 / 0.95 = 324 KVA
=> KVA reduction from 560 KVA to 324 KVA => Current reduction to about 58% (roughly a 42% reduction)
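As a worked-code companion to the example above (an editor's addition in Python, not part of the original page): the first function follows the page's recipe literally and reproduces the 292.8 KVAR figure; the second uses the constant-KW sizing formula Qc = KW x (tan φ1 - tan φ2), the more common textbook form, which gives a larger answer for the same load because it recomputes the target-side KVAR from the reduced KVA rather than from the original 560 KVA.

import math

def kvar_page_method(kva, pf_initial, pf_target):
    # The page's recipe: KVAR at both power factors computed from the same
    # measured KVA; the correction is the difference.
    q1 = kva * math.sqrt(1.0 - pf_initial ** 2)
    q2 = kva * math.sqrt(1.0 - pf_target ** 2)
    return q1 - q2

def kvar_constant_kw(kva, pf_initial, pf_target):
    # Constant-KW form: the load KW does not change when capacitors are added,
    # so the target-side KVAR is KW * tan(phi_target).
    kw = kva * pf_initial
    return kw * (math.tan(math.acos(pf_initial)) - math.tan(math.acos(pf_target)))

print(f"Page method:        {kvar_page_method(560, 0.55, 0.95):.1f} KVAR")    # ~292.8
print(f"Constant-KW method: {kvar_constant_kw(560, 0.55, 0.95):.1f} KVAR")    # ~366.4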
{"url":"http://www.lmphotonics.com/pwrfact1.htm","timestamp":"2014-04-20T00:40:18Z","content_type":null,"content_length":"20458","record_id":"<urn:uuid:aefd49f7-8add-4a2e-922b-9c9c72d9e06b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of elliptic-function

In complex analysis, an elliptic function is a meromorphic function defined on the complex plane which is periodic in two directions (a doubly-periodic function). Historically, elliptic functions were discovered as inverse functions of elliptic integrals; these in turn were studied in connection with the problem of the arc length of an ellipse, whence the name derives.

Formally, an elliptic function is a meromorphic function f defined on C for which there exist two non-zero complex numbers a and b, whose ratio a/b is not real, such that f(z + a) = f(z + b) = f(z) for all z for which f(z) is defined. From this it follows that f(z + ma + nb) = f(z) for all z in C and all integers m and n.

In developments of the theory of elliptic functions, modern authors mostly follow Karl Weierstrass: the notations of Weierstrass's elliptic functions based on his $\wp$-function are convenient, and any elliptic function can be expressed in terms of these. Weierstrass became interested in these functions as a student of Christoph Gudermann, a student of Carl Friedrich Gauss. The elliptic functions introduced by Carl Jacobi, and the auxiliary theta functions (not doubly-periodic), are more complicated but important both for the history and for general theory. The primary difference between these two theories is that the Weierstrass functions have second-order and higher-order poles located at the corners of the periodic lattice, whereas the Jacobi functions have simple poles. The development of the Weierstrass theory is easier to present and understand, having fewer complications. More generally, the study of elliptic functions is closely related to the study of modular functions and modular forms, a relationship proven by the modularity theorem. Examples of this relationship include the j-invariant, the Eisenstein series and the Dedekind eta function.

Any complex number ω such that f(z + ω) = f(z) for all z is called a period of f. If two periods a and b are such that any other period ω can be written as ω = ma + nb with integers m and n, then a and b are called fundamental periods. Every elliptic function has a pair of fundamental periods, but this pair is not unique, as described below.

If a and b are fundamental periods describing a lattice, then exactly the same lattice can be obtained by the fundamental periods a' and b', where a' = p a + q b and b' = r a + s b, with p, q, r and s integers satisfying p s − q r = 1. That is, the matrix $\begin{pmatrix} p & q \\ r & s \end{pmatrix}$ has determinant one, and thus belongs to the modular group. In other words, if a and b are fundamental periods of an elliptic function, then so are a' and b'.

If a and b are fundamental periods, then any parallelogram with vertices z, z + a, z + b, z + a + b is called a fundamental parallelogram. Shifting such a parallelogram by integral multiples of a and b yields a copy of the parallelogram, and the function f behaves identically on all these copies, because of the periodicity.

The number of poles in any fundamental parallelogram is finite (and the same for all fundamental parallelograms). Unless the elliptic function is constant, any fundamental parallelogram has at least one pole, a consequence of Liouville's theorem. The sum of the orders of the poles in any fundamental parallelogram is called the order of the elliptic function. The sum of the residues of the poles in any fundamental parallelogram is equal to zero, so in particular no elliptic function can have order one.
The number of zeros (counted with multiplicity) in any fundamental parallelogram is equal to the order of the elliptic function. The set of all elliptic functions with the same fundamental periods forms a field. The derivative of an elliptic function is again an elliptic function, with the same periods. The Weierstrass elliptic function $\wp$ is the prototypical elliptic function, and in fact, the field of elliptic functions with respect to a given lattice is generated by $\wp$ and its derivative $\wp'$.
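For reference, the Weierstrass $\wp$-function named above is conventionally defined by the following lattice sum (a standard definition added here for context; it does not appear in the original entry):

$$\wp(z) = \frac{1}{z^2} + \sum_{\omega \in \Lambda \setminus \{0\}} \left( \frac{1}{(z-\omega)^2} - \frac{1}{\omega^2} \right), \qquad \Lambda = \{\, m a + n b : m, n \in \mathbb{Z} \,\},$$

where a and b are the fundamental periods. It satisfies the differential equation $(\wp')^2 = 4\wp^3 - g_2\wp - g_3$, with the invariants $g_2$ and $g_3$ determined by the lattice $\Lambda$.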
{"url":"http://www.reference.com/browse/elliptic-function","timestamp":"2014-04-18T17:12:11Z","content_type":null,"content_length":"82158","record_id":"<urn:uuid:690027ee-f19c-42fc-9aa3-223d8ecdef5d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert liters to barrel [US, petroleum] - Conversion of Measurement Units

›› Convert liter to barrel [US, petroleum]

›› More information from the unit converter

How many liters in 1 barrel [US, petroleum]? The answer is 158.9872956. We assume you are converting between liter and barrel [US, petroleum]. You can view more details on each measurement unit: liters or barrel [US, petroleum]. The SI derived unit for volume is the cubic meter. 1 cubic meter is equal to 1000 liters, or 6.28981074385 barrel [US, petroleum]. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between liters and barrels.

›› Definition: Litre

The litre (spelled liter in American English and German) is a metric unit of volume. The litre is not an SI unit, but (along with units such as hours and days) is listed as one of the "units outside the SI that are accepted for use with the SI." The SI unit of volume is the cubic metre (m³).

›› Definition: Barrel

This unit is used in North America for crude oil or other petroleum products. Elsewhere oil is more commonly measured in cubic metres (m³) and less commonly in tonnes. The Standard Oil Company shipped its oil in barrels that always contained exactly 42 U.S. gallons.
{"url":"http://www.convertunits.com/from/liters/to/barrel+%5BUS,+petroleum%5D","timestamp":"2014-04-18T03:37:29Z","content_type":null,"content_length":"20726","record_id":"<urn:uuid:acfc2eac-2208-4943-8419-1719d2c89f73>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the classification of angles 1 and 2? (Picture in comments)
{"url":"http://openstudy.com/updates/505c700fe4b0583d5cd116a3","timestamp":"2014-04-18T03:28:05Z","content_type":null,"content_length":"49967","record_id":"<urn:uuid:523d1b57-0892-47ad-91a8-e0c7dcde79d6>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
How Image Stacking Works

© 1998 - 2013 Keith Wiley. All material on this website is copyrighted and may not be used without first obtaining permission from the author. Thank you.

Image stacking is a popular method of image processing amongst astrophotographers, although the exact same technique can be applied to any situation where identical images can be captured over a period of time, in other words in situations where the scene isn't changing due to motion or varying light and shadow. Astrophotography happens to be perfectly suited in this manner, in that astronomical objects are effectively static for reasonable durations of time. In the case of deep sky objects, the objects are virtually permanent. In the case of planetary images, they change slowly enough that a series of images spanning at least a few minutes can be acquired without observable motion.

The first time I witnessed the effects of image stacking, I was completely blown away by the result. It seems almost magical that so much real information can be gleaned from such horrible original images. But of course the real explanation is quite simple to understand. Image stacking does two very different things at once. It increases the signal-to-noise ratio and increases the dynamic range. I will discuss each of these separately.

One point of confusion that should be resolved early on is whether there is a difference between averaging and summing. Since this remains an issue of contention I can only claim that my explanation makes sense. If one doesn't follow my explanation, then one might disagree with me. The short answer is that they are identical. It doesn't make any difference whether you stack into a sum or an average. This claim assumes that an average is represented using floating point values however. If you average into integer values then you have thrown away a lot of detailed information. More precisely, I maintain that there is a continuous range of representations of a stack varying between a sum and an average, which simply consist of dividing the sum by any number between one and the number of images stacked. In this manner, it is obvious that summing and averaging are identical and contain the same fundamental information.

Now, in order to actually view a stack, the values must somehow be transformed into integer components of an image's brightness at each pixel. This isn't easier or harder to accomplish with a sum or an average, as neither properly fits the necessary requirements of standard image representations. The sum contains values that are way off the top of the maximum possible value that can be represented, and the average contains floating point values which cannot be immediately interpreted as image pixels without conversion to integers first. The solution in both cases is the exact same mathematical operation. Simply find the necessary divisor to represent the brightest pixel in the stack without saturating, and then divide all pixels in the image by that divisor and convert the divided values to integers. Again, since the transformation is identical in both cases, clearly both forms contain the same information. The only reason I harp on this so much is that it must be properly understood before one can really comprehend what stacking is doing, which is actually extremely simple once you get down to it.
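The equivalence of summing and averaging is easy to demonstrate numerically. A small sketch in Python with NumPy (invented frame data, not the author's code):

import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(100.0, 5.0, size=(32, 64, 64))  # 32 fake 64x64 raw frames

total = frames.sum(axis=0)       # the "sum" stack
average = frames.mean(axis=0)    # the "average" stack, kept in floating point

# Normalize each so its brightest pixel becomes 1.0, as you would for display.
# The results agree: the two stacks differ only by a constant divisor.
print(np.allclose(total / total.max(), average / average.max()))  # True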
The classic application of image stacking is to increase the signal-to-noise ratio (snr). This sounds technical and confusing at first, but it is really simple to understand. Let's look at it in parts and then see how the whole thing works. The first thing you must realize is that this is a pixel-by-pixel operation. Each pixel is operated on completely independently of all other pixels. For this reason, the simplest way to understand what is going on is to imagine that your image is only a single pixel wide and tall. I realize this is strange, but bear with me.

So your image is a single pixel. What is that pixel in each of your raw frames? It is the "signal", real photons that entered the telescope and accumulated in the CCD sensor of the camera, plus the thermal noise of the CCD and the bias along with any flatfield effects... plus some random noise thrown in for good measure. It is this last element of noise that we are concerned with. The other factors can be best handled through operations such as darkframe subtraction and flatfield division. However, it is obvious that after performing such operations on a raw, we still don't have a beautiful image, at least compared to what can be produced by stacking. Why is this? The problem is that last element of random noise.

Imagine the following experiment: pick random numbers (positive and negative) from a Gaussian distribution centered at zero. Because the distribution is Gaussian, the most likely value is exactly zero, but on each trial (one number picked), you will virtually never get an actual zero. However, what happens if you take a whole lot of random numbers and average them? Clearly, the average of your numbers approaches zero more and more closely, the more numbers you pick, right? This occurs for two reasons. First, since the Gaussian is symmetrical and centered at zero, you have a one in two chance of picking a positive or negative number on each trial. On top of that, you have a greater chance of picking numbers with a low absolute value due to the shape of the Gaussian. When combined, these two reasons demonstrate clearly that the average of a series of randomly chosen numbers (from this distribution) will converge asymptotically toward zero (without ever truly reaching zero of course).

Now imagine that this Gaussian distribution of random numbers represents noise in your pixel sample. If you are also gathering real light at the same time as the noise, then the center of the Gaussian won't be zero. It will be the true value of the object you are imaging. In other words, the value you record with the CCD in a single image equals the true desired value plus some random Gaussian-chosen value, which might make the recorded value less than the true value or might make it greater than this value. ...but we just established that repeated samples of the noise approach zero. So what stacking really does is repeatedly sample the value in question. The real true value never actually changes, in that the number of photons arriving from the object is relatively constant from one image to the next. Meanwhile, the noise component converges on zero, which allows the stacked value to approach the true value over a series of stacked samples. That's it as far as the snr issue is concerned. It's pretty simple, isn't it?
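The convergence of the averaged noise toward zero can also be watched directly; its spread shrinks roughly as one over the square root of the number of frames stacked. A sketch in Python (again with made-up numbers, not the author's code):

import numpy as np

rng = np.random.default_rng(1)
true_value = 50.0                            # the constant "signal" in one pixel
for n in (1, 4, 16, 64, 256):
    # 10,000 independent repetitions of stacking n noisy samples of the pixel
    frames = true_value + rng.normal(0.0, 10.0, size=(n, 10_000))
    stacked = frames.mean(axis=0)
    print(n, stacked.mean(), stacked.std())  # mean stays near 50; spread ~ 10/sqrt(n)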
Another task that stacking accomplishes, which is not touted too much in the literature but which is of great importance to deep sky astrophotographers, is to increase the dynamic range of the image. Of course this can only be understood if you already understand what dynamic range is in the first place. Simply put, dynamic range represents the difference between the brightest possible recordable value and the dimmest possible recordable value. Values greater than the brightest possible value saturate (and are therefore ceilinged at the brightest possible recordable value instead of their actual value), while values dimmer than the dimmest possible value simply drop off the bottom and are recorded as 0.

First understand how this works in a single raw frame captured with a CCD sensor. CCDs have an inherent sensitivity. Light that is too dim for their sensitivity simply isn't recorded at all. This is the lower bound, the dimmest possible value that can be recorded. The simplest solution to this problem is to expose for a longer period of time, to get the light value above the dimmest recordable value so it will in fact be recorded. However, as the exposure time is increased, the value of the brightest parts of an image increases along with the value of the dimmest parts of the image. At the point where parts of the image saturate, and are recorded as the brightest possible value instead of their true (brighter) value, the recording is overloaded and crucial information is lost.

Now you can understand what dynamic range means in a CCD sensor and a single image. Certain objects will have a range of brightness that exceeds the range of brightness that can be recorded by the CCD. The range of brightness of the object is its actual dynamic range, while the range of recordable brightness in the CCD is the CCD's recordable dynamic range. Notice that there is no one perfect exposure time for an object. It depends on whether you are willing to lose the dim parts to prevent saturation of the bright parts or whether you are willing to saturate the bright parts to get the dim parts. Stacking only aids this problem to a limited degree, as described below. Once the limits of stacking have been reached in this regard, more complicated approaches must be used, such as mosaicing, in which a short exposure stack is blended with a long exposure stack, such that each stack only contributes the areas of the image in which it has useful information.

CCDs are analog devices (or digital at the scale of photons in the CCD wells and electrons in the wires sending electrical signals from the CCD to the computer). However, analog devices send their signals through analog/digital converters (A/D converters) before sending the digital information to the computer. This is convenient for computers, but it introduces an arbitrary dynamic range constraint into the imaging device that theoretically doesn't need to be there. An analog device would theoretically have great dynamic range, but suffers from serious noise problems (this is why digital long distance and cellular phones sound better than analog ones). The question is, how does the A/D converter affect the dynamic range? Or in other words, since all we care about is the end product, what exactly is the dynamic range of the image coming out of the A/D converter? The answer is that different cameras produce different numbers of digital bits. Webcams usually produce 8 bits while professional cameras usually produce twelve to sixteen bits.
This means that professional cameras have sixteen to 256 times more digitized values with which to represent brightnesses compared to a webcam, so as you crank up the exposure time to get the dim parts of an object within the recordable range, you have more room left at the top of your range to accommodate the brightest parts of the object before they saturate.

So what does stacking do? The short answer is that it increases the number of possible digitized values linearly with the number of images stacked. So you take a bunch of images that are carefully exposed so as not to saturate the brightest parts. This means you honestly risk losing the dimmest parts. However, when you perform the stack, the dimmest parts accumulate into higher values that escape the floor of the dynamic range, while simultaneously increasing the dynamic range as the brightest parts get brighter and brighter as more images are added to the stack. It is as if the maximum possible brightest value keeps increasing just enough to stay ahead of the increasing brightness of the stacked values of the brightest pixels, if that makes sense. In this way, the stacked image contains both dim and bright parts of an image without losing the dim parts off the bottom or the bright parts off the top.

Now, it should be immediately obvious that there is something slightly wrong here. If the raw frames were exposed with a short enough time period to not gather the dim parts at all, because the dim parts were floored to zero, then how were they accumulated in the stack? In truth, if the value in a particular raw falls to zero, it will contribute nothing to the stack. However, imagine that the true value of a dim pixel is somewhere between zero and one. The digitization of the A/D converter will turn that value into a zero, right? Not necessarily. Remember, there is noise to contend with. The noise is helpful here, in that the recorded value of such a pixel will sometimes be zero and sometimes be one, and occasionally even two or three. This is true of a truly black pixel with no real light of course, but in the case of a dim pixel, the average of the Gaussian will be between zero and one, not actually zero. When you stack a series of samples of this pixel, some signal will actually accumulate, and the value will bump up above the floor value of the stacked image, which is simply one of course.

Interestingly, it is easy to tell which parts of an image have a true value that is lower than one in each raw frame. If the summed value of a pixel is less than the number of images in the stack, or if the average value of the pixel is a floating point value below one, then clearly the true value must be below one in the raw frames, because some of the raw frames must have contributed a zero to the stack in order for the summed value to be less than the number of images stacked. (This does not take into account that there is of course some noise at play here as well, which means a pixel with a true value of 1.5 might get a zero from some raw frames, but the stacked value should, in theory, be greater than one in the averaged stack of course.)
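This rescue of below-the-floor detail by noise is itself easy to simulate. A sketch in Python with invented numbers, assuming the camera's bias offset keeps the noise from clipping at zero (so negative excursions survive quantization):

import numpy as np

rng = np.random.default_rng(2)
noise = rng.normal(0.0, 1.0, 2000)     # read noise, about 1 digitized unit wide
dim   = np.round(0.4 + noise)          # pixel whose true value 0.4 is below the floor
black = np.round(0.0 + noise)          # truly black pixel
# No single frame distinguishes these, but the averaged stacks do:
print(dim.mean(), black.mean())        # roughly 0.4 and 0.0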
There is another factor at play here too. The Gaussian noise distribution has about the same shape (variance or standard deviation) regardless of the brightness of the actual pixel, which means the noise component of a pixel is much more severe for dim pixels than for bright pixels. Therefore, stacking allows you to bring the value of dim pixels up into a range where they won't be drowned out by the noise... while at the same time decreasing the noise anyway, as per the description in the first half of this article. This is another crucial aspect of how stacking allows dim parts of an image to become apparent. It is for this same reason that, in each raw frame, the bright parts, although noisy, are quite discernible in their basic structure, while the dim parts can appear virtually invisible. So that's what stacking does. Pretty neat, huh?
{"url":"http://keithwiley.com/astroPhotography/imageStacking.shtml","timestamp":"2014-04-21T04:31:53Z","content_type":null,"content_length":"20645","record_id":"<urn:uuid:3c7fb824-da79-4675-8385-461fec2e3cf5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by aneme on Tuesday, February 12, 2008 at 11:22pm.

Use the substitution method to solve the linear system 9x + 6y = 3 and 3x - 7y = -26. Please show me an example of how to do it.

• algebra 1 - polo, Tuesday, February 12, 2008 at 11:36pm
what is the slope of the line perpendicular to y = 3x - 7?

• algebra 1 - DrBob222, Tuesday, February 12, 2008 at 11:38pm
Solve one equation for one of the variables in terms of the other; for example, solve equation 1 for y:
6y = 3 - 9x, so y = (3 - 9x)/6, which I would then reduce to y = (1 - 3x)/2.
Then substitute this expression for y in equation 2:
3x - 7[(1 - 3x)/2] = -26
3x - (7 - 21x)/2 = -26
You see that y has been eliminated. Continue and solve for x, then substitute this value into the other equation and solve for y. Finally, substitute both x and y into one of the equations to make sure those values satisfy the equation. Check my work.
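Carrying the substitution through to the end (my arithmetic, not part of the original thread): multiplying both sides of 3x - (7 - 21x)/2 = -26 by 2 gives 6x - 7 + 21x = -52, so 27x = -45 and x = -5/3; back-substituting gives y = (1 - 3(-5/3))/2 = 3. A quick verification in Python:

from fractions import Fraction

x = Fraction(-45, 27)              # x = -5/3
y = (1 - 3 * x) / 2                # back-substitute into y = (1 - 3x)/2
print(x, y)                        # -5/3 3
print(9 * x + 6 * y == 3)          # True: first equation is satisfied
print(3 * x - 7 * y == -26)        # True: second equation is satisfied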
{"url":"http://www.jiskha.com/display.cgi?id=1202876535","timestamp":"2014-04-17T20:08:21Z","content_type":null,"content_length":"9486","record_id":"<urn:uuid:b99ee887-6930-48e4-9205-5246f31509ea>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
College Point Math Tutor

Find a College Point Math Tutor

...I am exceptionally patient and understanding of all students' needs. My best subjects include: Pre-Algebra, Algebra I, Algebra II, Pre-Calculus, and Calculus (including both Calculus AB & BC). I am also able to help students in (math) test prep for: SAT, PSAT, SHSAT, SSAT, TACHS and the ACT. Calc...
19 Subjects: including trigonometry, algebra 1, algebra 2, biology

...I have plenty of experience as a teacher, tutor, and test prep instructor. The majority of my experience is in K-12 public education, but I am also capable of tutoring college level courses up to the Calculus level. I was a tutor and Teacher Assistant as an undergraduate, which is how I actually got involved in the education field and eventually became a teacher.
12 Subjects: including calculus, geometry, guitar, probability

I am determined to discover how a person thinks, not just what they know. Reasoning ability is a central component of my educational philosophy. When you work with me, you are working with a seasoned math expert and an enthusiastic (slightly nerdy) engineering professional.
11 Subjects: including statistics, algebra 1, algebra 2, calculus

...Also an actress, I can help you with audition prep and any kind of public speaking. I look forward to hearing from you! I have been teaching Algebra 1 for 10 years. Students to whom I have taught Algebra I have improved their scores across the board.
35 Subjects: including algebra 2, SAT math, prealgebra, algebra 1

...Firstly, I worked for four years as an assistant teacher in private pre-schools on Long Island, as well as two summers as the adult group leader at a summer camp for pre-school children. In addition, I have NYS certification as a Level 1 teaching assistant. Furthermore, during my 3.5 years of ...
10 Subjects: including algebra 1, vocabulary, grammar, French
{"url":"http://www.purplemath.com/College_Point_Math_tutors.php","timestamp":"2014-04-21T05:15:14Z","content_type":null,"content_length":"23924","record_id":"<urn:uuid:664b0b08-5690-4fb2-8332-71312d4e3ac6>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Determining whether three points lie on a straight line in three dimensions

1. The problem statement, all variables and given/known data
Determine whether the points lie on a straight line: A(2, 4, 2), B(3, 7, -2), C(1, 3, 3)

2. Relevant equations

3. The attempt at a solution
I've looked up the equation for lines in three dimensions, and it appears to be (x - x0)/a = (y - y0)/b = (z - z0)/c. I tried to take the x, y, z for A and B and solve for a, b, c. Then if the same a, b, c work for BC, then A, B, C are on a line. That is my thought, but I can't manage to do the first part. I don't know how to use the given information and the equations to get started... Anyone please help me with this. This is my first time working with a three-dimensional coordinate system...
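A quick way to settle the question without solving for a, b, c: three points are collinear exactly when the direction vectors AB and AC are parallel, i.e. their cross product is the zero vector. A check in Python (my addition, not from the original thread):

# Collinearity test: A, B, C lie on one line iff AB x AC = (0, 0, 0).
A, B, C = (2, 4, 2), (3, 7, -2), (1, 3, 3)
AB = [B[i] - A[i] for i in range(3)]   # (1, 3, -4)
AC = [C[i] - A[i] for i in range(3)]   # (-1, -1, 1)
cross = [AB[1] * AC[2] - AB[2] * AC[1],
         AB[2] * AC[0] - AB[0] * AC[2],
         AB[0] * AC[1] - AB[1] * AC[0]]
print(cross)   # [-1, 3, 2] != [0, 0, 0], so the points do not lie on a line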
{"url":"http://www.physicsforums.com/showthread.php?t=252141","timestamp":"2014-04-16T07:33:26Z","content_type":null,"content_length":"36045","record_id":"<urn:uuid:08c3cf8c-1bfc-43ac-aa00-1f1ab98624c8>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
I'm fighting a problem with a brachistochrone program. I'd like to find the path between two chosen points, A and B, in a uniform gravitational field, which is the fastest one. The brachistochrone is a well known problem solved by the calculus of variations, and it's known that the analytical solution is a cycloid. But I don't want to solve it analytically; I would like to find the solution through a program. I've tried it a very simple way and it didn't work. I've tried to find some programs on the internet for inspiration, but I found only those inspired by the cycloid, or Mathematica/MATLAB programs solving the problem via differential equations. But I would need a solution writable in C/C++ or Pascal. (I've tried it in Pascal, but I don't mind which one it is, since I'm not an advanced programmer and I know both languages only a little.)

I thought it may work by line search. I tried to rewrite one maybe-useful MATLAB program, but I don't understand the core of the routine, which is a big mistake and obstacle. The program gives me only the travel time and not the discrete points of the path. If you can help me get into the code so that I'd be able to make the program draw the path... or if you have any idea how to find the path more easily for me (it could take more computing time, but I would be able to write it and understand it). Thank you.

The xend, yend are the coordinates of the ending point. The first call of function Find is for x = xstart and y = ystart; then it should iterate and, after a successful iteration, put true into the B_test array, which is boolean and full of false at the beginning.

function Find(x, y: real): real;
{ Assumed reconstruction -- the fragment as posted does not compile.    }
{ xi, Nya, yiMin1, yiMin2, optim_Y, B_test, B_x, B_y, textt and Tr are  }
{ globals/types from the rest of the (unposted) program.                }
var
  yi, yiMin: integer;
  deltat, chooseT: real;
begin
  yi := optim_Y;               { row chosen for the previous column }
  { Start the scan above the previously accepted row, if there was one. }
  yiMin := 1;
  if yi > 1 then
    if B_test[yi - 1, xi] then
      if yiMin1 > yiMin2 then
        yiMin := round(yiMin1) { this branch was missing in the post }
      else
        yiMin := round(yiMin2);
  writeln(textt, B_x[yi, xi], ' ', B_y[yi, xi]);
  { Greedy step: among unvisited rows of this column, keep the fastest. }
  chooseT := 1.0e30;
  for yi := yiMin to Nya do
    if not B_test[yi, xi] then
    begin
      deltat := Tr(y, B_y[yi, xi]);  { travel time to the candidate point }
      if deltat < chooseT then
      begin
        chooseT := deltat;
        optim_Y := yi;
      end;
    end;
  B_test[optim_Y, xi] := true;       { mark the chosen point as visited }
  writeln(B_x[optim_Y, xi], ' ', B_y[optim_Y, xi]);
  writeln(textt, B_x[optim_Y, xi], ' ', B_y[optim_Y, xi]);
  Find := chooseT;
end;

Edited by macosxnerd101: Fixed end code tag.

Please,
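Since the goal is a purely programmable method rather than the calculus of variations, one approach that avoids the opaque MATLAB routine entirely is a layered shortest-path (dynamic programming) search: discretize x into columns and y into rows, use energy conservation (the bead's speed depends only on how far it has fallen, not on the path taken) to time each straight segment, and keep back-pointers so the discrete points of the path, not just the travel time, can be printed at the end. Below is a rough sketch in Python rather than Pascal or C (the structure translates directly); the grid sizes, names, and the average-speed timing of segments are all my choices, not anything from the original program:

import math

def brachistochrone_dp(xa, ya, xb, yb, n_cols=60, n_rows=80, g=9.81):
    # Speed at height y for a bead released from rest at height ya:
    # v = sqrt(2 g (ya - y)), by conservation of energy (path-independent).
    def v(y):
        return math.sqrt(max(0.0, 2.0 * g * (ya - y)))

    def seg_time(x1, y1, x2, y2):
        vm = 0.5 * (v(y1) + v(y2))      # crude average speed on the segment
        return math.hypot(x2 - x1, y2 - y1) / vm if vm > 0 else math.inf

    xs = [xa + (xb - xa) * i / n_cols for i in range(n_cols + 1)]
    lo = min(ya, yb) - abs(ya - yb)     # let the path dip below B if it helps
    ys = [lo + (ya - lo) * j / n_rows for j in range(n_rows + 1)]

    best = [math.inf] * (n_rows + 1)
    best[n_rows] = 0.0                  # ys[n_rows] == ya, the starting height
    prev = [[0] * (n_rows + 1) for _ in range(n_cols + 1)]
    for i in range(1, n_cols + 1):      # sweep the columns left to right
        new = [math.inf] * (n_rows + 1)
        for j in range(n_rows + 1):     # candidate row in column i
            for k in range(n_rows + 1): # row in column i - 1
                if best[k] < math.inf:
                    t = best[k] + seg_time(xs[i - 1], ys[k], xs[i], ys[j])
                    if t < new[j]:
                        new[j], prev[i][j] = t, k
        best = new

    j = min(range(n_rows + 1), key=lambda r: abs(ys[r] - yb))  # snap to B
    total_time, path = best[j], []
    for i in range(n_cols, 0, -1):      # follow back-pointers: the path itself
        path.append((xs[i], ys[j]))
        j = prev[i][j]
    path.append((xs[0], ys[j]))
    return total_time, path[::-1]

time_taken, path = brachistochrone_dp(0.0, 1.0, 2.0, 0.0)
print(time_taken)                       # should land near the cycloid optimum

Note the restriction to paths that move monotonically in x, which is harmless here because the optimal cycloid is monotone in x between its endpoints; refining the grid (or smoothing the returned polyline) brings the result closer to the true optimum at the cost of more computing time, which matches the poster's stated trade-off.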
{"url":"http://www.dreamincode.net/forums/topic/188207-brachistochrone-attack/","timestamp":"2014-04-19T15:57:18Z","content_type":null,"content_length":"75999","record_id":"<urn:uuid:d169468e-e477-41dc-a973-10235fb0dcd0>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Assessment of Materials for Engaging Students in Statistical Discovery

Amy G. Froelich
W. Robert Stephenson
Iowa State University

William M. Duckworth
Creighton University

Journal of Statistics Education Volume 16, Number 2 (2008), www.amstat.org/publications/jse/v16n2/froelich.html

Copyright © 2008 by Amy G. Froelich, W. Robert Stephenson and William M. Duckworth, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of the editor.

Key Words: Activities, Introductory statistics, Statistical concepts

As part of an NSF funded project we developed new course materials for a general introductory statistics course designed to engage students in statistical discovery. The materials were designed to actively involve students in the design and implementation of data collection and the analysis and interpretation of the resulting data. Our overall goal was to have students begin to think like statisticians, to construct ways of thinking about data collection and analysis, to solve problems using data in context. During their development, the materials and related activities were field tested in a small special section of an introductory statistics course for two semesters. This field testing was a ``proof of concept,'' that is, that the materials could work in the laboratory setting and that the materials showed promise for improving students' learning. As a first step in evaluating these materials, students who enrolled in regular sections of the introductory course were used as a comparison group. In this paper, the development and use of the course materials will be discussed briefly. The strategy for evaluating the materials while they were being developed and analysis of students' performance on common assessment questions and the course project will be presented. In addition, the relationship between student attitudes toward statistics and students' performance will be examined.

``Declare the past, diagnose the present, foretell the future; practice these acts. As to diseases, make a habit of two things - to help, or at least to do no harm.'' Hippocrates, from Epidemics, Bk. I, Sect. XI.

1. Introduction

During the past decade there has been a dramatic change in the way statisticians view statistics education. Statistics educators are focusing more on statistical concepts in a constructivist atmosphere. In general, constructivism encourages students to explore, think and construct mechanisms that help them understand (Brooks and Brooks 1993). Constructivist approaches to teaching probability and statistics and their relationship to attitudes and performance of students have been studied in recent years. See Quinn and Weist (1998), Mvududu (2003). Efforts to move from lecture (passive) to active learning and from emphasis on procedural to emphasis on conceptual knowledge appear throughout the emerging statistics education literature. See Keeler and Steinhorst (1995), Steinhorst and Keeler (1995), Kvam (2000), Weinberg and Abramowitz (2000), and Anderson-Cook and Dorai-Raj (2001). Many of these efforts have come as a result of National Science Foundation (NSF) funded projects to develop hands-on activities, real-world data sets and simulation-based learning (Cobb 1993). The NSF grant that funded the development of the new course materials was for a ``proof of concept'' project.
The focus of the project was to develop the materials and assess them to show that the concept was viable, that is that the materials could work in the laboratory setting and that the materials showed promise for improving students' learning. Through field testing and assessment we found that the materials worked well in the laboratory setting and that there was some improvement in student performance on the general topic of regression. The materials were developed for a general introductory statistics course, Stat 101, at Iowa State University. Stat 101 is a four credit course designed for general majors on campus and is comprised of three hours of lecture and two hours of laboratory each week during the semester. Current laboratory activities are very prescribed. For example, the laboratory on the Normal model involves working through standard problems solving for the probability that one gets a value from a Normal model in a particular range or finding a cut-off value associated with a given probability. Similarly, the current lab involving regression looks at the basics of plotting lines and evaluating the strength of a relationship using the correlation coefficient. Students can, and often do, complete the current activities with not much thought given to the underlying statistical concepts. The new materials we developed are lab activities that actively involve students in the design and implementation of data collection and the analysis and interpretation of the resulting data. Our overall goal was to have students begin to think like statisticians, to construct ways of thinking about data collection and analysis, to solve problems using data in context. Rather than follow steps set down for them, students would take ownership by making decisions about what data to collect and how to organize that data and interpret results within the context of the problem. We had identified several areas where students had struggled in the past and tried to approach those problem areas in new ways in the activities. One of those problem areas involves the concept of a distribution. Students often struggle with what a histogram is really depicting and how numerical summaries of data relate to the shape and spread of a distribution. A second problem area is the Normal model. Again, many students struggle with the abstract concept of a model for the distribution of a population. Activity #2 was designed to allow students to explore distributions and the Normal model in the context of deciding how to appropriately label a Fun Size bag of M&Ms. We view the ideas of correlation and regression to be of fundamental importance in an introductory statistics course. Students should understand how to interpret the least squares regression line within the context of the data. This interpretation should go beyond the simple algebraic definition of slope and intercept to include the statistical idea of variation and that the regression line fits within a meaningful context. Over the years we have seen many students have difficulty with correctly interpreting the slope and intercept, understanding the applications of correlation and regression and the limitations of these procedures. Activities #3, 4 and 5 explore various aspects of correlation and regression. In the past students have been able to learn the terms related to experimental design but have had difficulty putting the ideas of experimental design into practice when asked to design an experiment to collect data. 
Activity #5 was developed to allow students to design a simple experiment and analyze the resulting data. Finally, we have noticed students struggle with the course project in Stat 101. This project combines development of an experimental plan with data collection and a full descriptive analysis of the resulting data. The activities were designed to give students more experience in conducting studies of this nature.

Because the activities were being developed and tested for the first time, it was necessary to come up with a plan that would facilitate the further development of the materials, field test the materials, and allow for assessment of students, those who used the new materials and those who didn't, while meeting the constraints of teaching over 500 students in the introductory statistics course each semester. Here Hippocrates' admonition to ``do no harm'' is an important consideration. We did not want to expose large numbers of students to untested materials in their first and, for most, only experience with statistics.

In Section 2 of this paper, we provide a short description of the new course materials, together with learning outcomes, that we developed for an introductory course in statistics. Section 3 discusses the plan for development and preliminary assessment of the materials. Section 4 gives a description of the items used to assess student learning and how they relate to learning objectives for the activities. In Section 5 we give a description of the participants involved in the study. In Section 6 we present the results of the assessments and a statistical analysis of those results. We discuss our findings in Section 7. Finally we indicate directions for further work in Section 8.

2. A Short Description of the New Course Materials

The new course materials developed for the introductory course in statistics consist of new laboratory and classroom activities dealing with data collection, data interpretation, distributions, simple linear regression and statistically designed experiments. Many of the laboratory activities revolve around data collected from Fun Size bags of M&Ms (which do not have a label weight). In the first activity, students determine the variables that could be used to describe the bags of M&Ms and then collect data on those variables. The learning outcome for this activity is to have students identify categorical and numerical variables that would be helpful in learning more about the bags of M&Ms and to carefully collect data. In the second activity, students look at the distribution of the weight of the bags of M&Ms using the statistical software package JMP. The learning outcome for this activity is to reinforce the ideas of shape, center, spread and unusual values when describing the distribution of a numerical variable. It also serves to introduce the idea of using a Normal model to represent the distribution of the weights of bags of M&Ms. Students then use the distribution of the weight of the Fun Size bags to determine a reasonable label weight for these bags. Data on larger (labeled) bags of M&Ms are used to further motivate the use of a Normal model for package weights and students discover that the mean weight is not used as the labeled weight of a package. This leads to a discussion of how to use the Normal model to establish a label weight for the Fun Size bags.
The learning outcome for this part of Activity #2 is to see how to use a Normal model to approximate the distribution of a numerical variable and to use that Normal model in a practical application. In the third activity, students use simple linear regression to predict the gross weight of a bag based on the number of M&Ms in the bag. The fourth activity has the students look at the regression of gross weight on net weight and of net weight on number to determine if estimates of the slope and intercept from these regression equations would give reasonable estimates of the weight of a single M&M and the weight of a single empty bag. In both activities, students apply the concepts of slope and intercept and discover when they are interpretable within the context of a problem. These activities also highlight the danger of extrapolation. Finally, in the fifth activity, students design an experiment to determine the weight of a single peanut M&M and the weight of an empty bag using regression estimates of the slope and intercept. Through the previous activities the students have discovered difficulties with the observational study of existing packages of M&Ms, especially with having the estimated slope and intercept give reasonable estimates of the average weight of an M&M and the average weight of an empty bag. In Activity #5 they now use the ideas of experimental design to create a study that avoids those difficulties. A more complete description of the activities with learning outcomes and suggestions for extensions can be found in Froelich and Stephenson (2008) and at http://stated.stat.iastate.edu/

3. Development and Assessment Plan

At the beginning of the project we had drafts of the activities but we had not field tested them in a classroom environment. Because we were developing the materials for our general introductory statistics course (Stat 101) we wanted to use students from this course. During a typical semester there are five lecture sections of Stat 101, each with an enrollment of 100 students. Each lecture section is split into laboratory sections of 50 students each. Four of the five lecture sections are taught by experienced graduate teaching assistants. The ten laboratory sections are conducted by first year graduate teaching assistants. Because of the relative inexperience of the laboratory instructors we felt it was unwise to introduce the new materials into the regular laboratory sections until they had been field tested with a smaller group of students, preferably with an experienced teacher available during the laboratory sessions.

Luckily, in addition to the five regular lecture sections each spring we have a special section of Stat 101. This special section is smaller (limited to 50 students but usually with an enrollment of about 25). Although there is a laboratory assistant, the instructor for this section (one of the authors) is present at and can interact with the students during the laboratory sessions. This is an ideal section, logistically, for introducing new activities. This special section was created, in part, to attract students with good math skills to the discipline of statistics. Students are invited to enroll in this section and invitations go to students with ACT math scores of 27 or higher. As such, students in this section have higher ACT math scores, on average, than students in the regular sections of Stat 101. This poses a problem of finding a suitable comparison group within the regular sections of Stat 101.
We decided to have two ``Control'' groups for our study. The first consists of students in regular sections of Stat 101 with ACT math scores of 27 and above. This group would at least have a similar ACT math profile compared to the students in the special section. The second ``Control'' group consists of students with ACT math scores below 27. We wanted to collect data on these individuals to establish a baseline for the majority of students who currently enroll in Stat 101. Because we needed access to students' records we obtained Institutional Review Board approval and only students who signed an informed consent form were included in our research study.

In the first year of the study draft activities were used in the special section of Stat 101. One of the authors was the instructor for this section. This instructor was present at the laboratory sessions where the new activities were first tried. If problems with the implementation of an activity arose, which happened with some frequency, the instructor was there to deal with the problem and suggest changes in the conduct of the activity for the next time it would be used. The control groups for the first year of the study were selected from the regular sections of the course during the same semester. The goal for this first year was to establish the feasibility of the activities and look at preliminary assessment of the efficacy of the activities. This is analogous to Phase I - Feasibility and Phase II - Initial Efficacy in medical research, i.e. clinical trials. For a very nice discussion of the relevance of clinical trial methodology in statistics education research see ``Using Statistics Effectively in Mathematics Education Research'' www.amstat.org/research_grants/pdfs/SMERReport.pdf. Appendix A of that report shows how the medical model and the research model proposed in a RAND (2003) report relate to a proposed framework for education research. Specifically, Phase I/II in the medical model are consistent with the framework's phase Frame and begin to Examine, where small systematic studies are conducted. The next phase in the proposed framework is Examine and Generalize, where larger studies are conducted under varying conditions with proper controls.

One of the drawbacks to the first year's assessment plan is the potential confounding variable of instructor. The special section is taught by an experienced faculty member while the regular sections of Stat 101 are taught by experienced graduate teaching assistants. In order to control for this potential confounding variable, the control groups for the second year of the study included only students enrolled in an experienced faculty member's regular section of Stat 101 in the fall semester. That same faculty member also taught the special section the following spring where the new activities, with revisions from the previous year, were used. The special section again was chosen for the experimental group as additional activities (outside the scope of this paper) were introduced and field tested in the second year. The regular section in fall was not used to do a randomized comparative trial with the revised activities because of the inexperience of the graduate teaching assistants assigned to conduct the laboratory sections.
Because many of our graduate students come from undergraduate mathematics programs and have had little or no exposure to the concepts presented in Stat 101, the fall semester is often as much a learning experience for the new graduate teaching assistants as it is for the students enrolled in the course. The goal of the second year was to look more carefully at the initial efficacy of the activities while controlling for the potential instructor effect. The second year's study would still fall into the Phase I/II medical model framework or the Frame and begin to Examine phase.

4. Description of Assessment Materials

The materials used for assessment of student learning consisted of a pretest exam on algebra and basic statistical concepts, common exam questions on the first and second exams during the semester, and the common course group project. In the second year of the project we also included the Survey of Attitudes Toward Statistics (SATS), Schau, Stevens, Dauphinee and Del Vecchio (1995).

A pretest exam was administered to all students during the first laboratory session of the semester. Questions on the pretest asked students to perform basic algebraic manipulations, calculate such numerical summaries as the mean and median, to describe a histogram of low temperatures for selected cities on a particular day, and to use the boiling point and freezing point of water to develop the equations relating degrees Fahrenheit to degrees Celsius. From the pretest questions, 11 items were assessed dichotomously. If the student answered the question related to an item correctly, the student was determined to have mastered that item. The 11 separate items were combined to form three main skill groups: ability to compute numerical summaries (Pretest Skill 1), ability to interpret a histogram and describe a distribution of numerical values (Pretest Skill 2) and ability in applied algebra (Pretest Skill 3).

Each student in the study completed common exam questions on the first and second exams during the semester. The first exam covered course material on describing and summarizing distributions and working with the Normal model. The second exam covered course material on describing the relationship between two quantitative variables using correlation and linear regression (no inference) and data collection either through sampling or experimentation. The specific problems used in the assessments for Years 1 and 2 of the study can be found in the Appendix. The same problems were not used in Years 1 and 2 because the Year 1 questions and answers had been released and appeared on web sites accessible to Year 2 students.

For each assessment question, the authors developed a scoring rubric. The questions were blinded so that scorers would not be aware of group membership while grading. Two of the authors then scored all students' responses to each question. Any discrepancies in the scores obtained by the two authors for a particular student were resolved between the two authors. The agreed upon score for each student on each problem was then recorded.

The common group project required students to design an experiment to analyze the effect of a specific physical aspect of paper helicopters on an observable aspect of the helicopters' flight. A common choice for several groups was to vary the length of the helicopter wings to determine its effect on the flight time of the helicopters.
Students were then required to analyze the resulting data using correlation and regression and to make a conclusion about the effect of the change in the helicopters on the change in the helicopters' flight. A scoring rubric, written by the authors, was used to evaluate the group projects. Students had general knowledge of the requirements of the project, but were not given the specific rubric. At the end of each year of the study, the course projects were blinded, randomly ordered and scored. In Year 1, an independent consultant scored all the projects using the project rubric. In Year 2, one of the authors (who was not involved in teaching the Stat 101 courses in Year 2) scored all the projects using the grading rubric. A complete description of the course project appears in the Appendix.

Table 1 summarizes the specific activities, learning objectives and corresponding assessment items.

Table 1: Activities, Learning Objectives, and Assessment Items

Activity   Learning Outcomes                        Year 1 Assessment   Year 2 Assessment
#2         Distributions and Numerical Summaries    Exam 1: Q1 & Q2     Exam 1: Q1 & Q2
#2         The Normal Model                         Exam 1: Q3 & Q4     Exam 1: Q3 & Q4
#3         Regression and correlation               Exam 2: Q1          Exam 2: Q1
#5         Experimental design                      Exam 2: Q2          Exam 2: Q2
#3, #5     Connect experimental design with         Project             Project
           regression and correlation

For the Year 2 study, the SATS (Schau, et al., 1995) was used to assess student attitudes toward statistics. Each of the 36 items on the SATS is designed to measure one of six different components of student attitudes toward statistics:

• Affect - Students' feelings toward statistics. A high score on this component indicates that students like learning statistics, they enjoy taking a statistics course and they are not afraid of or nervous about learning statistics.
• Cognitive Competence - Students' attitudes about their intellectual knowledge and skills when applied to statistics. A high score on this component indicates that students believe they have the ability and the mathematical aptitude to learn statistics.
• Value - Students' attitudes about the usefulness, relevance, and worth of statistics in personal and professional life. A high score on this component indicates that students understand the usefulness of statistics in the curriculum of their chosen major, in their future profession, and in their daily lives.
• Difficulty - Students' attitudes about the difficulty of statistics as a subject. Higher scores in this area indicate a belief that statistics is not difficult, while lower scores indicate that students find statistics to be more difficult.
• Interest - Students' level of individual interest in statistics. Higher scores on this component indicate that students are interested on a personal level in learning statistics and communicating statistical information.
• Effort - Amount of work the student expends to learn statistics. A high score on this component indicates that students intend to work and study hard during the statistics class and will complete all assignments and attend all class sessions.

Each item is measured on a Likert-like scale from 1 to 7 and is scored so that a higher response on the question indicates a more positive attitude toward statistics. A student's score on each of the six attitude components is the mean score on the items within that component. The SATS includes a pre-course version and a post-course version that differ only in terms of the tense of the question.
The pre-course version of the SATS was given to all students during the first two weeks of the semester and the post-course version was administered during the last week of the semester.

5. Description of the Study Subjects

The special section of Statistics 101 was offered during the spring semester in both Year 1 and Year 2. The students in this section were those who responded to a special letter of invitation to enroll in this section. Over 100 freshmen and sophomores with ACT Math Scores of 27 or higher and with majors that require Statistics 101 (such as biology, sociology, psychology, statistics, etc.) or with open majors in the College of Liberal Arts and Sciences were invited to register for the special section for each year of the study. Of these invited students, 20 students in Year 1 and 16 students in Year 2 ultimately signed up for and completed the course. All students enrolled in the special sections of the course agreed to participate in the study and formed the Year 1 and Year 2 Experimental Groups.

During Year 1 of the study, students from four of the regular sections of Stat 101 were invited to participate during spring. Each of these four sections was taught by a different graduate teaching assistant with at least one semester of experience teaching Stat 101. 377 students in the regular sections completed the course with a passing grade of D- or better. Of these students, 199 had agreed to participate in the study. Student characteristics: ACT Math score, ACT English score, ACT Composite score, High School Percentile Rank, Cumulative College GPA and Total Number of Hours Completed, were obtained from the university's registrar's office for all 199 students. 41 students of the 199 had ACT Math scores of 27 or higher. The 39 students with high math ACT scores who completed all assignments throughout the semester were designated the High Math Control Group (Control: H M). The remaining 158 students, whose ACT Math scores were 26 or lower, were candidates for the Regular Control Group. Due to limited resources for assessment, the number of students included in the Regular Control Group was reduced. The first reduction was made by removing the 39 students in the Regular Control Group who had completed the course project with at least one of the students from the High Math Control Group. A random sample of 50 students was selected from the 119 remaining students. Two of these 50 students chosen for the sample did not complete all assessments during the semester and were dropped from the study. Thus, the final Regular Control Group (Control: Reg) contained 48 students.

During Year 2 of the study, students from a regular section of Statistics 101 for the fall semester, taught by the same instructor as the special section in the following spring semester, were used to form the control groups. 88 students in this section ultimately completed the semester course with a passing grade of D- or higher. Of these students, 72 had agreed to participate in the study at the beginning of the semester. 10 of these 72 students failed to complete one or more of the assessments throughout the semester and were dropped from the study. Student characteristics: ACT Math score, ACT English score, ACT Composite score, High School Percentile Rank, Cumulative College GPA and Total Number of Hours Completed, were obtained from the university's registrar's office for all students in the study.
Six out of the remaining 62 students from the regular section of Statistics 101 did not have ACT Math scores on record. These students were dropped from the study, leaving 56 students in the control groups. These remaining students were divided into two groups based on their ACT Math scores. The 17 students with ACT Math Scores of 27 or higher were placed into the High Math Control Group and the other 39 students were placed in the Regular Control Group.

For both years we needed to check to see if the Experimental Group and the High Math Control Group were indeed similar in terms of ACT Math scores. Table 2 displays the numbers of students participating, means and standard deviations for ACT Math scores for the three groups in both years.

Table 2: ACT Math Scores

Year 1
Group          Grouping   Number   Mean    Std. Dev.
Experimental   A          20       29.95   1.701
Control: H M   A          39       28.97   2.134
Control: Reg   B          48       21.90   3.068

Year 2
Group          Grouping   Number   Mean    Std. Dev.
Experimental   A          16       30.00   2.191
Control: H M   A          17       29.35   1.539
Control: Reg   B          39       21.74   3.354

Groups with different letters have differences in mean values that are statistically significant.

In both years, the Experimental Group has a slightly higher mean ACT Math score than the High Math Group but this difference is not statistically significant. In both years, the Regular Group does have a mean ACT Math score that is lower than either of the other two groups and this difference is statistically significant.
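The group comparisons reported in Table 2 (and in the tables that follow) come from a one-way analysis of variance followed by pairwise comparisons of group means. For readers who want to reproduce this style of analysis, here is a minimal sketch in Python, using made-up scores rather than the study's data, and Tukey's HSD in place of the Least Significant Difference comparisons used later in the paper (assuming SciPy and statsmodels are installed):

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
# Hypothetical ACT-Math-like scores for three groups (not the study's data).
exper = rng.normal(30.0, 2.0, 20)
hm = rng.normal(29.0, 2.0, 39)
reg = rng.normal(22.0, 3.0, 48)

f_stat, p_value = stats.f_oneway(exper, hm, reg)   # overall test of equal means
print(f_stat, p_value)

scores = np.concatenate([exper, hm, reg])
groups = ["Exper"] * 20 + ["HM"] * 39 + ["Reg"] * 48
# Pairwise comparisons; the reject column yields letter-style groupings.
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))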
An ANCOVA was then used to determine if there was a significant difference in changes in attitudes (post - pre) between the three groups when including the pre-course attitude as a covariate. Finally, we included the six pre-course and six post-course attitude values as covariates in the ANCOVA models to determine if any aspects of attitudes were significant in predicting performance on the course assessments after controlling for student characteristics and group membership. 6.1 Year 1 Results Table 3 below gives the means and standard deviations for the overall measure of student performance during the Year 1 study for the Experimental, High Math Control and Regular Control Groups. Table 3: Year 1 Overall Measure of Student Performance (54 pts max) │ Group │ │ │ Number │ Mean │ Std. Dev. │ │ Experimental │ A │ │ 20 │ 40.35 │ 5.29 │ │ Control: H M │ A │ │ 39 │ 39.90 │ 5.60 │ │ Control: Reg │ │ B │ 48 │ 31.23 │ 8.11 │ The null hypothesis of equal means among the three groups was rejected with a P-value <0.0001. On average, the Experimental and High Math Control Groups both scored significantly higher than the Regular Control Group. Although the overall mean for the Experimental Group was higher than that of the High Math Control Group, this difference was not statistically significant at the 5% level. The final ANCOVA model for the Year 1 overall measure was highly significant with a P-value <0.0001. The factor of group membership (Experimental vs. Control) was not significant in the model. However, the covariates ACT Math score and Cumulative College GPA were highly significant in the model. This is consistent with the ANOVA model in Table 3. In the ANOVA, students with high ACT Math scores performed better on average than students with lower ACT Math scores. Summaries of the final ANCOVA model are given in Table 4. Table 4: Year 1 ANCOVA Results for Overall Measure of Student Performance │ R^2 │ Significant Variable │ Coefficient │ P-value │ │ 55.9% │ ACT Math │ 0.7439 │ <0.0001 │ │ │ Cumulative College GPA │ 5.5152 │ <0.0001 │ │ │ Pretest Skill 1 │ 1.7416 │ 0.0205 │ Overall performance on common exam questions during the Year 1 study was significantly related to ACT Math score but not significantly different for students exposed or not exposed to the new course materials. The new materials appear to ``at least do no harm.'' Even though the overall student performance was not significantly different for students exposed or not exposed to the new course materials, student performance on particular aspects covered during the semester could have differed for those students exposed to the new materials compared to those not exposed. To look at student performance on particular aspects of statistics, the scores on questions relating to learning outcomes for the specific activities were analyzed separately. The means and standard deviations of these scores for the Experimental, High Math Control and Regular Control Groups during Year 1 are in Tables 5, 6, 7 and 8. All ANOVAs were statistically significant with P-values less than or equal to 0.0002. Subsequent comparisons of group means was done using a Least Significant Difference approach with an individual comparison alpha level of 0.05. Groups with different letters have differences in means that are statistically significant. Table 5: Year 1 - Distributions & Numerical Summaries (Exam 1, Questions 1 and 2 combined, out of 18 points). │ Group │ │ │ Number │ Mean │ Std. Dev. 
│ │ Experimental │ │ B │ 20 │ 13.45 │ 1.432 │ │ Control: H M │ A │ │ 39 │ 16.05 │ 1.849 │ │ Control: Reg │ │ B │ 48 │ 13.33 │ 4.184 │ We were very concerned about the results on the assessment questions dealing with distributions and numerical summaries. In this instance the Experimental Group was comparable to the Regular Control Group and significantly lower than the High Math Control Group. Closer examination of responses revealed that the Experimental Group missed the connection between the shape of the distribution and appropriate summary measures (e.g. symmetric shape - sample mean and sample standard deviation, asymmetric shape - five number summary or median and interquartile range). Although mentioned in lecture this idea was not reinforced by what students were doing in Activity #2. This lead us to revise the activity so as to reinforce this idea. Table 6: Year 1 - Normal model (Exam 1, Questions 3 and 4 combined, out of 6 points). │ Group │ │ │ Number │ Mean │ Std. Dev. │ │ Experimental │ A │ │ 20 │ 4.60 │ 1.465 │ │ Control: H M │ A │ │ 39 │ 4.15 │ 1.461 │ │ Control: Reg │ │ B │ 48 │ 2.04 │ 1.701 │ Average scores on questions dealing with the Normal model were similar for the Experimental and High Math Control Groups. These groups scored significantly higher, on average, than the Regular Control Group. Being able to solve Normal model questions relies on basic algebra skills and the ability to use the table of the standard normal distribution. These skills are more developed in students with higher ACT math scores. Table 7: Year 1 - Regression (Exam 2, Question 1, out of 20 points) │ Group │ │ │ │ Number │ Mean │ Std. Dev. │ │ Experimental │ A │ │ │ 20 │ 14.20 │ 3.172 │ │ Control: H M │ │ B │ │ 39 │ 11.77 │ 3.232 │ │ Control: Reg │ │ │ C │ 48 │ 9.21 │ 3.984 │ The exam question on regression showed the largest differences between the groups in terms of average scores. The Experimental Group had the highest average score followed by the High Math Control Group with the Regular Control Group having the lowest average score. Examination of student responses revealed that the Experimental Group did better on interpretations of regression coefficients and R^2. The High Math Control Group got lower average scores due to lower scores on these interpretations. The Regular Control Group tended to have difficulties with some of the calculations as well as the interpretations. All students in Stat 101 see the appropriate interpretations in lecture and in homework assignments. The fact that the Experimental Group did significantly better, on average, on this assessment was encouraging to us, indicating that Activities #3, 4 and 5 might be of some help. Table 8: Year 1 - Experiments (Exam 2, Question 2, out of 10 points) │ Group │ │ │ Number │ Mean │ Std. Dev. │ │ Experimental │ A │ │ 20 │ 8.10 │ 1.373 │ │ Control: H M │ A │ │ 39 │ 7.92 │ 1.707 │ │ Control: Reg │ │ B │ 48 │ 6.65 │ 1.564 │ The questions dealing with experimentation yield similar results to those dealing with the Normal model. There was no statistically significant difference between the Experimental and High Math Control Groups average scores. Both of these groups had scores that were significantly higher than the Regular Control Group. Analysis of Covariance models were also run on each of the individual assessments. Summaries of the ANCOVA models are given in Table 9. All final ANCOVA models were highly significant with a P-value <0.0001. Group membership (Experimental vs. 
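The ANOVA-then-LSD analysis used for Tables 5 through 8 can be sketched as follows. This is an illustration, not the authors' code; note that a textbook LSD uses the pooled error term from the ANOVA, whereas plain pairwise t tests are shown here as an approximation. The file and column names are assumptions.

    import pandas as pd
    from scipy import stats
    from itertools import combinations

    df = pd.read_csv("students.csv")  # assumed columns: score, group3 with
                                      # values Experimental/HighMath/Regular

    # Overall one-way ANOVA across the three groups.
    samples = [g["score"].values for _, g in df.groupby("group3")]
    F, p = stats.f_oneway(*samples)
    print(F, p)

    # Pairwise comparisons at alpha = 0.05, carried out only when the overall
    # ANOVA is significant (the "protected" LSD procedure).
    if p < 0.05:
        for (na, a), (nb, b) in combinations(list(df.groupby("group3")), 2):
            print(na, "vs", nb, stats.ttest_ind(a["score"], b["score"]).pvalue)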
Analysis of Covariance models were also run on each of the individual assessments. Summaries of the ANCOVA models are given in Table 9. All final ANCOVA models were highly significant with a P-value <0.0001. Group membership (Experimental vs. Control) and the covariates ACT Math score and Cumulative College GPA were all significant at the 0.1% level in the final ANCOVA model for Distributions and Numerical Summaries. The coefficient for the group membership variable indicates that the Experimental Group had lower scores on this assessment than the Control Groups once ACT Math and Cumulative College GPA were adjusted for. This is consistent with what we saw in the Analysis of Variance. For the scores on the Regression question, group membership in the final ANCOVA was also statistically significant at the 5% level. However, the sign of the coefficient indicates that the Experimental Group scored better, on average, than the Control Groups once the other variables are taken into account. Both of these results are consistent with what we saw in the Analysis of Variance. For scores on questions involving the Normal model and Experiments, group membership was not statistically significant. Again, these results are consistent with the results of the Analysis of Variance.

Table 9: Year 1 - ANCOVA Models
Assessment          R^2     Significant Variable      Coefficient   P-value
Distributions and   29.3%   Group (C-E)                 1.5496       0.0001
Numerical                   ACT Math                    0.2925       0.0006
Summaries                   Cumulative College GPA      1.7616       0.0010
Normal Model        45.8%   ACT Comp                    0.1665       0.0028
                            ACT Math                    0.1210       0.0151
                            Pretest Skill 2             0.6120       0.0374
Regression          46.9%   Group (C-E)                -0.8791       0.0374
                            ACT Math                    0.2130       0.0080
                            Cumulative College GPA      2.6132      <0.0001
                            Pretest Skill 1             1.1296       0.0070
Experiments         18.9%   Cumulative College GPA      0.7444       0.0068
                            Pretest Skill 3             0.2619       0.0057

In all but one of the ANCOVA models, Cumulative College GPA was a highly significant covariate. Three of the four ANCOVA models also included a significant pretest skill (Pretest Skill 2 for the Normal model, Pretest Skill 1 for Regression and Pretest Skill 3 for Experiments). The significance of Pretest Skill 2 (ability to describe distributions using histograms) for student performance on the Normal model was not surprising. However, it was surprising that this pretest skill did not show up in the ANCOVA model for Distributions and Numerical Summaries. For the Regression questions, the results of the ANCOVA indicate that even after controlling for ACT Math score and Cumulative College GPA, the ability to compute numerical summaries (Pretest Skill 1) was still significantly related to students' scores.

For the course group project, students were randomly assigned to project groups in both the regular sections and the special section of the course. There were a total of 7 projects from the Experimental Group, 28 projects from the High Math Control Group and 18 projects from the Regular Control Group. The 28 projects in the High Math Control Group were completed by groups containing only one or two students from the High Math Control Group; the rest of the project group members were students with ACT Math scores below 27. The 18 projects from the Regular Control Group contained only students with ACT Math scores below 27. Table 10 gives the means and standard deviations of the project scores for the three groups.

Table 10: Year 1 Project Results (out of 50 points)
Group            Number   Mean    Std. Dev.
Experimental  A     7     46.36    2.48
Control: H M  B    28     40.38    5.82
Control: Reg  C    18     36.25    8.36

As with the individual assessments, the null hypothesis of equal means among the three groups was rejected (P-value = 0.0037). The Experimental Group had the highest score on average, followed by the High Math Control Group and then the Regular Control Group. All pairwise comparisons among the three groups were statistically significant. We were encouraged by the performance of the Experimental Group on this assessment because, unlike the exam questions, the project requires students to put together ideas from various topics into a coherent statistical investigation.

6.2 Year 2 Results

Table 11 below gives the means and standard deviations of the overall measure of student performance on common exam questions during the Year 2 study for the Experimental, High Math Control and Regular Control Groups.

Table 11: Year 2 Overall Measure of Student Performance (out of 45 points)
Group            Number   Mean    Std. Dev.
Experimental  A    16     40.06    2.48
Control: H M  B    17     34.71    5.88
Control: Reg  C    39     27.99    7.86

The null hypothesis of equal means among the three groups was rejected with a P-value <0.0001. The Experimental Group scored higher on average than the High Math Control Group, which in turn scored higher on average than the Regular Control Group. These differences were statistically significant at the 5% level.

The final ANCOVA model for the Year 2 overall measure was highly significant with a P-value <0.0001. Again, the results of the ANCOVA are consistent with the results from the ANOVA. Group membership was significant in the ANCOVA even after controlling for ACT Math score and Cumulative College GPA. The sign of the coefficient indicates that, on average, students in the Experimental Group performed better than students in the Control Groups even after controlling for the significant covariates. Summary values of the final ANCOVA model for the overall measure of student performance in the Year 2 study are given in Table 12.

Table 12: Year 2 ANCOVA Results on Overall Measure of Student Performance
R^2       Significant Variable      Coefficient   P-value
60.53%    Group (C-E)                -1.8602       0.0474
          ACT Math                    0.7858      <0.0001
          Cumulative College GPA      3.9522       0.0015

Unlike in Year 1, the Year 2 results indicate that students exposed to the new course materials did significantly better on the overall measure of performance on common exam questions than those who did not use the new materials. In addition, after controlling for the significant covariates, the factor of group membership was significant at the 5% level. As in Year 1, ACT Math score and Cumulative College GPA were statistically significant covariates.

To look at differences in student performance over the course of the semester on items tied to the activities, separate analyses of the scores on common exam questions dealing with Distributions and Numerical Summaries, the Normal model, Regression and Experiments were performed. Table 13 contains the means and standard deviations of the scores for the Experimental, High Math Control and Regular Control Groups in these areas.
Table 13: Year 2 - ANOVA Results

Distributions and Numerical Summaries (Exam 1, Questions 1 & 2, out of 16 points)
Group            Number   Mean    Std. Dev.
Experimental  A    16     13.66    1.62
Control: H M  A    17     12.53    2.54
Control: Reg  B    39      9.28    3.22

Normal Model (Exam 1, Questions 3 & 4, out of 10 points)
Group            Number   Mean    Std. Dev.
Experimental  A    16      9.72    0.55
Control: H M  A    17      8.76    2.36
Control: Reg  B    39      6.90    2.98

Regression (Exam 2, Question 1, out of 12 points)
Group            Number   Mean    Std. Dev.
Experimental  A    16     10.38    1.16
Control: H M  B    17      7.53    2.05
Control: Reg  B    39      6.51    2.42

Experiments (Exam 2, Question 2, out of 7 points)
Group            Number   Mean    Std. Dev.
Experimental  A     16      6.31    0.57
Control: H M  A B   17      5.88    0.91
Control: Reg  B     39      5.29    1.29

For each assessment, the null hypothesis of equal means for the three groups was rejected, with P-values of <0.0001, 0.0006, <0.0001, and 0.0064 for the Distributions and Numerical Summaries, Normal Model, Regression and Experiments assessments, respectively. In all of these analyses, the Regular Control Group had significantly lower mean scores than the Experimental Group. The Experimental Group had higher average scores than the High Math Control Group on each of the four assessments. However, for all except the assessment question on Regression, the difference between the Experimental Group and the High Math Control Group was not statistically significant. Evidently, the accumulation of these small differences was enough to create the statistically significant difference between the two groups on the overall measure of performance on common exam questions.

The ANCOVA results mirror the findings above. All final ANCOVA models were statistically significant, with P-values of <0.0001, <0.0001, 0.0002, and <0.0001 for the Distributions and Numerical Summaries, Normal Model, Regression and Experiments assessments, respectively. The only assessment for which group membership was statistically significant in the ANCOVA model was the question on Regression. The sign of the coefficient was consistent with the fact that the Experimental Group scored higher, on average, than the Control Groups on this assessment question. The three other assessment areas produced ANCOVA models that did not include a group membership variable. Summary values for the final ANCOVA models for these assessments are given in Table 14.

Table 14: Year 2 - ANCOVA Results
Assessment            R^2     Significant Variable      Coefficient   P-value
Distributions and     58.1%   ACT Math                    0.3836      <0.0001
Numerical Summaries           Pretest Skill 3             0.5443       0.0142
Normal Model          34.4%   Cumulative College GPA      1.2015       0.0102
                              Pretest Skill 3             0.7739      <0.0001
Regression            52.6%   Group (C-E)                -1.3614      <0.0001
                              Cumulative College GPA      1.9991      <0.0001
Experiments           21.8%   ACT Math                    0.0642       0.0229
                              Cumulative College GPA      0.5534       0.0147

For the course group project, students were randomly assigned to project groups in both the regular and the special section of the course. There were 6 projects completed by the Experimental Group. From the two control groups, there were a total of 21 different projects. From prior experience, students with high math ability play a significant role in the completion of the group course project. Therefore, a project was assigned to the High Math Control Group if it was completed by at least one student from the High Math Control Group.
This assignment also matches the way projects were treated in the Year 1 study, in that only Regular Control Group students contributed to the projects in their own group, while both High Math and Regular Control Group members contributed to projects in the High Math Control Group. This division resulted in 10 projects for the High Math Control Group and 11 projects for the Regular Control Group in the Year 2 study. Table 15 gives the means and standard deviations of the course project scores for the three groups (Experimental, High Math Control, Regular Control).

Table 15: Year 2 Project Results (out of 50 points)
Group            Number   Mean    Std. Dev.
Experimental  A     6     38.67    4.46
Control: H M  A    10     31.80    8.57
Control: Reg  A    11     33.91    8.95

Unlike in Year 1, there were no significant differences in means among the three groups (P-value = 0.2734). The scores on the project were generally lower in Year 2 than in Year 1. Also, except for the Regular Control Group, there was considerably more variation in scores for the Year 2 project than in Year 1. The magnitude of the differences between groups was substantial, but finding statistically significant differences was hampered by the larger variation and smaller sample sizes.

The SATS was used to assess differences in student attitudes about statistics at the beginning and end of the semester. The means and standard deviations of the scores on the six components of the SATS for the three groups are given in Table 16 below.

Table 16: Year 2 SATS Survey Results
                                          Pre-course           Post-course
SATS Component        Group               Mean   Std. Dev.     Mean   Std. Dev.
Affect                Experimental  A     5.04   1.02       A  5.43   1.11
                      Control: H M  A     4.91   0.79       A  5.25   0.92
                      Control: Reg  B     4.03   1.02       B  3.97   1.27
Cognitive Competence  Experimental  A     5.57   0.71       A  5.83   0.93
                      Control: H M  A     5.84   0.75       A  5.98   0.67
                      Control: Reg  B     4.97   0.90       B  4.69   1.20
Value                 Experimental  A     5.74   0.77       A  5.42   0.88
                      Control: H M  B     5.11   0.93       A  5.15   1.14
                      Control: Reg  B     4.94   0.89       B  4.52   1.01
Difficulty            Experimental  A     3.95   0.62       A  4.28   0.72
                      Control: H M  A     4.27   0.63       A  4.44   0.77
                      Control: Reg  A     3.89   0.61       A  3.86   1.00
Interest              Experimental  A     5.36   1.08       A  5.02   1.42
                      Control: H M  B     4.53   1.14       A  4.56   1.43
                      Control: Reg  B     4.19   1.09       B  3.73   1.34
Effort                Experimental  A     6.25   0.94       AB 5.75   0.90
                      Control: H M  B     5.66   0.75       B  5.33   0.69
                      Control: Reg  A     6.31   0.69       A  6.04   1.10

There were significant differences in mean attitudes among the three groups on five of the six components of both the pre-course and post-course SATS. The Experimental and High Math Control Groups had significantly higher mean scores than the Regular Control Group on the Affect and Cognitive Competence components of the pre-course SATS. Many students, especially at the beginning of a semester, see the introductory statistics course as essentially a mathematics course. It is not surprising, then, that students with strong mathematics backgrounds would have a better feeling about learning statistics (Affect) and would be more confident in their abilities to learn the course materials (Cognitive Competence).
By the end of the course, the same patterns emerged, with slightly higher mean scores for the Experimental and High Math Groups and slightly lower mean scores for the Regular Group. The Experimental Group had significantly higher mean scores on the Value and Interest components of the pre-course SATS than either control group. This result is not unexpected given the nature of the Experimental Group: these students were interested in learning statistics and valued the subject enough to enroll in a special section of the course. By the end of the semester, the difference between the High Math Group and the Experimental Group was still present but no longer statistically significant. Interestingly, the Experimental Group and the Regular Control Group had significantly higher mean scores than the High Math Control Group for the amount of Effort they expected to spend on learning statistics and on the course itself. This result may be attributed to the Regular Control Group expecting to work harder on the course, possibly due to a perceived lack of mathematics ability, while the Experimental Group expected to work harder possibly due to enrollment in a special section of the course. Again, by the end of the semester, the difference between the High Math Group and the Experimental Group had narrowed and was no longer statistically significant. The means for Difficulty are consistent with those for Effort, both pre- and post-course, but no statistically significant differences are seen among any of the groups.

Student responses on both the pre-course and post-course versions of the SATS were used to look at potential differences among the three groups in the change in attitudes over the course of the semester. Six ANCOVA models, one for each component of the SATS, were used to look at differences in changes in attitudes, with the pre-course attitude score on the component included as a covariate in the model (a sketch of this type of model follows Table 17). Differences in the change in attitudes among the three groups were statistically significant at the 5% level only for the Affect and Cognitive Competence components. In both cases, the significant difference occurred between the Regular Control Group and the other two groups, and in both cases the sign of the change was negative, indicating a more negative change in attitudes on these two components across the semester for the Regular Control Group.

Finally, student responses on the SATS were used to determine whether attitudes toward statistics were significantly related to overall performance in the course after controlling for group membership and student characteristics. Using the overall measure of student performance on the common exam questions, student mean scores on the six components of the pre-course SATS and the six components of the post-course SATS were added to the list of potential variables that could be included in an ANCOVA model. Only one covariate, post-course Cognitive Competence, was a statistically significant addition to the ANCOVA model (P-value of 0.0006 for the full versus reduced model). The coefficients and corresponding P-values for the SATS ANCOVA model are given in Table 17.

Table 17: Year 2 ANCOVA Model with SATS
Significant Variable          Coefficient   P-value
Group (C-E)                    -2.0865       0.0168
ACT Math                        0.4703       0.0094
Cumulative College GPA          3.2214       0.0052
Post Cognitive Competence       2.2945       0.0006
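A sketch of the change-in-attitude ANCOVA described above, for a single SATS component; the use of Python/statsmodels and the file and column names are assumptions made for illustration, not the authors' code.

    import pandas as pd
    import statsmodels.formula.api as smf

    sats = pd.read_csv("sats.csv")  # assumed: pre_affect, post_affect, group3

    # Change in Affect adjusted for where students started: regress the
    # post minus pre change on group membership, with the pre-course score
    # included as a covariate, as described in the text.
    sats["change"] = sats["post_affect"] - sats["pre_affect"]
    fit = smf.ols("change ~ C(group3) + pre_affect", data=sats).fit()
    print(fit.summary())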
7. Discussion

The laboratory activities we developed had the overall goal of getting students in the first course in statistics to begin to think more like a statistician. By this we mean that students should begin to see data in context and statistical thinking as a way to answer questions about the world around us. An important part of beginning to think like a statistician involves asking questions about the data. Why should we collect data? What data should we collect? How should the data be collected? Students should also see that there is often more than one way to analyze data. Different analyses can lead to different answers, and students should recognize why those answers are different. With this in mind, we developed our laboratory activities to revolve around data collected at the beginning of the course. By revisiting the data in activities throughout the first half of the course, we depart from the usual presentation of a new data set for each new method. We believe this helps students see statistics as a way of learning about the world around them as opposed to a laundry list of techniques and methods.

From the point of view of a "proof of concept," this study was a success. We were able to develop new lab activities, field test them, and make adjustments so as to "do no harm." The preliminary results on assessing student learning are especially encouraging for the topic of regression. Students exposed to the new activities performed better on the assessment of regression than students who were not exposed, even after adjusting for significant covariates. This may be due, in part, to an additional laboratory activity (Activity #4) that tries to get students to think about what might be reasonable estimates for the y-intercept and slope coefficient given the context of the explanatory and response variables.

In analyzing the statistical results above, several patterns emerge. Students with strong math backgrounds performed better on average than the other students. In every case, the Regular Control Group performed no better, and often significantly worse, on average than either the High Math Control Group or the Experimental Group. Particularly at the beginning of the semester, students with high math abilities appear to use their previous mathematics skills to help them learn concepts about distributions. The questions dealing with distributions and numerical summaries require students to have basic skills in statistics along with good numerical literacy. It is not surprising, then, that students with stronger mathematics backgrounds would score higher on these questions.

Except for the Year 1 assessment of distributions and numerical summaries, the Experimental Group scored, on average, the same as or in many cases significantly higher than the High Math Control Group. The performance of students in the special section was noticeably lower on the questions which required students to describe characteristics of a distribution (Questions 1 & 2, Year 1, in the Appendix). Many times, students in the special section left out some of the characteristics of the distribution and thus scored lower on the problem as a whole. This result could be due to the activity or to differences in the emphasis of the instructors when teaching this material.
In the Year 2 study, which controlled for instructor differences and used a revised Activity #2, the mean scores of the High Math Control Group and the Experimental Group on the assessment of distributions and numerical summaries were not statistically different.

In both Year 1 and Year 2 of the study, the Experimental Group performed much better, on average, on the assessment of regression than either of the control groups. Regression concepts, such as slope and intercept, appear many times in high school algebra courses. However, in teaching this material as it relates to regression in statistics, our experience has been that students, even ones with strong math backgrounds, do not easily transfer their previous mathematical knowledge to this new application. The new course materials put a great deal of emphasis on regression and correlation concepts, and the students in the special section of the course were exposed to these concepts repeatedly over the course of several labs. While the amount of class time used to cover these topics was approximately the same in the two sections, the regular section completed only one lab on regression and correlation as opposed to two labs for the special section.

While the differences in performance on the regression assessment carried over into the course group project in the Year 1 study, no significant difference in project performance among the three groups was found in the Year 2 study. The observed mean differences between the three groups were roughly equal in Years 1 and 2. However, the standard deviations of the Experimental and High Math Control Groups were higher, and the number of projects was lower, in Year 2 than in Year 1. These differences led to lower power for the Year 2 study. Also, in studying the Year 2 projects, we found that students had difficulties in understanding the concept of replication, that is, having several experimental units in each treatment group. Many project groups in both the special and the regular section simply conducted the experiment by applying the treatment to each experimental unit several times. The labs on experimentation used in the two sections of Statistics 101 have since been rewritten to help students better understand the idea of replication within an experiment. The effect of these revisions on project scores has not yet been tested. Finally, in discussing these results, we discovered that the two instructors for the special section in this study approach the concept of experimental units differently when teaching the course. This discussion has led to more consistent instruction on this topic.

The ANCOVA results are consistent with the ANOVA results. The significant mean differences in assessment scores between the experimental and control students found in the ANOVA analyses are still present even when controlling for other student characteristics. The variable Cumulative College GPA is significant in all but one ANCOVA model: controlling for other student characteristics and group membership (either Control or Experimental), students with higher GPAs scored higher on average on these assessments. ACT Math is significant in four of the five ANCOVA models from Year 1 and in three of the five ANCOVA models from Year 2; students with higher ACT Math scores tend to do better on these assessment items.

In terms of attitudes toward statistics, as measured by the SATS, there was no significant change in the patterns of means for the three groups from pre- to post-course.
According to the ANCOVA models, only Affect and Cognitive Competence changed, for the worse, for the Regular Control Group. Again, the new activities "did no harm" for the Experimental Group.

8. Summary and Conclusions

The process of developing and field testing the activities presented many challenges, but there were some encouraging results. The design of the study was influenced by many factors. Because the activities were to be tested for the first time, we wanted to present them in a manner that would enable us to address and correct problems immediately. The special section of Stat 101 provided us with a research laboratory where we could do this. Given the poor performance of the students in the special section on the assessment of distributions and numerical summaries in the first year, we were able to make adjustments to Activity #2 and to address the deficiency with additional instruction in the special section. Had we tried this activity on a larger group of students in the regular sections of Stat 101, we might have done harm that could not be corrected as easily.

Field testing the activities in the special section solved a logistical problem but introduced the problem of finding a comparable control group. Having a control group with a similar ACT Math profile addressed this difficulty. Due to constraints on teaching assignments in Year 1 of the study, group membership (either experimental or control) was confounded with course instructor. The special section of the course was taught by an experienced professor, while the regular sections were taught by relatively inexperienced graduate teaching assistants. Differences in teaching styles, methods, and emphasis among the five instructors could have affected student learning. While this limitation is not present in Year 2 of the study (the same instructor taught all students), the regular and special sections of Statistics 101 were structured differently in both years. The enrollments of the special sections were around 20 students, while the enrollments in the regular sections were around 100. The lab sessions for the special sections were led by both the course instructor and a graduate teaching assistant, while the labs for the regular sections were each run by a single graduate teaching assistant.

Now that the development and initial assessment have been performed, and we are satisfied that the new materials will do no harm, we plan to proceed to a randomized clinical trial (Phase II) so as to include a proper randomized control group. Because of the restrictions mentioned earlier, we intend to conduct this trial in a different introductory statistics course where the laboratory is taught by the course instructor rather than a graduate teaching assistant. There are multiple sections of this course that will act as blocks in our design. Students from sections of this introductory statistics course who agree to participate will be randomly assigned to treatment groups: one group will use the activities we developed in lab, and the other group will use the current laboratory activities. With the availability of the ARTIST materials (www.gen.umn.edu/artist), we hope to use these in future assessments so as to be able to compare performance with statistics students at other institutions.
This material is based upon work supported by the National Science Foundation, DUE # 0231322. We would like to thank Dr. Carl Lee of Central Michigan University for evaluating the course projects for Year 1 of this study. We would also like to thank the Associate Editor and two anonymous referees, whose thoughtful comments have greatly improved this paper. Earlier versions of the assessments of the Year 1 and Year 2 studies have appeared in the ASA Proceedings of the Section on Statistical Education.

First Exam Questions

Year 1

1. The table below gives the daily high temperature recorded at the Des Moines Airport for the month of January 2004.

Day  Temp   Day  Temp   Day  Temp   Day  Temp   Day  Temp
 1    52     7    26    13    35    19    16    25    27
 2    60     8    31    14    40    20    23    26    21
 3    34     9    26    15    38    21    46    27    15
 4    21    10    31    16    34    22    19    28     5
 5    11    11    49    17    34    23    53    29     2
 6    16    12    45    18    27    24    25    30     2
                                                31    14

a. Make a stem and leaf plot of the daily high temperatures for the month of January 2004.
b. Describe the distribution of the daily high temperatures for the month of January 2004. (Calculations are not needed for your description.)
c. Which numerical summaries would you report for these data? Do not calculate these values, but briefly explain your choice.

2. Another measure of center is the midrange: midrange = (min + max)/2. For a sample of size 20, which is affected more by a single outlier, the mean or the midrange? Explain your answer.

3. The Environmental Protection Agency (EPA) estimates fuel economy for automobile models. Assume the distribution of fuel economy is normally distributed with a mean of 24.8 mpg and a standard deviation of 6.2 mpg for highway driving.
a. What percent of all cars will have fuel economies greater than 27 mpg?
b. The worst 5% of all cars will have fuel economies less than what amount?

4. First-time freshmen attending Iowa State University in Fall 2003 had a mean ACT Composite score of 24.6 points. The first quartile of ACT Composite scores was 21.7. If the ACT Composite scores follow a normal distribution, what is the standard deviation of these scores?
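For reference, the Normal model items above (Year 1, Questions 3 and 4) can be checked numerically. The snippet below is an added illustration using scipy and is not part of the original assessment materials.

    from scipy import stats

    # Question 3: fuel economy ~ Normal(mean 24.8, sd 6.2) for highway driving.
    print(1 - stats.norm.cdf(27, loc=24.8, scale=6.2))  # a. P(X > 27), about 0.36
    print(stats.norm.ppf(0.05, loc=24.8, scale=6.2))    # b. 5th percentile, about 14.6 mpg

    # Question 4: mean 24.6 and first quartile 21.7; solve 21.7 = 24.6 + z(0.25)*sd.
    z25 = stats.norm.ppf(0.25)                          # about -0.6745
    print((21.7 - 24.6) / z25)                          # sd is about 4.3 points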
Year 2

1. Short Answer
a. A data set contains 10 observations that have a median of 22. One of the 10 observations is changed from 5 to 3. What is the new median?
b. A data set contains 10 observations that have a mean of 22. One of the 10 observations is changed from 10 to 20. What is the new mean?
c. A data set contains 10 observations that have a standard deviation of 3 and an interquartile range of 2.5. Five points are added to each of the 10 observations. What are the new standard deviation and IQR?
d. A data set contains 10 observations that have a mean of 20 and a median of 22. Five points are added to each of the 10 observations. What are the new mean and median?
e. A data set contains 10 observations that have a standard deviation of 0. What is the range of the data?
f. Lilac House, a Bed & Breakfast on Market Street in Mackinac Island, MI, has 5 rooms. The most expensive room is $120 a night and the cheapest room is $60 a night. What are possible prices for the other three rooms so that the median price of a room is $60 and the mean price of a room is $75 a night?

2. Use the JMP output to answer the following questions about the distribution of the number of tornadoes in Iowa per year for the years 1953-2003.
a. Describe the distribution of the number of tornadoes per year in Iowa.
b. Give two reasons why the mean is larger than the median for these data.
c. Which numerical summaries of center and spread are most appropriate for these data? Give the values of these summaries and explain your answer.

3. The tensile strength of paper is measured in pounds per square inch (psi). The tensile strength of the paper produced by a particular company has a normal distribution with a mean of 35 psi.
a. Currently, 25% of the paper produced by the company has a tensile strength less than the minimum requirement of 30 psi. What is the standard deviation of the tensile strength of the paper produced by this company?
b. The company would like to achieve its goal of having only 5% of the paper produced fall below the minimum tensile strength of 30 psi. If the standard deviation of the tensile strength of the paper is 5 psi, what does the mean tensile strength of the paper have to be to achieve the company's goal?

4. (Control Groups) The Mental Development Index (MDI) of the Bayley Scales of Infant Development is a standardized measure used in longitudinal follow-up of high-risk infants. Scores on the MDI have a normal distribution with a mean of 100 and a standard deviation of 16.
a. What proportion of children will have an MDI score above 80?
b. What is the MDI score such that 90% of all children will score above it?

(Experimental Group) Intelligence Quotient (IQ) scores are normally distributed with a mean of 100 points and a standard deviation of 15.
a. What percent of people will have an IQ score above 95?
b. To belong to the group MENSA, a person is required to have an IQ score in the top 2% of the population. What is the minimum IQ score required to belong to MENSA?

Second Exam Questions

Year 1

1. Can consumption of wine help reduce the number of deaths from heart attacks? Yearly wine consumption (liters of alcohol from drinking wine, per person) and yearly death rates from heart disease (deaths per 100,000 people) for 9 randomly selected European countries are obtained. Consult the JMP output entitled Wine Consumption and Heart Disease.
a. What is the least squares regression equation relating Death Rate to Wine Consumption?
b. Give an interpretation of the intercept within the context of the problem.
c. Give an interpretation of the slope within the context of the problem.
d. Use the least squares regression to predict the death rate from heart disease for another European country, France, which has a wine consumption of 9.1 liters per person per year.
e. The actual death rate from heart disease for France is 71 deaths per 100,000 people. What is the residual for France?
f. What is the value of R^2 for these data? Give an interpretation of this value.
g. What is the value of r, the correlation between wine consumption and death rate from heart attack?
h. Describe what you see in the plot of residuals and what this tells you about the relationship between wine consumption and death rate from heart disease.
i. Based on this study and statistical analysis, if people in a country like France were to drink less wine, would the death rate from heart disease go up?

2. Students in an introductory statistics class were asked to design an experiment to determine the relationship between the height of a ramp and the distance a ball rolls. One group decided to use 5 different ramp heights: 6, 9, 12, 15 and 18 inches. For simplicity, the first six trials were completed with the ramp height set at 6 inches; the next six trials were completed with the ramp height set at 9 inches, and so on, until all 30 trials for the experiment were completed.
Each of the 30 trials was conducted with a different randomly chosen ball. The same member of the group let go of a ball at the same place on the ramp each time. To make sure the ramp did not move between trials, the location of the bottom of the ramp was marked and the ramp was reset to the same place before each trial. The same group members were responsible for marking the location at which the ball stopped rolling and then measuring this distance from the end of the ramp.
a. What are the experimental units?
b. What is the response variable?
c. What is the explanatory variable?
d. How many levels of the explanatory variable were used in the experiment?
e. How many trials were conducted at each level of the factor?
f. Name two ways the group used the principle of control in their experiment.
g. Which principle did the group fail to use correctly in their experiment, replication or randomization? Explain what the group did wrong and how you would fix their mistake.

Year 2

1. Use the JMP output to answer the following questions. The data are the lengths and widths, in centimeters, of a sample of butter clams.
a. What is the response variable in this regression?
b. What is the least squares regression equation relating the length of butter clams to their width?
c. Give an interpretation of the slope within the context of the problem.
d. Use the least squares regression to predict the length of a butter clam that has a width of 7 cm.
e. The data include a clam whose width is 7 cm and length is 9.5 cm. Find the residual for this clam.
f. What is the value of R^2 for these data? Give an interpretation of this value in the context of the problem.
g. What is the value of r, the correlation between the width and length of a butter clam?
h. Describe the residual plot. Do you see any problems in the residual plot? If yes, what effect will these problems have on your linear regression of the width and length of a butter clam?

2. An ultramarathon is a foot race that is longer than 26.2 miles. Doctors have found that people who run an ultramarathon are at increased risk for developing respiratory infections after the race. Doctors believe that taking vitamin C for the 10 days before and the 10 days after the race would reduce the incidence of respiratory infections in ultramarathon runners. To test this hypothesis, 20 people were selected to receive either vitamin C or a placebo. Ten days after the race, the two groups were studied to determine how many of the runners in each group had developed a respiratory infection.
a. Why is this study an experiment?
b. What are the experimental units?
c. What is the response variable?
d. What is the factor?
e. What are the treatments?
f. Name one thing the experimenter should use for control in this experiment.

Group Course Project Description

The focus of this project is on designing an experiment and using correlation and regression to analyze the resulting data. Your experiment will involve a paper helicopter. A prototype of a paper helicopter is provided. There are many ways to evaluate the flight of the paper helicopter, and there are many factors that may affect that flight. The object of this project is to investigate the relationship between a single factor you can manipulate that may affect the flight of the paper helicopter and a single measurement of some characteristic of the helicopter's flight. To do so, you will:
1. Phrase a hypothesis about the relationship between a numerical characteristic you can manipulate on the helicopter and a numerical characteristic describing the flight of the helicopter.
2. Identify your explanatory and response variables for the experiment. Indicate how you will measure the response variable.
3. Decide how you are going to design an experiment to investigate your hypothesis. You must have at least 5 levels of your factor.
4. Run the experiment and collect the data. This will require you to construct paper helicopters and fly them. You can make copies of the prototype helicopter. You must have a minimum of 30 data points.
5. Analyze your data. Remember, the focus of the statistical analysis is on correlation and regression. Turning in computer output is not enough; you must interpret the results of any analysis you perform.
6. Write a final report. Your report should include sections on your hypothesis and the explanatory and response variables, the design of your experiment, the data, the statistical analysis and its interpretation, and your conclusion stating what you have learned about the hypothesis from your data.

Grades will be determined on:
1. How well you used the ideas of Chapter 13 to collect your data. [20 pts]
2. Relevance and completeness of the summary of the data. [20 pts]
3. Appropriateness of your conclusions. [5 pts]
4. Clarity of the final report. [5 pts]

References

Anderson-Cook, C. M. and Dorai-Raj, S. (2001), "An Active Learning In-Class Demonstration of Good Experimental Designs," Journal of Statistics Education [Online], 9(1), www.amstat.org/publications/jse
Brooks, J. G. and Brooks, M. G. (1993), In Search of Understanding: The Case for Constructivist Classrooms, Association for Supervision and Curriculum Development, Alexandria, Virginia.
Cobb, G. W. (1993), "Reconsidering Statistics Education: A National Science Foundation Conference," Journal of Statistics Education [Online], 1(1), www.amstat.org/publications/jse/v1n1/cobb.html
Froelich, A. G. and Stephenson, W. R. (2008), "How Much Does an M&M Weigh?" submitted for publication.
Keeler, C. M. and Steinhorst, R. K. (1995), "Using Small Groups to Promote Active Learning in the Introductory Statistics Course: A Report from the Field," Journal of Statistics Education [Online], 3(2), www.amstat.org/publications/jse/v3n2/keeler.html
Kvam, P. H. (2000), "The Effect of Active Learning Methods on Student Retention in Engineering Statistics," The American Statistician, 54(2), 136-140.
Mvududu, N. (2003), "A Cross-Cultural Study of the Connection Between Students' Attitudes Toward Statistics and the Use of Constructivist Strategies in the Course," Journal of Statistics Education [Online], 11(3), www.amstat.org/publications/jse/v11n3/mvududu.html
Quinn, R. J. and Wiest, L. R. (1998), "A Constructivist Approach to Teaching Permutations and Combinations," Teaching Statistics, 20, 75-77.
Schau, C., Stevens, J., Dauphinee, T. L., and Del Vecchio, A. (1995), "The Development and Validation of the Survey of Attitudes Toward Statistics," Educational and Psychological Measurement, 55(5).
Steinhorst, R. K. and Keeler, C. M. (1995), "Developing Material for Introductory Statistics Courses from a Conceptual, Active Learning Viewpoint," Journal of Statistics Education [Online], 3(3).
Weinberg, S. L. and Abramowitz, S. K. (2000), "Making General Principles Come Alive in the Classroom Using an Active Case Studies Approach," Journal of Statistics Education [Online], 8(2).
Amy G. Froelich, Department of Statistics, Iowa State University, Ames, IA 50011-1210
W. Robert Stephenson, Department of Statistics, Iowa State University, Ames, IA 50011-1210
William M. Duckworth, Department of Decision Sciences, Creighton University, Omaha, NE 68178
{"url":"http://www.amstat.org/publications/jse/v16n2/froelich.html","timestamp":"2014-04-21T04:48:01Z","content_type":null,"content_length":"103280","record_id":"<urn:uuid:078f925b-6fe0-4878-b987-b9f13be43714>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
need some clarification on integrating trig functions

February 25th 2008, 12:45 PM, #1 (Junior Member):

hi peeps.. there is something bothering me about a very simple integral where two easily justified methods just barely don't reach the same result. Suppose we are given the integral of 2cos(x)sin(x). One method, using the trig identities (double angle formula), is to first rewrite 2cos(x)sin(x) as sin(2x) and then easily solve the integral of that. By doing that we see that

integral[2cos(x)sin(x) dx] = integral[sin(2x) dx] = -(1/2)cos(2x)

OK, now here comes the method I had chosen to use originally: all I did was take the 2 outside of the integral and substitute u = sin(x) in order to eliminate cos(x), as shown below.

Preliminary steps: u = sin(x), then du = cos(x) dx, so dx = du/cos(x).

integral[2cos(x)sin(x) dx] = 2 integral[cos(x)sin(x) dx] = 2 integral[cos(x) u du/cos(x)] = 2 integral[u du] = 2[(1/2)u^2] = sin(x)^2

But sin(x)^2 is 1/2 - (1/2)cos(2x), and that does not match the -(1/2)cos(2x) obtained above. It's a difference of just 1/2! Did I do anything wrong? I was assuming that both ways should reach the same result.

February 25th 2008, 01:00 PM, #2 (Senior Member): In both cases you left out + a constant. Should I say more?

February 25th 2008, 01:04 PM, #3 (original poster): ohhh... so that was the misconception... so that extra half could've been regarded as part of a constant? so you're telling me that 1/2 - (1/2)cos(2x) + C and -(1/2)cos(2x) + C are the same things?? plzzz reply
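Spelling the resolution out with the double angle identity:

$\sin^2 x = \frac{1 - \cos 2x}{2} = \frac{1}{2} - \frac{1}{2}\cos 2x$

so

$\sin^2 x + C_1 = -\frac{1}{2}\cos 2x + \left(C_1 + \frac{1}{2}\right) = -\frac{1}{2}\cos 2x + C_2.$

The two methods produce antiderivatives that differ by the constant $\frac{1}{2}$, which is absorbed into the arbitrary constant of integration; both describe the same family of antiderivatives.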
{"url":"http://mathhelpforum.com/trigonometry/29110-need-some-calrification-integrating-trig-functions.html","timestamp":"2014-04-16T04:29:13Z","content_type":null,"content_length":"38052","record_id":"<urn:uuid:8760de43-6443-4591-b1ec-1e61fc9e112c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
f and conjugate are holomorphic

February 24th 2010, 11:19 AM, #1:

I have to show that: if $f$ and $\overline{f}$ are holomorphic on an open connected set $U$, then $f$ is identically constant.

We can write $f = u(x,y) + iv(x,y)$. With the Cauchy-Riemann equations it is easily shown that

$\frac{\partial u}{\partial x} = \frac{\partial u}{\partial y} = \frac{\partial v}{\partial x} = \frac{\partial v}{\partial y} = 0$

However, to show that $f$ is identically constant I don't see where I need to use that $U$ is an open set. I understand that the condition "connected" is strictly necessary, otherwise $f$ may take on different constant values on different components. But does $U$ necessarily need to be an open set? Is "containing an open ball" not enough?

February 26th 2010, 12:16 AM, #2 (quoting the above):

Requiring it to be open has more to do with the fact that a function $f:U\rightarrow \mathbb{C}$ is holomorphic at $z$ if there exists an open ball around $z$ such that $f$ is complex-differentiable there. So by the very definition, for $f$ to be holomorphic in $U$ each point of $U$ must have a neighbourhood contained in $U$ (too much $U$!).
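A standard way to finish the argument, filling in the step the original poster asked about (an added outline, not from the thread): fix $z_0 \in U$ and let $S = \{z \in U : f(z) = f(z_0)\}$. Given $z \in S$, openness of $U$ provides a ball $B(z,r) \subseteq U$; since $\frac{\partial u}{\partial x} = \frac{\partial u}{\partial y} = \frac{\partial v}{\partial x} = \frac{\partial v}{\partial y} = 0$ on $B(z,r)$, applying the mean value theorem to $u$ and $v$ along horizontal and vertical segments shows $f$ is constant on $B(z,r)$, so $B(z,r) \subseteq S$ and $S$ is open. $S$ is closed in $U$ because $f$ is continuous. As $S$ is nonempty, open, and closed in the connected set $U$, we get $S = U$, that is, $f$ is constant. Openness is thus used twice: once so that holomorphy (and hence the Cauchy-Riemann equations) makes sense at every point of $U$, and once to produce the balls on which $f$ is locally constant.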
{"url":"http://mathhelpforum.com/differential-geometry/130583-f-conjugate-holomorphic.html","timestamp":"2014-04-20T18:01:28Z","content_type":null,"content_length":"38631","record_id":"<urn:uuid:703be1e7-6e11-4285-a31a-89e71a8e8451>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysis and interpretation of cost data in randomised controlled trials: review of published studies

BMJ. Oct 31, 1998; 317(7167): 1195–1200.

Objective: To review critically the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study.

Design: Survey of published randomised trials including an economic evaluation with cost values suitable for statistical analysis; 45 such trials published in 1995 were identified from Medline.

Main outcome measures: The use of statistical methods for cost data was assessed in terms of the descriptive statistics reported, use of statistical inference, and whether the reported conclusions were justified.

Results: Although all 45 trials reviewed apparently had cost data for each patient, only 9 (20%) reported adequate measures of variability for these data and only 25 (56%) gave results of statistical tests or a measure of precision for the comparison of costs between the randomised groups. Only 16 (36%) of the articles gave conclusions which were justified on the basis of results presented in the paper. No paper reported sample size calculations for costs.

Conclusions: The analysis and interpretation of cost data from published trials reveal a lack of statistical awareness. Strong and potentially misleading conclusions about the relative costs of alternative therapies have often been reported in the absence of supporting statistical evidence. Improvements in the analysis and reporting of health economic assessments are urgently required. Health economic guidelines need to be revised to incorporate more detailed statistical advice.

Key messages

• Health economic evaluations required for important healthcare policy decisions are often carried out in randomised controlled trials
• A review of such published economic evaluations assessed whether statistical methods for cost outcomes have been appropriately used and interpreted
• Few publications presented adequate descriptive information for costs or performed appropriate statistical analyses
• In at least two thirds of the papers, the main conclusions regarding costs were not justified
• The analysis and reporting of health economic assessments within randomised controlled trials urgently need improving

With the continuing development of new treatments and medical technologies, health economic evaluations have become increasingly important. To identify cost effective care, providers, purchasers, and policy makers need reliable information about the costs as well as the clinical effectiveness of alternative treatments. For clinical outcomes, randomised controlled trials are the standard and accepted approach for evaluating interventions.
This design provides the most scientifically rigorous methodology and avoids the biases which limit the usefulness of alternative non-randomised designs.^1 Pragmatic randomised controlled trials provide a suitable environment not only for assessing clinical effectiveness but also for comparing costs,^2-4 and an increasingly large amount of economic data is being collected within trials.^5,6

The costs of competing treatments are usually estimated using information about the quantities of resources used, that is, the set of cost generating items which make up the treatment and its consequences. For example, the resources used in a surgical operation may include the staff time involved, the consumables used, and the length of a subsequent inpatient stay. To estimate the cost of treatment, this resource use information is combined with unit cost estimates, which give a fixed monetary value to each cost generating item. The total cost of treatment is then the weighted sum of the quantities of resources used, where the weights are the unit costs (illustrated in the sketch below).

The cost associated with a treatment may be estimated as a deterministic (fixed) value by costing a typical treatment protocol. This approach requires assumptions about the usual quantities of healthcare resources that would be used during treatment. For the surgical procedure example, this would involve assumptions about the grades of staff present during the operation, the typical time taken, consumables used, and length of inpatient stay. Carrying out an economic evaluation alongside a randomised controlled trial, however, allows detailed information to be collected about the quantities of resources used by each patient in the study: a record would be kept for every patient of the actual staff present, time taken, consumables used, and inpatient stay. Such information allows an estimate of the cost of treatment to be obtained for each individual patient, producing a set of cost values, which will be referred to as "patient specific" cost data. Availability of patient specific cost data not only allows the use of statistical inference as a basis for drawing conclusions about costs but reduces the extent to which the comparison between randomised groups is based on assumptions about resource use. In addition, it allows the relation between costs and other factors, such as patient characteristics and clinical outcomes, to be investigated.

In trials where patient specific cost data are available, the comparison of costs between treatment groups is used to make inferences about the true cost difference in the population from which the trial sample was drawn. The evidence from the sample needs to be assessed using statistical analysis. Although several reviews of economic evaluations have been undertaken,^5,7-15 to date none has concentrated specifically on statistical aspects of the analysis of patient specific cost data from randomised controlled trials. We therefore focused on this issue, aiming to assess the use of statistical methods in this context and whether the conclusions drawn for costs are properly justified.
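The "weighted sum" costing described above amounts to a simple per-patient computation. The sketch below is illustrative only; the resource items, unit costs, and quantities are invented.

    import numpy as np

    # Unit costs give a fixed monetary value to each cost generating item,
    # e.g. staff hour, consumables pack, drug dose, inpatient day (invented).
    unit_costs = np.array([120.0, 35.0, 8.5, 310.0])

    # Patient specific resource use: quantities recorded for each patient.
    quantities = np.array([
        [2, 1, 4, 3],   # patient 1
        [3, 2, 6, 5],   # patient 2
    ])

    # Total cost per patient = weighted sum of quantities, weights = unit costs.
    patient_costs = quantities @ unit_costs
    print(patient_costs)   # one "patient specific" cost value per patient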
Methods

Selection of study articles

Published papers included in this review are those which reported on randomised trials where patient specific cost data were available, on which statistical methods were or could have been used. The search was limited to publications in English, involving human subjects, and published during 1995, and was carried out using the Medline database as of April 1997. The search required at least one of "trial" or "intervention(s)" and at least one of "health economic(s)," "economic evaluation," or "cost(s)" in the title, abstract, or MeSH headings. The search identified 872 eligible articles. Papers were excluded on the basis of their abstracts if it was clear that they were not reporting the results of a randomised trial. The full articles were read for 111 papers. Where patient specific cost data had not been collected, or where the information about costing methods was insufficient to judge their suitability, the papers were excluded. For unclear cases, the articles were reread by a second reviewer and agreement reached as to their suitability. In this way 45 articles were finally included in the review.

Information collected

A data collection form was developed and was completed on reading each article in the review. This included information about the collection and calculation of costs, sample size calculations cited, summary measures reported, and statistical methods used. The final part of the assessment judged the appropriateness of any inferential conclusions drawn about costs, given the statistical results presented in the paper. These judgments did not involve consideration of design issues or methods of analysis but were simply based on the cost estimates and any P values or confidence intervals presented.

Initial assessments for all papers were carried out by one assessor (JAB). Most of the information collected involved recording what was and was not explicitly stated in the paper, so that little subjective judgment was required. To examine reproducibility for these items, a second investigator (SGT), unaware of the initial assessments, independently assessed a random sample of nine of the 45 trials. Agreement was complete for the items reported in this paper. In the case of the potentially more subjective judgments about the appropriateness of the conclusions drawn, all 45 articles were read and categorised independently by both reviewers. There was only one disagreement, caused by a misreading of the paper by one reviewer. In five other cases, discussion was needed to determine the classification because the reporting of results and conclusions was unclear.

Description of papers

The 45 papers identified came from both specialist and more general journals, and covered a wide variety of clinical areas including cancer, heart disease, nursing, and psychiatry. About half (24; 53%) were primary publications for the trial, which usually included both clinical and economic results. In many of these, the economic component was rather small and lacking in detail. The remaining papers (21; 47%) were "follow on" papers to the main effectiveness analyses, which reported cost results either alone or in combination with other outcomes of interest, such as quality of life. The vast majority of the studies were designed as pragmatic trials, directly relevant to clinical practice; the economic analysis thus had direct policy implications. The economic data in these trials either came from resource use information, using some assumed unit cost values, or from data on charges for health care. The number of resource items included in the calculation of total costs varied considerably; some used quite detailed elements while others had very few. Patient specific information was sometimes only available for a limited number of resources, while fixed cost estimates were assumed for others.
Sample size calculations
Sample size calculations were mentioned in only seven (16%) of the 45 articles in the review. None were for economic outcomes; six were based on clinical endpoints, and in the remaining case it was unclear which outcomes were being considered. In the case of health economic assessments published separately from the main effectiveness analyses, sample size calculations for clinical outcomes may have been reported elsewhere. For 10 papers (22%), authors reported using a subsample of the original randomised trial for the economic analysis. Various reasons were given for this, including selection of a subset to minimise the burden on patients in the study; interest in the relative costs of only two arms of a three arm trial; and inclusion of only some centres from a multicentre trial, either because the others refused to be involved in the economic evaluation or in order to reduce data collection efforts.

Descriptive statistics
One trial in the review, which compared four three day antimicrobial regimens for treatment of acute cystitis, found mean costs (US$) per patient of $114 for patients treated with trimethoprim-sulfamethoxazole, $131 for amoxicillin, $155 for nitrofurantoin, and $155 for cefadroxil.^16 No information on the variability or ranges of costs per patient was given, so it is impossible to judge to what extent the average presented was typical for the patients studied. In a trial of whether to re-evaluate patients receiving oxygen at home at intervals of two months or six months, the mean cost and standard deviation over one year were presented for each group in the trial.^17 For example, in the six month re-evaluation group the standard deviation was larger than the mean ($11 ...).

Reporting of descriptive information is an important part of a statistical investigation and should precede analysis. For cost data, the crucial information is the arithmetic mean—that is, the simple average cost. This is because policy makers, purchasers, and providers need to know the total cost of implementing the treatment. This total cost is estimated as the arithmetic mean cost in the trial, multiplied by the number of patients to be treated. Measures other than the arithmetic mean (such as the median, mode, or geometric mean) cannot provide an estimate of total cost. The fact that the distribution of costs is often highly skewed does not imply that the use of the arithmetic mean is inappropriate. However, describing the variability in costs between individuals in the trial, and any peculiarities in the shape of the distribution such as skewness, is also important.

The figure shows the percentage of all the papers reviewed reporting various summary measures for the cost data in each randomised group. Overall 42 papers (93%) reported measures of location, which were given as arithmetic mean or total costs in all but two articles. Six papers reported other measures of location along with the mean, five giving medians and one presenting modes in each group. Of the 45 papers, 20 (44%) reported one or more measures which described the spread or range of the cost data across individuals in each randomised group. As shown in the figure, standard deviations were used to indicate variability between individuals in nine (20%) of the papers. The other 11 papers (24%) gave measures that do not directly or fully describe the variability in the cost data. Three gave standard errors and three gave confidence intervals for the means in each group.
Two further papers reported the maximum and minimum cost values only, and the remaining three presented a mean plus or minus some quantity "X", where the authors failed to state explicitly the meaning of this quantity. Some papers had indications that the authors were aware of the likely non-normal distribution of their cost data. For nine (20%) this was explicitly stated, and three of these represented the distribution graphically. Seven further papers (16%) indicated some awareness about distributional problems either by reporting median cost (rather than or in addition to the mean) or by using non-parametric tests or log transformations when analysing the cost data.

Inferential statistics
Inferences made about costs need to be supported by a measure of precision (standard error or confidence interval) of the difference in mean costs between randomised groups, or at least a P value. For example, a study of induction of labour versus serial antenatal monitoring reported that the mean cost (Canadian $) in the monitoring group was higher by $193 (95% confidence interval $133 to $252, P<0.0001).^18 In contrast, a study of midwife team versus routine care during pregnancy and birth simply reported that the average cost (Australian $) per delivery was "$3324 for team care women and $3475 for routine care women, resulting in a saving of $151, or a 4.5% reduction in costs."^19 In the latter example, no inference is justified since the precision of these cited quantities is unknown.

The inferences about the average cost difference need to be based on a comparison of arithmetic means as, for example, given by the t test. Analyses of log transformed costs address differences in geometric means, while non-parametric tests address differences in both median and shape of the cost distribution between groups. These analyses do not consider the question of interest about the arithmetic mean cost difference. There may, however, be legitimate concern over the validity of the t test, analysis of variance, and other standard methods of comparing arithmetic means. These methods all require assumptions of normality which may be violated by the often highly skewed distribution of cost data, particularly when sample sizes are small.

Overall, only 25 of the 45 articles (56%) reported results of statistical tests or a measure of precision for the comparison of costs between the randomised groups. Only five (11%) gave a measure of precision for the estimated difference in costs (figure). All were reported as confidence intervals calculated using methods which assume normality. Twenty-four of the studies (53%) reported a P value for a comparison of costs between the treatment groups (figure). In nine (20%) of these, P values were obtained from a two sample t test or from analysis of variance comparing arithmetic mean costs across more than two groups. In one paper, a t test was carried out on log transformed costs. Non-parametric tests were used in eight papers (18%). Two papers reported results from regression analyses only; four papers reporting P values failed to state which test had been used, one of which reported the P value in the abstract of the report only. Three of the reviewed papers (7%) included more detailed analyses adjusting for predictors of costs by using multiple regression models of untransformed or log transformed costs.
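To make the kind of analysis being asked for here concrete, the sketch below works through a comparison of arithmetic mean costs between two randomised groups: descriptive summaries first, then a normal-theory confidence interval for the mean difference, and finally a bootstrap percentile interval of the sort discussed later in the paper as a way of avoiding normality assumptions. The cost vectors are simulated, right-skewed data, not figures from any of the reviewed trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated, right-skewed patient specific costs for two trial arms (illustrative only).
cost_a = rng.lognormal(mean=7.0, sigma=0.9, size=120)
cost_b = rng.lognormal(mean=7.2, sigma=0.9, size=120)

def describe(costs):
    """Descriptive summaries: arithmetic mean (the policy-relevant quantity),
    plus median, standard deviation, and interquartile range for the spread."""
    q1, q3 = np.percentile(costs, [25, 75])
    return {"mean": costs.mean(), "median": np.median(costs),
            "sd": costs.std(ddof=1), "iqr": (q1, q3)}

print(describe(cost_a))
print(describe(cost_b))

# Difference in arithmetic means with a normal-theory 95% confidence interval.
diff = cost_b.mean() - cost_a.mean()
se = np.sqrt(cost_a.var(ddof=1) / len(cost_a) + cost_b.var(ddof=1) / len(cost_b))
print(f"difference in means: {diff:.0f}, 95% CI {diff - 1.96 * se:.0f} to {diff + 1.96 * se:.0f}")
# (1.96 is the normal approximation; a t quantile would be used for small samples.)

# Bootstrap percentile interval for the same difference, making no normality assumption.
boot = [rng.choice(cost_b, len(cost_b)).mean() - rng.choice(cost_a, len(cost_a)).mean()
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI: {lo:.0f} to {hi:.0f}")
```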
Justification for conclusions
For the study of induction of labour versus serial antenatal monitoring mentioned at the beginning of the previous section, the authors concluded in the abstract that "a policy of managing post-term pregnancy through induction of labour...results in lower cost."^18 This is an inferential conclusion that could be extrapolated from the trial results to future policy, and it is justified in terms of the confidence interval and P value for the mean cost difference presented. The trial of midwife team care versus routine care concluded that "the team approach...was associated with a reduction in costs per woman."^19 This would also be likely to be interpreted as an inferential statement by readers. However, it was based simply on a comparison of mean costs, without any information on the precision of the mean cost difference observed. It is not a justified conclusion.

The table summarises results of these assessments for all the papers in the review. All 45 papers presented apparently inferential conclusions regarding costs. The justification of a conclusion was judged in a narrow sense, in terms of whether supporting inferential statistics were cited; inadequacies of design, problems of data (such as missing values), or inappropriate analysis (such as non-parametric tests) were not considered. Hence a lenient view of the "justification for conclusions" was taken. Despite this, only 16 (36%) were judged to have been justified. This finding was identical for conclusions presented in the abstract or the main text. In a substantial number of cases (20) no statistical analysis was provided, and in all these cases the conclusions were not justified because they were apparently based simply on an eyeball comparison of the mean costs observed in each group. All of these papers claimed a difference in costs. Among the studies that undertook statistical analysis, the main reason that conclusions were not justified was that a claim of no difference in cost was made on the basis of a non-significant test result, without providing the necessary confidence interval for the cost difference.

Missing data, cost effectiveness, and sensitivity analyses
Information concerning the completeness of the cost data was given for only 24 studies (53%). Of these, three mentioned that their data were complete and 21 stated that some data were missing, the amount ranging up to 35% of the sample. Eleven papers apparently excluded subjects with missing cost data from the analysis without any further investigation. Five others compared characteristics of this group of patients with those whose data were complete, in order to identify any obvious biases. Four further papers dealt with missing data in other ways: one used a sensitivity analysis, another imputed values, and two used longitudinal analyses which do not require the data to be complete at all time points.

Seven (16%) of the trials reported some measure of cost effectiveness—for example, cost per quality adjusted life year, cost per year of life gained, or cost per unit change in some clinical measurement. None of these papers carried out statistical tests for the cost effectiveness estimates or used confidence intervals to report on their precision. Two, however, used the confidence intervals of the effects, and in one case costs, to consider extreme cases of the cost effectiveness ratio. Only 11 (24%) of the 45 studies reported having carried out sensitivity analyses, and in five cases these were for the cost effectiveness results.
The sensitivity analyses investigated robustness to various assumptions including unit costs, cost to charge ratios, assumed resource use values, and discount rates.

Randomised controlled trials are not always the appropriate vehicle to address economic questions,^20,21 and there is an important role for other methods of economic evaluation, such as modelling.^22 When economic evaluations are carried out alongside randomised controlled trials, however, the cost data collected should be interpreted appropriately. This review has revealed major deficiencies in the way cost data in randomised controlled trials are summarised and analysed.

Descriptive statistics
In providing descriptive information for continuous data, such as costs, recommended practice^23 would be to present a measure of location (for example, mean or median) and variability (for example, standard deviation or interquartile range) and to mention any peculiarities about the shape of the distribution (such as skewness). Cost data are typically highly skewed, because a few patients incur particularly high costs. The arithmetic mean is then larger than the median, sometimes substantially, because it is more influenced by these high costs. Although the median can be interpreted as the most "typical" cost for individual subjects, since half of them have costs below this value and half above, it is the arithmetic mean cost that is important for policy decisions. It is only the arithmetic mean—not other measures such as the median, mode, or geometric mean—that, when multiplied by the number of patients to be treated, estimates the total cost that would be incurred if the treatment were implemented. Although these other measures are commonly used for skewed data in other circumstances, the more informative arithmetic mean should always be reported for costs. This was done in nearly all the papers in our review, but statistical comparisons often used methods that did not directly compare these arithmetic means.

Summarising the distribution of costs observed in a trial can be problematic unless there is space to show the distribution as a diagram. Because of skewness, the standard deviation alone is not an ideal way to represent the spread of costs between individuals. The observations lying within two standard deviations of the mean will cover about 95% of a distribution of values only if the distribution is approximately normal. Often, for cost data, the value two standard deviations below the mean is an impossible negative quantity. It is therefore also useful to present the interquartile range, a range containing the central 50% of the cost data, or a 95% reference interval, a range that excludes 2.5% of the cost data at each extreme. The full range (minimum to maximum) is less useful because it is totally dependent on just the two most extreme observations. Standard errors and confidence intervals reflect the precision of the estimated mean and are not appropriate ways of describing how the costs vary between individuals.^23 Most papers in our review did not describe the variability of their cost data at all, and many of the others gave only unsatisfactory summary information.

Inferential statistics
The interpretation of patient specific cost data in randomised controlled trials needs to be guided by formal methods of statistical inference—but only half of the papers reviewed presented a P value or confidence interval for cost comparisons.
Conclusions regarding the evidence about cost differences cannot reliably be made without such statistical analysis. Among the papers that used statistical analysis, half used inappropriate methods (such as the non-parametric Mann-Whitney U test, or analysis of log transformed costs) that do not compare arithmetic mean costs. Only 11% of the papers presented a confidence interval for the average cost difference, although the use of confidence intervals has repeatedly been recommended in statistical guidelines.^23,24

The review focused on randomised controlled trials, since the rigour of this design might be expected to be accompanied by rigour in statistical analysis and reporting. However, overall, only 36% of conclusions drawn were justified. This is a lenient view, since it takes no account of problems in design and execution of trials or the use of inappropriate methods of statistical analysis. Reporting inappropriate conclusions for either clinical or economic outcomes is potentially misleading and unethical.^25 Economic outcomes should be evaluated with the same statistical standards that are now expected for clinical outcomes. The tendency to make strong conclusions based simply on observed mean values of costs is all the more flawed when small samples have been used for the economic evaluation.

Sample size calculations
The often large variability in costs between individuals emphasises the need to perform economic evaluations on sufficiently large samples so that precise conclusions can be drawn. The rationale for sample size calculations (having adequate power for the planned analyses and having a predetermined stopping point) is as relevant to cost outcomes as to clinical outcomes. Although cost outcomes are often regarded as "secondary," they are still important. There may be practical reasons to base the health economic evaluation on a subset of the whole trial, but statistical justification is lacking. The use of subsets and the complete absence of sample size calculations reported for costs in this review indicate the large scope for improvement in the rational planning of economic evaluations.

Completeness and relevance of the review
The review was based on papers published in 1995 accessed through Medline. Limiting the search to journals on a single database means that this may not be an exhaustive review of all relevant papers. The reporting standards of journals cited by Medline, however, are likely to be better than those of non-Medline journals, so the review is, if anything, likely to present an overly optimistic view of the use of statistical methods in economic evaluations. The results of a similar search using the Cochrane Controlled Trials Register included 43 of the 45 papers in this review (the other two were both follow on papers to a main clinical effectiveness publication, and in both cases only the clinical paper appeared in the Cochrane register). The Medline search may not have identified absolutely all randomised controlled trials with patient specific costs.^26 Some trials were excluded from the review because it was not clear from their methods whether patient specific cost data had been collected; however, these trials presented no measures of variability or statistical inferences for costs. Standards may have improved since 1995 in response to general guidelines,^27 although these currently contain little recommendation regarding statistical aspects of economic evaluations.
A recent study evaluating the BMJ guidelines^27 failed to show that these had had any impact on the general quality of economic evaluations submitted or published.^28 In addition, experience with statistical guidelines indicates that the rate of response to these is generally slow,^29,30 since precedent is a powerful inhibitor of change.

Statistical complexities
The statistical issues in analysing cost data are not, however, all straightforward,^31 in particular how to compare arithmetic mean costs in very skewed data. Standard methods for analysing arithmetic means such as the t test are known to be fairly robust to non-normality. This robustness, however, depends on several features of the data, in particular sample size and severity of skewness. There are no set criteria by which to judge whether the analysis will be robust for a particular dataset, and relying on standard methods could produce misleading results, especially if sample sizes are small. Extending simple comparisons to adjust for baseline variables may exacerbate the problems. Both simple and more complex analyses of costs can, however, be carried out or checked using bootstrapping.^32 This approach allows a comparison of arithmetic means without making any assumptions about the cost distribution. Although some examples of the use of bootstrapping for cost data have recently been published,^33 this method is not yet routinely used by medical researchers. Other statistical issues in the analysis of costs include choosing an appropriate sample size for the evaluation, placing confidence intervals on cost effectiveness ratios,^34 handling missing data, and providing a rational strategy for sensitivity analyses. All of these are complicated issues that are in need of further clarification.

This review has shown that there is an urgent need to improve the statistical analysis and interpretation of cost data in randomised controlled trials. The BMJ guidelines and other health economics guidelines need to be revised to incorporate more detailed statistical advice for researchers, editors, and reviewers when dealing with patient specific cost data from trials. These guidelines not only need to encourage the use of statistical inference but need to provide advice on dealing with some of the more complex issues mentioned above.

Figure: Proportion of 45 papers reporting descriptive statistics and inferential statistics for costs. Statistical tests were t test or analysis of variance (parametric); Mann-Whitney U test or Kruskal-Wallis test (non-parametric); 2 regression; 4 unspecified.

Table: Classification of conclusions regarding costs. Values are numbers of papers with justified conclusions out of the total number in each category (percentages).

Funding: JB was funded by North Thames NHS Executive; ST was funded by HEFC London University. Competing interests: None declared.

References
1. Bradford Hill A. Observation and experiment. N Engl J Med. 1953;248:995–1001.
2. Drummond MF, Stoddart GL. Economic analysis and clinical trials. Controlled Clin Trials. 1984;5:115–128.
3. Drummond MF, Davies L. Economic analysis alongside clinical trials. Revisiting the methodological issues. Int J Technol Assess Health Care. 1991;7:561–573.
4. Thompson SG, Barber JA. From efficacy to cost-effectiveness. Lancet. 1998;350:1781.
5. Adams ME, McCall NT, Gray DT, Orza MJ, Chalmers TC. Economic analysis in randomized control trials. Med Care. 1992;30:231–243.
6. Elixhauser A, Luce BR, Taylor WR, Reblando J. Health care CBA/CEA: an update on the growth and composition of the literature. Med Care. 1993;31(suppl):JS1–211.
7. Jefferson T, Demicheli V. Is vaccination against hepatitis B efficient? A review of world literature. Health Econ. 1994;3:25–37.
8. Briggs A, Sculpher M. Sensitivity analysis in economic evaluation: a review of published studies. Health Econ. 1995;4:355–371.
9. Ancona-Berk VA, Chalmers TC. Cost and efficacy of the substitution of ambulatory for inpatient care. N Engl J Med. 1981;304:393–397.
10. Evers SMAA, van Wijk AS, Ament AJHA. Economic evaluation of mental health care interventions. A review. Health Econ. 1997;6:161–177.
11. Mason J, Drummond M. Reporting guidelines for economic studies. Health Econ. 1995;4:85–94.
12. Udvarhelyi IS, Colditz GA, Rai A, Epstein AM. Cost-effectiveness and cost-benefit analyses in the medical literature. Are the methods being used correctly? Ann Intern Med. 1992;116:238–244.
13. Ganiats TG, Wong AF. Evaluation of cost-effectiveness research: a survey of recent publications. Fam Med. 1991;23:457–462.
14. Gerard K. Cost-utility in practice: a policy maker's guide to the state of the art. Health Policy. 1992;21:249–279.
15. Zhou X, Melfi CA, Hui SL. Methods for comparison of cost data. Ann Intern Med. 1997;127:752–756.
16. Hooton TM, Winter C, Tiu F, Stamm WE. Randomized comparative trial and cost analysis of 3-day antimicrobial regimens for treatment of acute cystitis in women. JAMA. 1995;273:41–45.
17. Cottrell JJ, Openbrier D, Lave JR, Paul C, Garland JL. Home oxygen therapy. A comparison of 2- vs 6-month patient reevaluation. Chest. 1995;107:358–361.
18. Goeree R, Hannah M, Hewson S. Cost-effectiveness of induction of labour versus serial antenatal monitoring in the Canadian Multicentre Postterm Pregnancy Trial. Can Med Assoc J. 1995;152:1445–1450.
19. Rowley MJ, Hensley MJ, Brinsmead MW, Wlodarczyk JH. Continuity of care by a midwife team versus routine care during pregnancy and birth: a randomised trial. Med J Aust. 1995;163:289–293.
20. Fayers PM, Hand DJ. Generalisation from phase III clinical trials: survival, quality of life, and health economics. Lancet. 1997;350:1025–1027.
21. O'Brien B. Economic evaluation of pharmaceuticals: Frankenstein's monster or vampire of trials. Med Care. 1996;34(suppl 12):DS99–108.
22. Buxton MJ, Drummond MF, van Hout BA, Prince RL, Sheldon TA, Szucs T, et al. Modelling in economic evaluations: an unavoidable fact of life. Health Econ. 1997;6:217–227.
23. Altman DG, Gore SM, Gardner MJ, Pocock SJ. Statistical guidelines for contributors to medical journals. BMJ. 1983;286:1489–1493.
24. Guidelines for referees. BMJ. 1996;312:41–44.
25. Altman DG. Practical statistics for medical research. London: Chapman and Hall; 1991.
26. Dickersin K, Scherer R, Lefebvre C. Identifying relevant studies for systematic reviews. BMJ. 1994;309:1286–1291.
27. Drummond MF, Jefferson TO. Guidelines for authors and peer reviewers of economic submissions to the BMJ. The BMJ Economic Evaluation Working Party. BMJ. 1996;313:275–283.
28. Jefferson TO, Smith R, Yee Y, Drummond M, Pratt M, Gale R. Evaluating the BMJ guidelines for economic submissions: prospective audit of economic submissions to BMJ and the Lancet. JAMA. 1998;280:275–277.
29. Altman DG, Goodman SN. Transfer of technology from statistical journals to the biomedical literature: past trends and future predictions. JAMA. 1994;272:129–138.
30. Gore SM, Jones G, Thompson SG. The Lancet's statistical review process: areas for improvement by authors. Lancet. 1992;340:100–102.
31. Coyle D. Statistical analysis in pharmacoeconomic studies: a review of current issues and standards. Pharmacoeconomics. 1996;9:506–516.
32. Efron B, Tibshirani RJ. An introduction to the bootstrap. New York: Chapman and Hall; 1993.
33. Lambert CM, Hurst NP, Forbes JF, Lochhead A, Macleod M, Nuki G. Is day care equivalent to inpatient care for active rheumatoid arthritis? Randomised controlled clinical and economic evaluation. BMJ. 1998;316:965–969.
34. Chaudhary MA, Stearns SC. Estimating confidence intervals for cost-effectiveness ratios: an example from a randomized trial. Stat Med. 1996;15:1447–1458.
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC28702/?tool=pubmed","timestamp":"2014-04-16T10:29:01Z","content_type":null,"content_length":"94938","record_id":"<urn:uuid:1401e11c-28a5-413b-b495-cd14bf8a4987>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
Icelandic Oratorios, or How I learned to play (the) Accordion

This article would not have been possible without three people in particular who got me interested and helped me learn to play Accordion: Jason Crupper, Rick Holzgrafe, and Mark Masten.

I originally learned the solitaire game Accordion from Albert Morehead and Geoffrey Mott-Smith's classic book The Complete Book of Solitaire and Patience Games. The game is also sometimes known as Idle Year, because of the way you can play hundreds of games without winning. Morehead and Mott-Smith put the odds of winning at 1:100, but explain that that is a shorthand for games where the win rate is extremely low.

The way the game is traditionally dealt is to turn up cards one by one from the stock, putting them in a row left to right. If adjacent cards are of the same rank or suit, the one on the right (i.e. most recently dealt) can be placed on top of the one on the left (we will call this move a slide). The same can be done with cards three positions apart; the card to the right jumps over two cards and is placed on the card three positions left if it matches in suit or rank (we will call this move a leap). Piles of cards thus form, and a pile of cards is treated as a single card (following the card on top), and can be moved left or piled on from the right. As the game is dealt, the row of cards expands (as cards are dealt from the stock when no plays are possible) and contracts (as several plays are sometimes made in succession), rather like the musical instrument which thus gives the game its name.

Like most people, I played quite a few times without success. Many years ago I wrote a very simple solver to simulate the play of Accordion, making slides whenever possible and leaps otherwise, winning 4 times in a sample of 10,000 deals (more on this below). The first inkling I had that the game can be played a different way was when I read the strategy guide on the website for Solitaire Till Dawn, a solitaire package for the Macintosh programmed by Rick Holzgrafe. He suggested dealing the entire deck out ahead of time, turning the game from a closed solitaire into a fully open one. He also described a strategy for using sweepers (cards of the same rank which are accumulated at the back of the tableau) and winning about a third of the time. We'll get back to this strategy later on, with examples and elaborations. I didn't have access to a Macintosh when I first read the article, so I put it in the back of my mind until a number of years later.

The idea that the game can be won in open style spread, and dealing out the entire deck has become a common method of presentation in solitaire packages, including BVS Solitaire. I saw an online version by Andrew Pipkin, programmed in Java, in which the entire deck is dealt in one overlapped row of 52 cards. I like this way of dealing, but not the way he shows slivers of the covered cards, which makes quick visualization of moves much harder as the piles get larger late in the game, and has the disconcerting visual effect of making the selected card jump backwards when the piles are combined.

Mark Masten did some computer analysis of the open version of Accordion which he reported to a solitaire mailing list in April of 1999. He played 25,000 deals by computer, and gave the astonishing fact that every single deal was solvable! It appears that open Accordion has one of the highest win rates of any open solitaire, almost certainly even higher than FreeCell.
Gordon Bower calculated that it would take 270,000 wins without a loss, or less than five losses in 714,000 deals, to argue statistically that Accordion has a higher win rate than FreeCell. By 2000, Mark's solver had played around 800,000 deals without a loss! Jason Crupper showed, on the Pretty Good Solitaire forum, that impossible deals can be constructed, but to my knowledge no random deal has ever been shown impossible. Jason first wrote to me about FreeCell (flourishes) in 2002, and we later started talking about Accordion too. He wrote a computer version using the programming language Scheme, and played many games, winning every time. He sent me a copy of his program and some examples of won deals. This is a nice version, with numbered deals, undo, and most importantly, automatic recording of wins. Jason uses a simple notation for moves which I will also use here. I finally started playing seriously, learned the basic technique described by Rick Holzgrafe, and started winning a few times. I then figured out some elaborations of his strategy, and I have now reached the point where I can win virtually 100% of the time. So let's see how to play... Accordion Virtuoso After playing Jason's program and others, I started to develop my own version, using a single row of cards as in Pipkin's version, but simply made covered cards disappear entirely. I also added an option to freeze a rank of cards to prevent them from being accidentally covered until near the end of the game. I incorporated Jason's solution notation, putting it in a text box on screen (this allows partial solutions to be recorded easily). I have gradually added other features, including unlimited undo, variable leap values, and other variants, as described below. Here is a screenshot showing the Accordion game. The entire deck is fanned out from left to right. The program calculates a simple function to select the best rank to try as the sweeper rank: the best choice is shown in the lower left corner, in this case fours, which have a very low score of 14 (other ranks with low scores may be shown in the larger gray window directly below the cards; in this case the program finds no other good candidates). The program also shows which candidate is the first, counting from the right end, to have two, three, and all four of its rank appear. In this case fours appear twice, three times, and four times before every other rank. All four cards of the best candidate rank are highlighted, and they may not be discarded while highlighted (there is an option to turn off the highlighting or to choose a different rank). The fours of hearts and spades are already in ideal position; there is only one diamond behind the 4D and six clubs behind the 4C (so the evaluation function for fours is 1x1x2x7 = 14). The program works on a single click system: a left mouse click sends the card you are clicking on either one (slide) or three cards (leap) to the left. If both moves are possible, it slides; a right mouse click automatically leaps if possible, doing nothing otherwise. This takes a little practice to get used to, but you can play pretty quickly once you are familiar with it. Highlighted cards can be moved (either slide or leap), but not discarded (another card cannot slide or leap onto them), which prevents you from accidentally discarding sweepers (highlighted cards can be discarded if you hold the Alt key down while clicking). 
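Before walking through a deal, here is a minimal sketch (not the author's Accordion Virtuoso code) of how the tableau and the two legal moves can be represented. Cards are strings such as "4H"; a move is legal when the moving pile's top card matches the pile one position to the left (slide) or `leap` positions to the left (leap) in rank or suit. The leap distance is a parameter, so the same function also covers the larger-leap variants discussed later in the article.

```python
# Each pile is represented by its top card, e.g. "4H" (rank then suit).
# The tableau is a list of piles, left to right.

def matches(a, b):
    """Two cards match if they share a rank or a suit."""
    return a[0] == b[0] or a[1] == b[1]

def legal_moves(tableau, leap=3):
    """Return (source_index, target_index) pairs for all legal slides and leaps."""
    moves = []
    for i, card in enumerate(tableau):
        for gap in (1, leap):          # slide = 1 position left, leap = `leap` positions left
            j = i - gap
            if j >= 0 and matches(card, tableau[j]):
                moves.append((i, j))
    return moves

def apply_move(tableau, move):
    """Place pile i on pile j and close up the row."""
    i, j = move
    new = list(tableau)
    new[j] = new[i]                    # the moving pile's top card now tops the target pile
    del new[i]
    return new

# Tiny example: the 4S slides onto the 4H, then the KD leaps three piles left onto the KS.
row = ["KS", "7C", "4H", "4S", "KD"]
row = apply_move(row, (3, 2))   # -> ["KS", "7C", "4S", "KD"]
row = apply_move(row, (3, 0))   # -> ["KD", "7C", "4S"]
print(row)
```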
Winning a simple deal -- basic techniques When presented with a deal, the first thing to do is to try and find a rank where all four cards are near the end of the row of cards. Usually this does not happen, but you want to pick a rank where two or three of them are near the end and the other(s) are not too far from it. In particular you want as many of the four cards as possible to be the last card dealt of their respective suits -- I call this ideal position -- or to be able to get them there quickly. The first task is to get rid of the cards between the three fours, and get the jack of diamonds in position where it can jump over the fours. We start by moving the ten of hearts onto the seven of hearts, allowing the four of spades to jump over two hearts onto the king of spades, leaving those hearts open to be discarded in turn by the four of hearts. The queen of clubs covers the king of clubs, then leaps over to the three of clubs (equivalent to consecutive leaps by the king and queen of clubs). Note that we are saving the jack of spades as a target for the jack of diamonds. The program automatically displays each move made, using Jason Crupper's simple notation: the card moving is abbreviated (T for ten, etc.), followed by a one for a slide and a three for a leap. We want the jack of diamonds to leap onto the jack of spades, but there are too many cards in between at the moment. So we need to leap with one of the fours. The four of diamonds can do so right now; this is not a perfect move, since we want to get the jack of diamonds past the four of diamonds, but it allows us to make some progress by leaping with the jack of diamonds. Leaping next with the five of diamonds allows the queen of clubs to make a double leap, and now the jack of diamonds can get past its four, sliding one more card after that. Three of our sweepers are now together in perfect position, and we leap onto the eight of hearts immediately to their left (we generally make this kind of move whenever possible). There are various ways to clear out most of the cards between the fours of hearts and clubs. We might think about keeping the ace of diamonds as a target for the jack, but we instead choose to leap with the ace of clubs, mop up two spades with slides, then leap with the queen of clubs and ace of spades, sliding onto the queen of clubs after that. Now only the jack of diamonds remains to the right of the last four. We can try to find a target further down for it to leap on, but since it is already to the left of its sweeper, if we can instead leap with the four of hearts, the four of diamonds can slide onto it. This is easy to do if we leap first with the ace of spades, then mop up the six of clubs before leaping onto the six of hearts. Now we mop up the jack of diamonds, and all of our sweepers are in perfect position (this doesn't often happen before the midpoint of a deal). Now we push cards to the left, looking for opportunities to leap or slide onto the card immediately to the left of the cluster of sweepers. Here we leap with the two of diamonds and mop up a spade. The spade cannot be discarded right away, but if we leap onto the eight of clubs, the ace becomes close enough for the four of spades to leap onto it. We generally try to avoid breaking up the cluster unless we can immediately close it up again, but the two card maneuver we just did is a basic one -- we can do the same thing again after leaping with the nine of hearts. 
Another useful maneuver is a series of leaps with consecutive cards of the same suit. In this case we can leap with three hearts in a row, each landing on the previous one. Then we mop up both tens with slides. Now we do our basic maneuver, leaping with the four of diamonds before re-closing with the four of hearts and continuing with a four of spades leap. Instead of doing two leap-slide combinations with the fours of hearts and diamonds, we see that we can actually mop up the other four red cards with the queen of hearts if we eliminate the eight of diamonds with a slide first. Now we leap into the queen of hearts, and see that we have only four cards left besides the sweepers. Since we have no other red cards left... ...we could unhighlight the sweepers and eliminate the two red fours, but it's easy enough to finish with all four sweepers alone (I call this a clean finish). If we slide with the seven of clubs, the four of spades has room to leap and slide, followed by the four of clubs. Now we unhighlight the sweepers... and a series of slides produces a win. The background turns orange to signal a win, and there is a trumpet fanfare (usually a cascade of cards will occur too, but we turned it off to allow a clear screenshot of the final position). Sometimes the last card in the row is very difficult to get past the sweepers. I figured out that you can leave one card behind the sweepers, and finish by placing this card on top of the sweeper of the same suit. The deal shown above is number 57988. The program recommends tens as the best sweeper (sevens are a distant second). In order to start clearing the cards amongst the first three tens, we leap with the jack of hearts, setting up several other plays which reduce the cards before the ten of clubs down to three. No other plays are available towards the end of the row, so we go down to the nine of clubs, which is in the middle of a string of cards of two alternating suits. These are hard to break up unless we can pull one of the cards out -- in this case we leap with the nine of clubs, opening up other moves. Later we reach the following position; the four sweepers are together; our plan is to eliminate everything else, finishing by mopping up the other three tens with the ten of hearts, and covering it with the seven: Suit reduction Sometimes three of the same rank are in ideal position, or nearly so, but the fourth is at or near the beginning of the row (particularly when it is the first card of its suit dealt). You can still win these deals by carefully discarding all but one of the cards of the problem suit (except for the sweeper rank) and eliminating the last non-sweeper of that suit by covering it with a card of the same rank. Here's a good example, deal number 12330: After three moves, we have the first three sweepers together, and start to reduce the diamonds in the middle of the row by a series of leaps and slides: Now we get rid of the two and three of diamonds using the three of spades, and the ten and eight of diamonds using the eight of spades, leaving us with only the four of diamonds at the far left. Using the other three sweepers, it is easy to mop up the cards of the three remaining suits. Note that twice we leap with the four of hearts (the second time onto the four of diamonds, after unprotecting the sweepers with the F8 key) to set up an ABBA pattern of suits which we eliminate with a leap and a slide. 
This is a very basic technique (underlined in orange, below right): Sometimes all four of the same rank are in the first dozen or so cards, but there are a number of cards (I call them flotsam) at the end of the row behind the potential sweepers. If you can condense these cards and gradually get them past the sweepers, you can still win.Here's deal number 5844, fully played out: Another technique sometimes needed in hard deals I call endgame pretreatment. Sometimes you reach the last few cards of a solution and hit an impasse, when there are no plays left. Above is an example, number 63351, the first time I used a version of this technique. Notice how the last few cards have several alternations in suit (the hearts and diamonds alternating from the third through sixth cards) and rank (tens two apart at the far left). After reaching the point shown above, I was getting stuck if I tried going forward in the usual way, so I tried a different approach, making a series of plays at the left end. Gradually the endgame untangled, and I was able to win: After winning that deal, I started to look for chances to use this technique, when a deal was proving hard to solve. I started using it at the very beginning of a deal (hence the name pretreatment, as in doing laundry) when I could see that the lefthand end was giving me problems. Here's another example, 51579, where I made a series of five pretreatment plays at the very beginning of the deal (see how the six of clubs gets inserted in to break up the nasty alternation of clubs and hearts). AV liked threes as sweepers, but I got nowhere until I switched to sevens, eventually winding up with a clean finish: I have now played over 600 deals with Jason Crupper's Scheme program and more than 400 with Accordion Virtuoso, and have not yet come across a deal I have not been able to solve. The hardest deals I have encountered in Accordion Virtuoso are 13286, 15501, 16125, 20444, 20766, 30008, 35346, 38072, 41474, 43696, 48690, 49450, 50653, 60601, 61150, 85177, 86400, and, hardest of all, 62185. But the hardest deal I have seen yet is number 101862164 in Jason's program, here imported into AV: Accordion's Revenge Mark Masten later thought up and tested a harder version, which he called Accordion's Revenge. A card is selected from the 52 cards dealt, and the game has to be won with the selected card at the top of the pile at the end. This is impossible with the first two cards dealt, but can be done with any card from the third to the last. In fact, he ran 500 deals with each of the 50 possible positions, and still won every single deal! While I was learning to play, I also practiced winning some deals while finishing up with the third card dealt. This may be the hardest position on average to win with. Let's see an example of how this works: This is deal number 7795. The seven of hearts, shown jogged upward, is the card we want to finish with on top. The general approach is to get the sweepers down close to positions 4, 5, 6, and 7. We eventually want to get either all four sweepers in turn to position 1, finishing with the four of diamonds, or the four of hearts to position 2 and the last sweeper (of any suit) to position 1. The position below left shows most of the deal played out, with the sweepers ready to make the last few moves. The four of diamonds leaps onto the seven, the four of spades onto the queen, the four of hearts onto the four of diamonds, the four of clubs slides and then leaps, and two slides finish the deal (below right). 
The finishing card (AR card for short) can occur anywhere except for the first two positions. There are at least three basic strategies for winning at Accordion's Revenge, depending on the position and rank of the AR card. If the rank of the AR card is a reasonable sweeper candidate, it is usually easiest to choose that as the protected rank, bring the sweepers together with the AR card last, and finish as usual. Otherwise, the AR card will eventually have to be a trailer: if the AR card appears far to the left, the most sensible approach is similar to deal 7795 above: push the sweepers down to the AR card, then try to leap over it and sweep up the remainder of the cards left of the AR card, then consolidate the sweepers down to one of the same suit as the AR card. If the AR card is far to the right, it will probably be easiest to maneuver any sweepers and flotsam around it, then play an ordinary sweeper strategy. Here's another example. Notice the last few plays, where the queen mops up the last sweeper and a couple of other cards. This mopup technique is seen quite often when.... Playing with bigger leaps Andrew Pipkin's Java version of Accordion has options to allow any combination of slides and leaps from 1 to 6 cards. He added these to make the game easier, under the common misunderstanding that the standard 1/3 game was too hard to win. Mark Masten investigated various leap values using his solver, and discovered yet another surprising fact: many of the variant games with a one-card slide combined with a single leap value greater than 3 are quite playable and can be won fairly often; in fact the medium leap values of 4 through 6 may even be easier than the standard game. I added leap values from 2 through 10 to the Accordion Virtuoso program and began to experiment with them. I have now reached the point where I can usually win with leap values up to 7. Test runs with the solver show that most deals are winnable even up to leaps of 10. It appears that a leap value of 2 is quite difficult; the solver wins a substantial number of them (at least 40%), but its strategy seems impenetrable and I have been unable to win myself yet. Let's see how the strategy for larger leaps works. Here's deal 39004, played with a leap of 8. I chose fives as the sweeper instead of the recommended sevens: The early moves are mostly leaps, trying to clear the flotsam from the area between the sweepers, though slides are used occasionally to line up leaps correctly. Now we start to eliminate clubs and spades. By the time we have reached the position below, most of the clubs and spades are gone. We plan to leap the nine of diamonds onto the nine of clubs (eliminating the last non-sweeper club) and the eight of diamonds onto the eight of spades. We eventually plan to leap the seven of spades onto the king of spades (after the latter has picked up the king of diamonds). The aim is to reach a position where every remaining card can be removed with a series of slides. Each sweeper picks up the remainder of its suit (or a stray card of one suit is picked up by the same rank of another suit, as the ten of diamonds does below), and the last sweeper mops up the remaining cards of its suit and all of the remaining sweepers: Here's one more example, with leap-6, ending in a mopup of the last nine cards. Note also another technique of consolidating suits here, collecting several cards of the same suit in a row and using them as a sort of landing strip to leap onto with other cards of the same suit. 
Playing without Slides
A barely explored question is what happens if there are no slides, but two sizes of leap. In particular, what happens if you can leap either 2 or 3 to the right? Obviously it is impossible to reduce to one card, but how often can you reduce to two? Here is a successful 2/3 game; Accordion Virtuoso has been set up so that the Control key momentarily decreases the leap value by 1, so this can be played using only the right mouse button and the Control key (otherwise you would have to toggle constantly between leap values). This is an unusual game which requires some different strategies: an important maneuver is reducing a pair of adjacent cards of the same suit by leaping both onto another card of the same suit. Suit elimination is an important technique in the later stages. Andrew Pipkin's version can be set to play this directly.

More features of Accordion Virtuoso
Two optional help features available to the player (both are off by default, but can be switched on and off at will from the options menus) are shown below, partway through a deal. The row of colored squares directly below the cards shows which cards can currently be played (AutoCount). Black squares show available slides, white squares leaps, and red squares show cards able to either slide or leap (showing available leaps is particularly helpful in long-leap games). The Card Tracker at the bottom right shows which cards have been covered, making it easy to see at a glance which cards of a particular suit or rank are still left. In addition, the program automatically signals when no more moves are available by turning the background blue; this AutoEnd feature is on by default and is independent of the AutoCount option.

Many years ago, before I understood Accordion very well, I came up with a variant, which I naturally called Concertina, where a card (or pile) on the right may, if desired, be placed underneath the card/pile one or three positions left; we could naturally call these moves underslides and underleaps (or more colorfully, ducks and dives). (Note that if both slide and leap are possible with the same card, underslide and underleap have the exact same effect.) [Using Concertina rules, it seems likely that Accordion's Revenge is usually winnable even if the AR card is the first or second card dealt, using it as a kind of reverse trailer.] Since the standard game is usually winnable already with most leap values from 3 to 10, I now suggest that Concertina be played with a leap value of 2. In the Accordion Virtuoso program, clicks with the Shift key held down will produce an underslide or underleap; these are symbolized with plus signs instead of dash/minus signs. Here's a fully played out example:

There is one online version of Accordion where the programmer has mistakenly programmed underslides and underleaps only. I don't know what the strategy of this variant would be, or even if it is winnable.

Other variations
Mark Masten has also suggested using alternate decks, in particular a 49-card deck consisting of seven cards each of seven suits. Obviously it is possible to combine any of the above variations; in particular playing Accordion's Revenge with larger leap values makes a very challenging game. Another idea, which I have only tried in a couple of hand-dealt games, is to play a two-dimensional version with four rows of 13 cards (this needs some space; overlapping the cards in the normal fashion is awkward).
Horizontal plays in rows are as usual; vertical plays may be made in any direction and at any distance. A row contracts automatically when a card from it moves upward or downward into another row.

Traditional Play
I have now incorporated an automatic solver for the traditional version of Accordion into the Accordion Virtuoso program. It makes every play possible (from left to right) after each card is dealt, and can either play slides in preference to leaps, or vice versa. I have run all of the daily deals (those numbered 1 to 86400, which are dealt randomly by the New Deal option depending on the second of the day) with both options. When playing leaps first (if both plays are available for the same card), 37 deals out of 86400 were won (about 1 in 2335). When playing slides first, 56 deals were won (about 1 in 1543); four deals (2071, 23197, 75566, and 76541) were won with both options. Deal 76541 is shown below, with 31 of the cards playable at the start (9 have a leap/slide option). If all available plays are made from left to right, the deal is won.

It's harder than you think to play poorly -- Accordion Misère
Misère is a French term often used to refer to games played with a reversed object of play (such as lowball poker or giveaway chess). In this case, what happens if we try to block ourselves and finish with as many cards as possible? So far the best I have managed is 27 cards left in deal number 99042, which has only 11 playable cards at the start (note the blue background showing a blocked position).

Searching for hard deals
Computer searches for the deals with the fewest initial moves may prove a fruitful way to look for very hard deals. A search through the first quarter of a million deals turned up a number of examples where only 11 cards are playable at the start. Of these, 98175 and 243224 proved especially difficult to solve, and there are a few deals I have not yet solved, including 99042, shown above.

Most recently edited on April 16, 2009. This article is copyright © 2008, 2009 by Michael Keller. All rights reserved.
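For readers who want to reproduce the flavour of the traditional-play experiment described above, here is a self-contained sketch of a closed-game simulation: deal one card at a time and repeatedly make the leftmost available play, preferring slides or leaps as configured. It is not the Accordion Virtuoso solver, and it uses its own random shuffles rather than the program's numbered daily deals, so the win counts will differ, but the very low win rate of mechanical play should be apparent.

```python
import random

RANKS = "A23456789TJQK"
SUITS = "CDHS"
DECK = [r + s for r in RANKS for s in SUITS]

def matches(a, b):
    return a[0] == b[0] or a[1] == b[1]

def play_out(row, slides_first=True):
    """Repeatedly make the leftmost available play, with the given preference."""
    while True:
        for i in range(1, len(row)):
            gaps = (1, 3) if slides_first else (3, 1)
            for gap in gaps:
                j = i - gap
                if j >= 0 and matches(row[i], row[j]):
                    row[j] = row[i]
                    del row[i]
                    break
            else:
                continue
            break          # a play was made; rescan from the left
        else:
            return row     # no play anywhere: stop

def traditional_game(deck, slides_first=True):
    row = []
    for card in deck:
        row.append(card)
        play_out(row, slides_first)
    return len(row)        # 1 means the deal was won

random.seed(1)
wins = sum(traditional_game(random.sample(DECK, 52)) == 1 for _ in range(20000))
print(f"wins in 20000 random deals: {wins}")
```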
{"url":"http://www.solitairelaboratory.com/accordion.html","timestamp":"2014-04-19T11:56:25Z","content_type":null,"content_length":"32606","record_id":"<urn:uuid:84f531b0-4eaf-4f0e-8988-970de58bb674>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Combining random variables, squared [Archive] - Statistics Help @ Talk Stats Forum

Been sitting with this for a while, and hoping some of you can assist me. I'm given a pdf f(y) = 6y(1-y) for 0<y<1, and I want to calculate the pdf of W=Y^2. I have some tools to calculate the pdf of W=XY, if X and Y are independent, but are these tools valid in this case? And even if they are, I get the wrong result. I have been trying to search the web for a while, but it's quite impossible since I can't search the "^2". :/
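For what it's worth, the product-of-independent-variables tools are not needed here; a single-variable change of variables is enough. The following is a standard derivation sketch (not taken from the thread) using the CDF method. For $0 < w < 1$,

$$F_W(w) = P(Y^2 \le w) = P(Y \le \sqrt{w}) = \int_0^{\sqrt{w}} 6y(1-y)\,dy = 3w - 2w^{3/2}.$$

Differentiating gives

$$f_W(w) = \frac{d}{dw} F_W(w) = 3 - 3\sqrt{w} = 3\left(1 - \sqrt{w}\right), \qquad 0 < w < 1,$$

which agrees with the transformation formula $f_W(w) = f_Y(\sqrt{w}) \cdot \tfrac{1}{2\sqrt{w}}$ and integrates to 1 over (0, 1).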
{"url":"http://www.talkstats.com/archive/index.php/t-11225.html","timestamp":"2014-04-19T07:35:58Z","content_type":null,"content_length":"4066","record_id":"<urn:uuid:7f42cc35-e1c1-4fa8-9170-070dacdb389e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Dictionary - Spread

Spread is the name given in statistics to describe how the data lie. It is measured in a variety of ways, such as the range, the interquartile range, and the standard deviation. For example, the spread of marks in a mathematics exam is often much wider than in English.
{"url":"http://mathebook.net/dict/sdict/spread.htm","timestamp":"2014-04-20T10:49:15Z","content_type":null,"content_length":"4921","record_id":"<urn:uuid:bc7ceada-536b-424a-856e-1e91d3423ee9>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
Ton of air conditioning

Ton of air conditioning
by nielsenec (Guest) on 05/09/01 at 08:21:14
What is a ton of air conditioning in SI units? It is an energy unit that should be able to be converted into BTU/hr or kW or something similar. I believe the unit originated with the cooling capacity of a ton of ammonia, the first refrigerant.

Re: Ton of air conditioning
by Robert Fogt (Guest) on 05/09/01 at 18:46:52
I found this information: one ton of capacity equals 12,000 Btu/h (also, one Btu/h is equal to 0.293 watts, so one ton = 3.516 kilowatts). That is assuming the information is accurate; I do not have another reference to verify this with. That would be something good for the ole to-do list, it would be nice to have that conversion up. With luck I can find a .gov or .mil site to double check the accuracy.

Re: Ton of air conditioning
by shahram (Guest) on 07/27/01 at 15:10:52
hey I'm in need of a favor from u... I'm currently working on an assignment in which I have to convert how many kilowatt-hours one ton air conditioner consumes in an hour... please help me. my email is: schaali_54@yahoo.com

Re: Ton of air conditioning
by riber on 01/26/02 at 07:14:56
Actually it is correct that one ton of air-conditioning is equal to 12,000 btu/hr. However, the other question, trying to convert a one ton A/C unit into kWh consumed per hour, is very dependent on the make and model of the air-conditioner, as well as its EER (Energy Efficiency Ratio). These range from 6-14 EER. A standard average one ton A/C unit consumes about 1.335 kWh per hour.

Re: Ton of air conditioning
by coldfuse on 02/06/02 at 20:53:16
Just wanted to provide background on the derivation for tons of refrigeration, and provide information on the similar SI standard. The latent heat of fusion for ice is 144 BTU/lb. For one ton, that is 2000 lb x 144 BTU/lb, or 288,000 BTU. Refrigeration's roots are in the ice making industry, and the ice guys wanted to convert this into ice production. If 288,000 BTU are required to make one ton of ice, divide this by 24 hours to get 12,000 BTU/hr required to make one ton of ice in one day. This is simply the requirement for the phase change from liquid to solid -- to convert +32 deg F water into +32 deg F ice. As a practical matter, additional refrigeration is required to take city water and turn it into ice. One BTU is the heat removal required to lower the temperature of one pound of water by one degree F. In SI units, kilocalories are used. One kilocalorie is the heat removal required to lower the temperature of one kilogram of water by one degree C. One ton of refrigeration is equal to 3024 kilocalories per hour. It is basically the 12,000 BTU/hr divided by pounds per kilogram, divided by 1.8 (to get from degrees F to degrees C). I hope this explanation hasn't been too cumbersome and will be helpful for someone out there! I'll :-X now!

Re: Ton of air conditioning
by coldfuse on 02/06/02 at 21:46:28
You may wish to click on http://www.startinbusiness.co.uk/si_tz.htm for verification of the SI conversion for kilocalories to tons. I should have posted this previously.
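A small script makes the relationships quoted in the thread easy to check. The constants below (144 BTU/lb latent heat of fusion, 2000 lb per short ton, 0.293 W per BTU/h, 0.252 kcal per BTU) are standard reference values rather than anything specific to this forum.

```python
# Derivation of the "ton of refrigeration" and its SI equivalents.
LATENT_HEAT_FUSION_BTU_PER_LB = 144.0   # heat removed to freeze one pound of 32 degF water
LB_PER_SHORT_TON = 2000.0
WATTS_PER_BTU_PER_HR = 0.293
KCAL_PER_BTU = 0.252

btu_per_ton_of_ice = LATENT_HEAT_FUSION_BTU_PER_LB * LB_PER_SHORT_TON   # 288,000 BTU
btu_per_hr = btu_per_ton_of_ice / 24.0                                  # freeze one ton per day
kw = btu_per_hr * WATTS_PER_BTU_PER_HR / 1000.0
kcal_per_hr = btu_per_hr * KCAL_PER_BTU

print(f"1 ton of refrigeration = {btu_per_hr:,.0f} BTU/hr "
      f"= {kw:.3f} kW = {kcal_per_hr:,.0f} kcal/hr")
# Output is close to the figures quoted in the thread:
# 1 ton of refrigeration = 12,000 BTU/hr = 3.516 kW = 3,024 kcal/hr
```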
{"url":"http://www.onlineconversion.com/forum/forum_989421674.htm","timestamp":"2014-04-18T13:10:56Z","content_type":null,"content_length":"11495","record_id":"<urn:uuid:028f0e31-7e83-402a-b3bd-803e4a99c29f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Fitting and XSPEC

This chapter comprises a brief description of the basics of spectral fitting, their application in XSPEC, and some helpful hints on how to approach particular problems. We then provide more details on the way XSPEC provides flexibility in its approach to the minimization problem. We also describe the data formats accepted.

Although we use a spectrometer to measure the spectrum of a source, what the spectrometer obtains is not the actual spectrum, but rather photon counts (C) within specific instrument channels, (I). This observed spectrum is related to the actual spectrum of the source (f(E)) by:

    C(I) = \int f(E) R(I,E) dE

Where R(I,E) is the instrumental response and is proportional to the probability that an incoming photon of energy E will be detected in channel I. Ideally, then, we would like to determine the actual spectrum of a source, f(E), by inverting this equation, thus deriving f(E) for a given set of C(I). Regrettably, this is not possible in general, as such inversions tend to be non-unique and unstable to small changes in C(I). (For examples of attempts to circumvent these problems see Blissett & Cruise 1979; Kahn & Blissett 1980; Loredo & Epstein 1989.)

The usual alternative is to choose a model spectrum, f(E), that can be described in terms of a few parameters (i.e., f(E,p1,p2,...)), and match, or "fit" it to the data obtained by the spectrometer. For each f(E), a predicted count spectrum (C_p(I)) is calculated and compared to the observed data (C(I)). Then a "fit statistic" is computed from the comparison and used to judge whether the model spectrum "fits" the data obtained by the spectrometer. The model parameters then are varied to find the parameter values that give the most desirable fit statistic. These values are referred to as the best-fit parameters. The model spectrum, f_b(E), made up of the best-fit parameters is considered to be the best-fit model.

The most common fit statistic in use for determining the "best-fit" model is chi-square (\chi^2), defined as follows:

    \chi^2 = \sum_I (C(I) - C_p(I))^2 / \sigma(I)^2

where \sigma(I) is the (generally unknown) error for channel I (e.g., if C(I) are counts then \sigma(I) is usually estimated by sqrt(C(I)); see e.g. Wheaton et al. 1995 for other possibilities).

Once a "best-fit" model is obtained, one must ask two questions:

1. How confident can one be that the observed C(I) can have been produced by the best-fit model f_b(E)? The answer to this question is known as the "goodness-of-fit" of the model. The \chi^2 statistic provides a well-known goodness-of-fit criterion for a given number of degrees of freedom (\nu, which is calculated as the number of channels minus the number of model parameters) and for a given confidence level. If \chi^2 exceeds a critical value (tabulated in many statistics texts) one can conclude that f_b(E) is not an adequate model for C(I). As a general rule, one wants the "reduced chi-square" (\chi^2 / \nu) to be approximately equal to one. A reduced chi-square that is much greater than one indicates a poor fit, while a reduced chi-square that is much less than one indicates that the errors on the data have been over-estimated. Even if the best-fit model (f_b(E)) does pass the "goodness-of-fit" test, one still cannot say that f_b(E) is the only acceptable model. For example, if the data used in the fit are not particularly good, one may be able to find many different models for which adequate fits can be found. In such a case, the choice of the correct model to fit is a matter of scientific judgment.
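As a toy illustration of the statistic just defined (this sketch is not part of the XSPEC manual, and the numbers are made up), the following Python code evaluates chi-square and the reduced chi-square for a 4-channel spectrum:

    import numpy as np

    C = np.array([120.0, 95.0, 80.0, 60.0])      # observed counts per channel
    C_p = np.array([118.0, 99.0, 76.0, 63.0])    # counts predicted by the model
    sigma = np.sqrt(C)                           # Poisson estimate, sigma(I) ~ sqrt(C(I))
    chi2 = np.sum(((C - C_p) / sigma) ** 2)
    nu = len(C) - 2                              # channels minus 2 model parameters, say
    print(chi2, chi2 / nu)                       # fit statistic and reduced chi-square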
2. For a given best-fit parameter (p1), what is the range of values within which one can be confident the true value of the parameter lies? The answer to this question is the "confidence interval" for the parameter. The confidence interval for a given parameter is computed by varying the parameter value until the \chi^2 increases by a particular amount above the minimum, or "best-fit" value. The amount that the \chi^2 is allowed to increase (also referred to as the critical \Delta\chi^2) depends on the confidence level one requires, and on the number of parameters whose confidence space is being calculated. The critical \Delta\chi^2 for common cases are given in the following table (from Avni, 1976):

    Confidence   1 parameter   2 parameters   3 parameters
    0.68         1.00          2.30           3.50
    0.90         2.71          4.61           6.25
    0.99         6.63          9.21           11.30

To summarize the preceding section, the main components of spectral fitting are as follows:

· A set of one or more observed spectra C(I), with background measurements B(I) where available
· The corresponding instrumental responses R(I,E)
· A set of model spectra M(E)

These components are used in the following manner:

· Choose a parameterized model which is thought to represent the actual spectrum of the source.
· Choose values for the model parameters.
· Based on the parameter values given, predict the count spectrum that would be detected by the spectrometer in a given channel for such a model.
· Compare the predicted spectrum to the spectrum actually obtained by the instrument.
· Manipulate the values of the parameters of the model until the best fit between the theoretical model and the observed data is found. Then calculate the "goodness" of the fit to determine how well the model explains the observed data, and calculate the confidence intervals for the model's parameters.

This section describes how XSPEC performs these tasks.

C(I): The Observed Spectrum

To obtain each observed spectrum, C(I), XSPEC uses two files: the data (spectrum) file, containing D(I), and the background file, containing B(I). The data file tells XSPEC how many total photon counts were detected by the instrument in a given channel. XSPEC then uses the background file to derive the set of background-subtracted spectra C(I) in units of counts per second. The background-subtracted count rate is given by, for each spectrum:

    C(I) = D(I) / (a_D(I) t_D) - (b_D(I) / b_B(I)) * B(I) / (a_B(I) t_B)

where D(I) and B(I) are the counts in the data and background files; t_D and t_B are the exposure times in the data and background files; b_D(I) and b_B(I), a_D(I) and a_B(I) are the background and area scaling values from the spectrum and background respectively, which together refer the background flux to the same area as the observation as necessary. When this is done, XSPEC has an observed spectrum to which the model spectrum can be fit.

R(I,E): The Instrumental Response

Before XSPEC can take a set of parameter values and predict the spectrum that would be detected by a given instrument, XSPEC must know the specific characteristics of the instrument. This information is known as the detector response. Recall that for each spectrum the response R(I,E) is proportional to the probability that an incoming photon of energy E will be detected in channel I. As such, the response is a continuous function of E. This continuous function is converted to a discrete function by the creator of a response matrix who defines the energy ranges E_J such that:

    R(I,J) = \int_{E_{J-1}}^{E_J} R(I,E) dE / (E_J - E_{J-1})

XSPEC reads both the energy ranges, E_J, and the response matrix R(I,J) from a response file in a compressed format that only stores non-zero elements.
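The discrete response relation above amounts to a matrix-vector product; here is a minimal sketch (toy numbers, not an XSPEC interface) of folding a model spectrum through a response matrix to predict counts per channel:

    import numpy as np

    # Predicted counts per channel: C_p(I) = sum_J R(I, J) * M(J)
    R = np.array([[0.8, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.2, 0.9]])   # rows: channels I, columns: energy bins J
    M = np.array([50.0, 30.0, 10.0])  # model photons in each energy bin
    print(R @ M)                      # predicted count spectrum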
XSPEC also includes an option to use an auxiliary response file, which contains an array A(J) that is multiplied into R(I,J) as follows:

    R(I,J) -> R(I,J) * A(J)

This array is designed to represent the efficiency of the detector, with the response file representing a normalized Redistribution Matrix Function, or RMF. Conventionally, the response is in units of cm^2.

M(E): The Model Spectrum

The model spectrum, M(E), is calculated within XSPEC using the energy ranges defined by the response file:

    M(J) = \int_{E_{J-1}}^{E_J} f(E, p1, p2, ...) dE

and is in units of photons cm^-2 s^-1. XSPEC allows the construction of composite models consisting of additive components representing X-ray sources (e.g., power-laws, blackbodies, and so forth) and multiplicative components, which modify additive components by an energy-dependent factor (e.g., photoelectric absorption, edges, ...). Convolution and mixing models can then perform sophisticated operations on the result. Models are defined in algebraic notation. For example, the following expression:

    phabs (power + phabs (bbody))

defines an absorbed blackbody, phabs(bbody), added to a power-law, power. The result then is modified by another absorption component, phabs. For a list of available models, see Chapter 6.

Fits and Confidence Intervals

Once data have been read in and a model defined, XSPEC uses a fitting algorithm to find the best-fit values of the model parameters. The default is a modified Levenberg-Marquardt algorithm (based on CURFIT from Bevington, 1969). The algorithm used is local rather than global, so be aware that it is possible for the fitting process to get stuck in a local minimum and not find the global best fit. The process also goes much faster (and is more likely to find the true minimum) if the initial model parameters are set to sensible values. The Levenberg-Marquardt algorithm relies on XSPEC calculating the 2nd derivatives of the fit statistic with respect to the model parameters. By default these are calculated analytically, with the assumption that the 2nd derivatives of the model itself may be ignored. This can be changed by setting the USE_NUMERICAL_DIFFERENTIATION flag to "true" in the Xspec.init initialization file, in which case XSPEC will perform numerical calculations of the derivatives (which are slower).

At the end of a fit, XSPEC will write out the best-fit parameter values, along with estimated confidence intervals. These confidence intervals are one sigma and are calculated from the second derivatives of the fit statistic with respect to the model parameters at the best fit. These confidence intervals are not reliable and should be used for indicative purposes only. XSPEC has a separate command (error or uncertain) to derive confidence intervals for one interesting parameter, which it does by fixing the parameter of interest at a particular value and fitting for all the other parameters. New values of the parameter of interest are chosen until the appropriate delta-statistic value is obtained. XSPEC uses a bracketing algorithm followed by an iterative cubic interpolation to find the parameter value at each end of the confidence interval. To compute confidence regions for several parameters at a time, XSPEC can run a grid on these parameters. XSPEC also will display a contour plot of the confidence regions of any two parameters.
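To make the delta-statistic procedure concrete, here is a hedged one-parameter sketch (a constant model fit by hand, not XSPEC's own algorithm): step the parameter of interest away from its best-fit value until chi-square rises by the critical amount, 2.71 for 90% confidence with one interesting parameter:

    import numpy as np

    def chi2(mu, data, sigma):
        return np.sum(((data - mu) / sigma) ** 2)

    data = np.array([9.8, 10.4, 10.1, 9.7, 10.2])
    sigma = 0.3
    mu_best = data.mean()                # best fit of a constant model
    best = chi2(mu_best, data, sigma)
    for mu in np.arange(mu_best, mu_best + 1.0, 0.001):
        if chi2(mu, data, sigma) - best >= 2.71:
            print("90% upper bound:", round(mu, 3))
            break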
The sections above provide a simple characterization of the problem. XSPEC actually operates at a more abstract level and considers the following: Given a set of spectra C(I), each supplied as a function of detector channels, a set of theoretical models {M(E)_j}, each expressed in terms of a vector of energies, together with a set of functions {R(I,E)_j} that map channels to energies, minimize an objective function S of C, {R(I,E)_j}, {M(E)_j} using a fitting algorithm, i.e.

    minimize S = S(C, \sum_j R_j * M_j) over the model parameters

In the default case, this reduces to the specific expression for \chi^2 fitting of a single source,

    \chi^2 = \sum_I (C(I) - R(I,E) * M(E))^2 / \sigma(I)^2

where I runs over all of the channels in all of the spectra being fitted, and using the Levenberg-Marquardt algorithm to perform the fitting.

This differs from the previous formulation in that the operations that control the fitting process make fewer assumptions about how the data are formatted, what function is being minimized, and which algorithm is being employed. At the calculation level, XSPEC requires spectra, backgrounds, responses and models, but places fewer constraints as to how they are represented on disk and how they are combined to compute the objective function (statistic). This has immediate implications for the extension of XSPEC analysis to future missions. New data formats can be implemented independently of the existing code, so that they may be loaded during program execution. The 'data format' includes the specification not only of the files on disk but how they combine with models. Multiple sources may be extracted from a spectrum. For example, it generalizes the fitting problem to minimizing, in the case of the \chi^2 statistic,

    \chi^2 = \sum_I (C(I) - \sum_j R_j(I,E) * M_j(E))^2 / \sigma(I)^2

where j runs over one or more models. This allows the analysis of coded-aperture data where multiple sources may be spatially resolved.

Responses, which abstractly represent a mapping from the theoretical energy space of the model to the detector channel space, may be represented in new ways. For example, the INTEGRAL/SPI responses are implemented as a linear superposition of 3 (fixed) components. Instead of explicitly combining responses and models through convolution, XSPEC places no prior constraint on how this combination is implemented. For example, analysis of data collected by future large detectors might take advantage of the form of the instrumental response by decomposing the response into components of different frequency.

Other differences of approach are in the selection of the statistic and of the techniques used for deriving the solution. Statistics and fitting methods may be added to XSPEC at execution time rather than at installation time, so that the analysis package as a whole may more easily keep pace with new techniques.

XSPEC is designed to support multiple input data formats. Support for the earlier SF and Einstein FITS formats has been removed. Support for ASCII data is planned, which will allow XSPEC to analyze spectra from other wavelength regions (optical, radio) transparently to the user. The OGIP data format, both for single spectrum files (Type I) and multiple spectrum files (Type II), is fully supported. These files can be created and manipulated with programs described in Appendix E and the provided links. The programs are described more fully in George et al. 1992 (the directories below refer to the HEAsoft distribution).

· Spectral Data: callib/src/gen/rdpha2.f, wtpha3.f
· Auxiliary Responses: callib/src/gen/rdarf1.f and wtarf1.f

XSPEC also includes an add-in module to read and simulate INTEGRAL/SPI data, which can be loaded by the user on demand.
The INTEGRAL/SPI datasets are similar to OGIP Type II, but contain an additional FITS extension that stores information on the multiple files used to construct the responses. The INTEGRAL Spectrometer (SPI) is a coded-mask telescope, with a 19-element germanium detector array. The spectral resolution is ~500, and the angular resolution is ~3°. Unlike focusing instruments, however, the detected photons are not directionally tagged, and a statistical analysis procedure, using for example cross-correlation techniques, must be employed to reconstruct an image. The description of the XSPEC analysis approach which follows assumes either that an image reconstruction has already been performed (see the SPIROS utility within the INTEGRAL offline software analysis package, OSA), or that the positions on the sky of all sources to be analyzed are already known (which is often the case). Those unfamiliar with INTEGRAL data analysis should refer to the OSA documentation.

Thus, the INTEGRAL/SPI analysis chain must be run up to the event binning level [if the field of view (FoV) source content is known, e.g. from published catalogs, or from IBIS image analysis], or the image reconstruction level. SPIHIST should be run selecting the "PHA" output option, and selecting detectors 0-18. This will produce an OGIP standard type-II PHA spectral file, which contains multiple detector count spectra. In addition, the SPIARF procedure should be run once for each source to be analyzed, plus one additional time to produce a special response for analysis of the instrumental background. If this is done correctly, and in the proper sequence, SPIARF will create a table in the PHA-II spectral file, which will associate each spectrum with the appropriate set of response matrices. The response matrices are then automatically loaded into XSPEC upon execution of the data command in a manner very transparent to the user. You will also need to run SPIRMF (unless you have opted to use the default energy bins of the template SPI RMFs). Finally, you will need to run the FTOOL SPIBKG_INIT. Each of these utilities - SPIHIST, SPIARF, SPIRMF and SPIBKG_INIT - is documented elsewhere, either in the INTEGRAL or (for SPIBKG_INIT) the HEAsoft software documentation.

There are several complications regarding the spectral de-convolution of coded-aperture data. One, already mentioned, is the source confusion issue; there may be multiple sources in the FoV, which lead to different degrees of shadowing on different detectors. Thus, a separate instrumental response must be applied to a spectral model for each possible source, for each detector. This is further compounded by the fact that INTEGRAL's typical mode of observation is "dithering." A single observation may consist of ~10's of individual exposures at raster points separated by ~2°. This further enumerates the number of individual response matrices required for the analysis. If there are multiple sources in the FoV, then additional spectral models can be applied to an additional set of response matrices, enumerated as before over detector and dither pointing. This capability - to model more than one source at a time in a given chi-square (or alternative) minimization procedure - did not exist in previous versions of XSPEC.
For an observation with the INTEGRAL/SPI instrument, where the apparent detector efficiency is sensitive to the position of the source on the sky relative to the axis of the instrument, the \chi^2 statistic is:

    \chi^2 = \sum_{p,d} \sum_I (C_{pd}(I) - \sum_j R_{pdj}(I,E) M_j(E, x_s) - B_{pd}(I, x_b))^2 / \sigma_{pd}(I)^2

where:
· p, d run over instrument pointings and detectors;
· I runs over individual detector channels;
· j enumerates the sources detected in the field at different positions;
· E indexes the energies in the source model;
· x_s are the parameters of the source model, which is combined with the response;
· x_b are the parameters of the background model, expressed as a function of detector channel.

Examination of this equation reveals one more complication; the term B represents the background, which, unlike for chopping, scanning or imaging experiments, must be solved for simultaneously with the desired source content. The proportion of source-to-background counts, even for a bright source such as the Crab, is only ~1%. Furthermore, the background varies as a function of detector and time (dither points), making simple subtraction implausible. Thus, a model of the background is applied to a special response matrix, and included in the de-convolution algorithm.

Arnaud, K.A., George, I.M., Tennant, A.F., 1992. Legacy, 2, 65.
Avni, Y., 1976. ApJ, 210, 642.
Bevington, P.R., 2002 (3rd edition). Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill.
Blissett, R.J., Cruise, A.M., 1979. MNRAS, 186, 45.
George, I.M., Arnaud, K.A., Pence, W., Ruamsuwan, L., 1992. Legacy, 2, 51.
Kahn, S.M., Blissett, R.J., 1980. ApJ, 238, 417.
Loredo, T.J., Epstein, R.I., 1989. ApJ, 336, 896.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., 1992. Numerical Recipes (2nd edition), p. 687ff, CUP.
Wheaton, W.A., et al., 1995. ApJ, 438, 322.
{"url":"http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/manual/XspecSpectralFitting.html","timestamp":"2014-04-18T09:11:54Z","content_type":null,"content_length":"54816","record_id":"<urn:uuid:f5ffe362-7a0d-4e93-8388-0b160ab8e256>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving conjecture for recursive function "f(2n) = 1/(2n+1)"

you mean 1/(n+1), right? (not being picky, just making sure)

Halls meant exactly what he said: [itex]f(2n)=1/(2n+1)[/itex]. Your conjecture is that when [itex]n[/itex] is even, [itex]f(n)=1/(n+1)[/itex]. Another way of saying [itex]n[/itex] is even is saying that [itex]n=2m[/itex], where [itex]m[/itex] is an integer. Apply this to your conjecture: [itex]f(n)=1/(n+1)\;\rightarrow\; f(2m) = 1/(2m+1)[/itex].

To prove some conjecture by induction, you need to show two things:
• That the conjecture is true for some base case, and
• That if the conjecture is true for some m, then it is also true for m+1.

The conjecture is obviously true for [itex]m=1[/itex] as [itex]f(2\cdot1) = 1/(2\cdot1+1) = 1/3[/itex]. All that remains is proving the recursive relationship.
{"url":"http://www.physicsforums.com/showthread.php?t=226005","timestamp":"2014-04-20T18:32:21Z","content_type":null,"content_length":"32680","record_id":"<urn:uuid:11858fce-ca6d-49ad-b7db-d535e70c3e16>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
I need help with my project

I have a project. The project is: In the game of craps, a pass line bet proceeds as follows. Two six-sided dice are rolled; the first roll of the dice in a craps round is called the "come out roll." A come out roll of 7 or 11 automatically wins, and a come out roll of 2, 3, or 12 automatically loses. If 4, 5, 6, 8, 9, or 10 is rolled on the come out roll, that number becomes "the point." The player keeps rolling the dice until either 7 or the point is rolled. If the point is rolled first, then the player wins the bet. If a 7 is rolled first, then the player loses.

Write a program that simulates a game of craps using these rules without human input. Instead of asking for a wager, the program should calculate whether the player would win or lose. The program should simulate rolling the two dice and calculate the sum. Add a loop so that the program plays 10,000 games. Add counters that count how many times the player wins, and how many times the player loses. At the end of the 10,000 games, compute the probability of winning [i.e., Wins / (Wins + Losses)] and output this value. Over the long run, who is going to win the most games, you or the house?

Note: To generate a random number X, where 0 <= X < 1, use X = Math.random();. For example, multiplying Math.random() by 6 and converting to an integer results in a random integer that is between 0 and 5.

And I already have some code, I just have no idea how to do the next step...

    import java.util.Random;

    public class Project2 {
        public static void main(String[] args) {
            Random rand = new Random();
            int grt = 10000; // game run times (not used yet)
            for (int i = 0; i < 100; i++) {
                int dice1 = (int) ((Math.random() * 6) + 1);
                int dice2 = rand.nextInt(6) + 1;
                int dice3 = dice1 + dice2; // sum of the come out roll
                if (dice3 == 2 || dice3 == 3 || dice3 == 12) {
                    System.out.println("you lose");
                } else if (dice3 == 7 || dice3 == 11) {
                    System.out.println("you win");
                } else {
                    System.out.println("Your point is " + dice3);
                }
            }
        }
    }

Reply: Write a method that rolls two dice and returns the value. Then write a method that does a come out roll and returns a win, a loss, or a point. Then write a method that does an entire game: a come out roll and then repeated rolls until the result is determined and a win or a loss is returned. Then in your main you just need to call the game method 10,000 times. It is almost always true of problems that they can be made easy by breaking them into small pieces.

Thanks for your reply. Actually, I'm stuck at the part where I reroll the dice and check whether the roll equals the point or 7. I really have no idea how to do that. Can you give me some ideas how to fix it, please?

Reply: Generate two random numbers 1-6, add them together, test the result for == 7 or whatever else you need.
{"url":"http://www.daniweb.com/software-development/java/threads/445833/i-need-help-with-my-project","timestamp":"2014-04-19T19:53:01Z","content_type":null,"content_length":"39756","record_id":"<urn:uuid:0b3c7409-398c-424a-9b7a-df79ff06bdc8>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
Experiments in Chris Wood on Tensor Networks for Open Quantum Systems

Chris Wood on tensor networks for open quantum systems and completely positive maps

For those of you following the Network Theory series, we've been trying to unify concepts across an a priori seemingly distinct range of topics. For this reason, I jokingly and seriously at the same time call this the grand unified network theory project. This post is more of a news item related to some potentially interesting work. Before getting suckered into working on the network project by John Baez, I was considering topics related to quantum networks. Today I want to mention some recent work I took part in, related to quantum networks and initiated by Chris Wood from the Institute for Quantum Computing (IQC) in Canada. A future direction of the network theory project will be to consider open quantum systems. We might build on and use some of the results appearing in the following preprint.

• C. J. Wood, J. D. Biamonte, and D. G. Cory, Tensor networks and graphical calculus for open quantum systems, arXiv:1111.6950.

There is a story behind how this project all got started, and if you have a moment, you can read it right now. Mike Mosca invited me to IQC to teach my course on tensor networks. Chris Wood must have been bored, but regardless of the reason, he showed up. He was not even enrolled in the course initially, but he liked it enough that he signed up. Chris was already an expert in open quantum systems; he wrote what I consider a very solid honours thesis on the topic.

In his thesis, Chris explains a lot of the background in open quantum systems before going into several research results. You might be thinking, "that's one heck of a masters thesis", but in fact, this is his undergraduate thesis! He got a 1st class degree and a university medal for this, ended up doing a masters at Perimeter Institute and is now working towards his PhD at IQC. In his thesis, he made use of the so called quantum circuits model, and as is typical in the field of quantum information, he drew pictures such as

[figure: a quantum circuit diagram from the thesis]

Here the wedge shaped diagram with edges labelled $A$ and $S$ depicts the so called Bell-state. He could have used a curved line like we did in our paper, but it's just syntax. He drew other diagrams too, for increasingly complicated scenarios, including

[figure: further circuit diagrams]

Where did all these diagrams come from and what do they mean? Well, we're not going to have time to explain that here, but for those that are curious about quantum network theory, I can shamelessly recommend my own lecture series on the topic.

• Youtube series, Lectures on Tensor Network States, QIC 890/891 Selected Advanced Topics in Quantum Information, The University of Waterloo, Waterloo, Ontario, Canada (2011).

If you're not happy with my course, I suggest you make a better one. I even placed all of the LaTeX source for my lecture notes online to download if you wanted to base parts of your new course on what I did. In my course, we talked a lot about using Penrose graphical notation for tensor network states. For instance,

[figure: photo of Roger Penrose at a blackboard]

Here Oxford Professor Roger Penrose is expressing a so called density operator using the graphical tensor notation he pioneered. One of the key citations to his work includes

• R. Penrose, "Applications of negative dimensional tensors," in Combinatorial Mathematics and its Applications, Academic Press (1971).

To get an idea of what sorts of things you can find in this 1971 paper, consider

[figure: excerpt from the 1971 paper]

Here Penrose is explaining what we call "Penrose wire bending duality". As he explains, the input and output of a diagram can be changed at will, by simply bending inputs to outputs and vice versa. To get an idea of what this means exactly, consider the following figure from the paper.
What this is showing is known as the Kraus picture of open systems evolution. To explain this diagram, we have a quantum state $\rho$ acted on by operators $K$. Of course, expressing the known pictures of evolution as string diagrams would not get published in a journal. It is well known that one can express quantum equations in terms of string diagrams, as follows from work done as early as the 1960's and 1970's by Penrose and others. What we did was something different.

We can use Penrose duality and bend one of the wires the other way around. We can then slide a box around the bent wire and manipulate the diagram a bit to arrive at the following

[figure: the diagram after wire bending]

The form we arrive at already has a name. It is called the superoperator picture of open systems evolution. We translated from one picture to another, using pictures. This was the point of the paper. There are several so called pictures of open systems evolution, and we considered how the Penrose graphical notation can be used to transform between them. This is perhaps the simplest case, but it illustrates the key idea. If you are very interested, we encourage you to read the paper and take a look at figure 1. The boxes are the different pictures we consider, and for each arrow we give a transformation between them. This is even explained a bit in the abstract.

• We develop a graphical calculus for completely positive maps and in doing so cast the theory of open quantum systems into the language of tensor networks. We tailor the theory of tensor networks to pictographically represent the Liouville-superoperator, Choi-matrix, process-matrix, Kraus, and system-environment representations for the evolution of open-system states, to expose how these representations interrelate, and to concisely transform between them. Several of these transformations have succinct depictions as wire bending dualities in our graphical calculus: reshuffling, vectorization, and the Choi-Jamiolkowski isomorphism. The reshuffling duality between the Choi-matrix and superoperator is bi-directional, while the vectorization and Choi-Jamiolkowski dualities, from the Kraus and system-environment representations to the superoperator and Choi-matrix respectively, are single directional due to the non-uniqueness of the Kraus and system-environment representations. The remaining transformations are not wire bending duality transformations due to the nonlinearity of the associated operator decompositions. Having new tools to investigate old problems can often lead to surprising new results, and the graphical calculus presented in this paper should lead to a better understanding of the interrelation between CP-maps and quantum theory.

If you have ideas, we'd like to hear them: please feel free to email us. If you have a few quick questions about the paper, Chris Wood will be around today to respond to them. He lives in Waterloo, Canada right now, which is on the East Coast. I'm in Singapore, so if it's the middle of the night in Waterloo, I'll answer them. Now before we go, I should mention what some of you might have noticed. We are using what David Cory suggested as the color convention. In the diagrams, like-colored pictures are summed over. This could of course be replaced by attaching colored diagrams with a connecting wire in the Penrose graphical notation. However, the color convention proved helpful for our work considering open systems. To get an idea of how nice it looks, here is a proof of Penrose's snake equation.
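For the algebraically inclined, the wire-bending step from the Kraus picture to the superoperator picture corresponds to the standard vectorization identity. Here is a small NumPy check (my own illustration, using the column-stacking convention; this is not code from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 2
    K = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))    # one Kraus operator
    rho = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # an arbitrary matrix standing in for a state

    vec = lambda M: M.flatten(order="F")      # column-stacking vec
    lhs = vec(K @ rho @ K.conj().T)           # Kraus action, then vectorize
    rhs = np.kron(K.conj(), K) @ vec(rho)     # superoperator acting on vec(rho)
    print(np.allclose(lhs, rhs))              # True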
{"url":"http://www.azimuthproject.org/azimuth/show/Experiments+in+Chris+Wood+on+Tensor+Networks+for+Open+Quantum+Systems","timestamp":"2014-04-18T16:00:59Z","content_type":null,"content_length":"20935","record_id":"<urn:uuid:718c2316-81f7-463e-a5ca-23871a481b31>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Power equation derivation

I am working on a derivation of the power equation. It is very simple, and I would like to know if it is correct. I started with the equation for mechanical work over a changing path and changing force. This is from Wikipedia, and it agrees with my text.

    W = \int F \cdot ds

where \cdot is the dot product, F is the force vector, and s is the position vector.

Then to make it power I just put (1/\Delta t) in front of it to make:

    P = (1/\Delta t) \int F \cdot ds

Let me know if this checks out.
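A quick numerical sanity check of the poster's formula (my own toy example, with a made-up force profile): compute W by numerical integration and divide by the elapsed time. Note that dividing by \Delta t gives the average power over the interval; instantaneous power would be dW/dt = F \cdot v.

    import numpy as np

    s = np.linspace(0.0, 2.0, 2001)   # position along the path, m
    F = 3.0 * s**2                    # force component along the path, N (arbitrary)
    W = np.trapz(F, s)                # W = integral of F ds = s^3 from 0 to 2 = 8 J
    dt = 4.0                          # elapsed time, s (arbitrary)
    print(W, W / dt)                  # 8.0 J and 2.0 W of average power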
{"url":"http://www.physicsforums.com/showthread.php?t=307889","timestamp":"2014-04-19T15:07:59Z","content_type":null,"content_length":"25310","record_id":"<urn:uuid:b6425e74-2265-42ee-ab0d-6cd1b60dc981>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Fun with expanders starts today

Long in the making, the online course on expanders starts today. In the first week of class: what are the topics of the course, and how to prove that the eigenvalues of the adjacency matrix of a regular graph tell you how many connected components there are in the graph.

2 comments

I'm curious what the Stanford math department thinks of MOOCs? Looking at e.g. Coursera, there's a ton of CS material there, but almost a total vacuum of math. The only exception is Rob Ghrist's class, but that's only an intro calculus class. Top math programs in the country teach the same year-long graduate courses in analysis, algebra and geometry year after year. The professors I've talked to seem to not care about MOOCs at all. It's sort of strange, since our department has both good and, unfortunately, bad teachers. Every year that one specific professor has taught algebraic topology, a large chunk of the incoming Ph.D. class has decided to choose that as a research field, so having a good class in a subject makes a world of difference. Video lectures would seem to provide a great way to provide visualizations of things that are very hard to do on the blackboard. Of course, it probably means enlisting a student who can do all the animations for the professor, but it would seem to be a much better way to teach people how to think about e.g. topology and geometry in terms of pictures. I'm really looking forward to seeing how this medium works for a research-level class. I've looked at a few MOOCs and some of them, e.g. Dan Boneh's crypto, were really well done and really made use of the new medium.
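As a small illustration of the eigenvalue fact mentioned above (my own example, not material from the course): for a d-regular graph, the multiplicity of the adjacency eigenvalue d equals the number of connected components. Two disjoint triangles form a 2-regular graph with two components:

    import numpy as np

    A3 = np.ones((3, 3)) - np.eye(3)          # adjacency matrix of a triangle
    A = np.block([[A3, np.zeros((3, 3))],
                  [np.zeros((3, 3)), A3]])    # two disjoint triangles
    eigs = np.linalg.eigvalsh(A)
    print(np.sum(np.isclose(eigs, 2)))        # 2: one eigenvalue d = 2 per component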
{"url":"http://lucatrevisan.wordpress.com/2013/04/23/fun-with-expanders-starts-today/","timestamp":"2014-04-20T00:38:28Z","content_type":null,"content_length":"75311","record_id":"<urn:uuid:665aed40-b416-47f7-aeac-3d83a22c052c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Use the arc length formula and the given information to find θ. s = 7 m, r = 12 m; θ = ?
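For reference (a worked line added here, not part of the original page): the arc length formula is s = rθ with θ in radians, so

    θ = s / r = (7 m) / (12 m) = 7/12 ≈ 0.583 rad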
{"url":"http://openstudy.com/updates/50d67323e4b0d6c1d541fa03","timestamp":"2014-04-20T11:06:31Z","content_type":null,"content_length":"34915","record_id":"<urn:uuid:a9f7e4d6-e5c6-490c-816d-faacd5b8354c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
Review Problem

A very small block of mass m is placed at the top of a hemispherical frictionless cap of radius R. The block is given a slight nudge (v_i ≈ 0), just enough to make it slide down the cap, which itself sits on top of a horizontal table. Where will it land? Note: The answer must be expressed only in terms of R. If you have trouble dealing with symbols you may assign a value for R (for instance, R = 1 m).
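Following the problem's own hint to take R = 1 m, here is a small numerical sketch of one standard approach (an illustration only, not the official solution): the block leaves the frictionless surface where the normal force vanishes, which gives cos θ = 2/3, and from there it is projectile motion.

    import numpy as np

    # m g cos(theta) = m v^2 / R together with v^2 = 2 g R (1 - cos(theta))
    # gives cos(theta) = 2/3 at the point where the block leaves the cap.
    R, g = 1.0, 9.8
    c = 2.0 / 3.0                       # cos(theta) at departure
    s = np.sqrt(1.0 - c**2)             # sin(theta)
    v = np.sqrt(2.0 * g * R / 3.0)      # speed when leaving the cap
    x0, y0 = R * s, R * c               # departure point (origin at the center of the base)
    vx, vy = v * c, -v * s              # velocity tangent to the sphere, pointing downslope
    t = (vy + np.sqrt(vy**2 + 2.0 * g * y0)) / g   # time to reach the table, y = 0
    print(x0 + vx * t)                  # landing distance from the axis; g cancels, so the answer scales with R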
{"url":"http://www.sewanee.edu/physics/PHYSICS101/Review.htm","timestamp":"2014-04-19T19:34:24Z","content_type":null,"content_length":"6700","record_id":"<urn:uuid:15f417ee-5560-4109-ad03-5032de59a9fe>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometric Functions

TRIGONOMETRIC FUNCTIONS

If we are given y = sin u, where u is a function of x, we may state that, from the general formula,

    Δy/Δu = [sin(u + Δu) - sin u]/Δu
          = cos u (sin Δu)/Δu + sin u (cos Δu - 1)/Δu        (5.3)

using the identity sin(u + Δu) = sin u cos Δu + cos u sin Δu, together with the limits

    lim (Δu -> 0) (sin Δu)/Δu = 1        (5.4)
    lim (Δu -> 0) (cos Δu - 1)/Δu = 0    (5.5)

Then by substituting equations (5.4) and (5.5) into equation (5.3),

    dy/du = cos u        (5.6)

Now we are interested in finding the derivative of the function sin u with respect to x, so we apply the chain rule

    dy/dx = (dy/du)(du/dx)

From the chain rule and equation (5.6), we find

    d(sin u)/dx = cos u (du/dx)        (5.7)

In other words, to find the derivative of the sine of a function, we use the cosine of the function times the derivative of the function. By a similar process we find the derivative of the cosine function to be

    d(cos u)/dx = -sin u (du/dx)

The derivatives of the other trigonometric functions may be found by expressing them in terms of the sine and cosine. That is,

    tan u = sin u / cos u

and by substituting sin u for u, cos u for v, and du for dx in the expression of the quotient theorem we have

    d(tan u) = [cos u d(sin u) - sin u d(cos u)] / cos^2 u

and substituting into equation (5.7), we find that

    d(tan u) = (cos^2 u + sin^2 u) du / cos^2 u = du / cos^2 u = sec^2 u du        (5.8)

Now using the chain rule and equation (5.8), we find

    d(tan u)/dx = sec^2 u (du/dx)

By stating the other trigonometric functions in terms of the sine and cosine and using similar processes, we may find the following derivatives:

    d(cot u)/dx = -csc^2 u (du/dx)
    d(sec u)/dx = sec u tan u (du/dx)
    d(csc u)/dx = -csc u cot u (du/dx)

EXAMPLE: Find the derivative of the function

EXAMPLE: Find the derivative of the function

SOLUTION: Use the power theorem to find

Then find

Combining all of these, we find that

Find the derivative of the following:
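As a quick symbolic check of the derivative formulas above (added here for reference; SymPy output shown in the comments):

    import sympy as sp

    x = sp.symbols('x')
    u = sp.Function('u')(x)
    print(sp.diff(sp.sin(u), x))   # cos(u(x))*Derivative(u(x), x)
    print(sp.diff(sp.cos(u), x))   # -sin(u(x))*Derivative(u(x), x)
    print(sp.diff(sp.tan(u), x))   # (tan(u(x))**2 + 1)*Derivative(u(x), x), i.e. sec^2(u) du/dx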
{"url":"http://www.tpub.com/math2/54.htm","timestamp":"2014-04-17T15:26:26Z","content_type":null,"content_length":"22723","record_id":"<urn:uuid:f4659173-102d-4731-91af-99d52e636ec4>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
In our previous example, we demonstrated how to calculate the Kaplan-Meier estimate of the survival function for time to event data. A related quantity is the Nelson-Aalen estimate of cumulative hazard. In addition to summarizing the hazard incurred ...

An article attacking R gets responses from the R blogosphere – some reflections
In this post I reflect on the current state of the R blogosphere, and share my hopes for the future

Happy Birthday GGD! The 10 Most Popular Posts Since GGD's Launch
The first post on Getting Genetics Done was one year ago today. To celebrate, here are the top 10 most viewed posts since GGD launched last year. Incidentally, nine of the ten are tutorials on how to do something in R. Thanks to all the readers and all...

Generalized linear mixed effect model problem
I am trying to compare cohort differences in infant mortality using a generalized linear mixed model. I first estimated the model in Stata: xi: xtlogit inftmort i.cohort, i(code) which converged nicely: Fitting comparison model: Iteration 0: log likelih...

Using the "foreign" package for data conversion
I was in a rush to convert an SPSS data file into Stata format. Somehow my Stattransfer v.8 for Linux was lost and I did not want to pause my work and go back to Windows just to get this one file converted. So I fired up Emacs+ESS+R, loaded the "foreign" package, did t...

Comparison of plots using Stata, R base, R lattice, and R ggplot2, Part I: Histograms
One of the nicer things about many statistics packages is the extremely granular control you get over your graphical output. But I lack the patience to set dozens of command line flags in R, and I'd rather not power the computer by pumping the mouse trying to set all the clicky-box options in Stata's graphics editor. I want something...

High tech, low tech?
Interesting article: http://www.iq.harvard.edu/blog/sss/archives/2006/01/tools_for_resea.shtml

Side by side analyses in Stata, SPSS, SAS, and R
I've linked to UCLA's stat computing resources once before on a previous post about choosing the right analysis for the questions you're asking and the data types you have. Here's another section of the same website that has code to run an identical analysis in all of these statistical packages, with examples to walk through (as they note...

Documentation and tutorial roundup
I recently lost my documentation folder (oops), so I had to go online and retrieve the documentation files and tutorials that I find indispensable for working. I decided I'd save myself and everyone else the trouble by posting the list here. All of the files are available in PDF format. All R manuals Scilab documentation.
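Since the first blurb above stops mid-sentence, here is a minimal hand-rolled version of the Nelson-Aalen estimator it refers to (my own sketch in Python, with made-up data and no tied event times): the cumulative hazard is H(t) = sum over event times t_i <= t of d_i / n_i, where n_i is the number at risk just before t_i.

    import numpy as np

    times = np.array([5, 6, 8, 10, 12, 15, 20])      # event/censoring times, sorted
    observed = np.array([1, 1, 0, 1, 1, 0, 1])       # 1 = event, 0 = censored

    n_at_risk = len(times) - np.arange(len(times))   # risk set size just before each time
    H = np.cumsum(np.where(observed == 1, 1.0 / n_at_risk, 0.0))
    for t, h in zip(times, H):
        print(t, round(h, 3))                        # time, estimated cumulative hazard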
{"url":"http://www.r-bloggers.com/tag/stata/","timestamp":"2014-04-18T13:08:40Z","content_type":null,"content_length":"37813","record_id":"<urn:uuid:9955c1f3-f166-47d7-a48e-279d7748fe4e>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
June 1st 2010, 05:25 AM #1

Each year a car depreciates by 9%. Calculate the number of years after which the value of the car is 47% of its original value.

I'm aware of the compound interest formula but I don't know if it would be any use here or how to reverse it, because I'm not looking for a precise number. Should I create a number such as x and try somehow to change the compound interest formula to its inverse?

Last edited by mr fantastic; June 1st 2010 at 05:50 AM. Reason: Re-titled.

Bump. Still no closer to a clue and it's been ages. Hope the admin doesn't mind me bumping this.

Well, to be blunt, your original post doesn't make much sense; what's "reverse" and "inverse"? Once more, to find the number of periods, logs ARE REQUIRED. Finding the "value left" after a GIVEN period is a different story: (1 + i)^n. With i = -.09 and n = 8: (1 - .09)^8 = .91^8 = .47025...
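The replies' arithmetic can be confirmed in a couple of lines (added here for illustration): solve 0.91^n = 0.47 with logs.

    import math

    n = math.log(0.47) / math.log(0.91)   # number of years
    print(n)                              # ~8.01, i.e. about 8 years
    print(0.91 ** 8)                      # ~0.47025, matching the reply above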
{"url":"http://mathhelpforum.com/business-math/147244-depreciation.html","timestamp":"2014-04-17T22:35:56Z","content_type":null,"content_length":"49388","record_id":"<urn:uuid:83915ae3-c1ae-4faf-b816-0b98d0b6617f>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Travel Time & Distance to Tavira

by heidic, Sep 1, 2007 at 10:01 PM

Does anyone know how far it is between Evora & Tavira? How long is the driving time?

Re: Travel Time & Distance to Tavira

By road till Castro Verde (131 km) and, from there, by motorway, it is about 250 km and 2h 45m. By motorway, leaving Évora in the direction of Lisbon and, after Vendas Novas, turning down to the south - Algarve - it is 310 km, 3 hours. I would prefer the first option!

Re: Travel Time & Distance to Tavira

Here's the info from ViaMichelin about the quickest way. There are other options, from the most economical to the shortest. This is the quickest:

Departure: Évora
Destination: 8800 Tavira
Date: 02/09/2007
Vehicle type: Automobile, Hatchback
Route type: Quickest

Time and distance
Time: 2h52, including 1h02 on motorways
Distance: 250 km, including 113 km on motorways, of which 23 km on scenic roads

Costs: 25.93 EUR
Toll costs: 5.00 EUR
Petrol costs: 20.93 EUR
Road tax cost: 0.00 EUR

Re: Travel Time & Distance to Tavira

Great! Thanks for the tips and routes
{"url":"http://forum.virtualtourist.com/Evora-288149-7-3373244/Travel-Time-Distance-to-Tavira.html","timestamp":"2014-04-20T11:05:01Z","content_type":null,"content_length":"71025","record_id":"<urn:uuid:42c93c95-7d6c-4504-a7c8-3b1c6554bc07>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
Factorization (using eisenstein)

March 26th 2007, 05:24 AM #1
Junior Member, Feb 2006

Can someone please help me with the following question and explain to me what they are doing at each step:

In each case below, decide whether or not the given polynomial is irreducible over Q, justifying your answer in each case. If the polynomial is not irreducible, give its complete factorization into Q-irreducible factors.

i) (x^12) - 18(x^6) - 175
ii) (x^6) + (x^3) + 1
iii) (x^10) + 1

March 26th 2007, 07:02 AM #2
Global Moderator, Nov 2005, New York City

A polynomial is reducible in Z[x] if and only if it is reducible in Q[x]. Begin this problem by writing y^2 - 18y - 175 = 0, where y = x^6. Find the zeros and hence find the factors:

    y^2 - 18y - 175 = (y - 25)(y + 7), so
    x^12 - 18x^6 - 175 = (x^6 - 25)(x^6 + 7) = (x^3 - 5)(x^3 + 5)(x^6 + 7).

The two cubic factors are irreducible in Z[x]: x^3 - 5 has no rational zero (and has degree three), and x^3 + 5 has no rational zero (and has degree three). The remaining factor, x^6 + 7, is irreducible over Q by Eisenstein's criterion with p = 7.
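A quick computer-algebra check of that factorization (added here; SymPy factors over the rationals):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.factor(x**12 - 18*x**6 - 175))
    # (x**3 - 5)*(x**3 + 5)*(x**6 + 7)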
{"url":"http://mathhelpforum.com/advanced-algebra/12986-factorization-using-eisenstein.html","timestamp":"2014-04-16T22:03:06Z","content_type":null,"content_length":"32988","record_id":"<urn:uuid:4ac3a689-1f10-4fb6-927a-f1eb874a04db>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Signal Cancellation Improves DDS SFDR

Direct digital synthesizers (DDS) are commonly used for sinusoidal signal generation in RF communication systems and test equipment. An integral component of a DDS is the digital-to-analog converter (DAC). Although a DAC is intended to perfectly reproduce an analog signal from its digital equivalent, the conversion process is rarely perfect. The DAC's digital resolution (number of bits) is a limiting factor that introduces quantization errors resulting in a noise floor. Other errors, such as linearity errors, create undesired harmonics in the DAC/DDS output spectrum, limiting the spurious-free dynamic range (SFDR). Fortunately, a DDS architecture has been developed that reduces harmonic spurious signals due to the non-ideal characteristics of the DAC, resulting in a significant improvement in SFDR performance.

In general, harmonics can usually be filtered without much difficulty. However, the conversion of a digital signal to an analog signal by a DAC lies in the realm of sampling theory. This means that harmonic signals do not always appear at obvious frequencies according to the rules of digital signal processing (DSP). For example, suppose a DAC sampling at 100 MHz generates a sinusoid with a frequency of 26 MHz. The third harmonic would be expected to appear at 78 MHz, where it can be easily filtered. In fact, a third-harmonic image will also appear at 22 MHz because of the effects of sampling. Because it is only 4 MHz away from the fundamental signal, it is difficult to filter without also attenuating the fundamental signal. Obviously, if harmonics could be selectively attenuated, the SFDR performance of the DAC could be improved.

As mentioned earlier, harmonic distortion introduced by the DAC is usually the limiting factor of SFDR performance in a DDS. Present solutions for improving SFDR involve frequency planning and/or the addition of external filtering at the DAC outputs. An alternative solution is to predistort the digital signal as it arrives at the input to the DAC in such a way that it eliminates the harmonic signal, a technique that is a variation on a method called destructive interference. Summing two sinusoids of the same frequency and equal amplitude, but opposite phase, will result in cancellation of both sinusoids.

The mathematical explanation of this concept is best understood by first considering the signals concerned in the context of a DAC-generated sinusoid. The first is the primary sinusoid with amplitude P and frequency ω_P. An arbitrary spurious signal has amplitude S and frequency ω_S. The frequencies of the primary and spurious sinusoids are related by ω_S = Nω_P (where N > 1). Furthermore, in the special case where the spurious sinusoid is harmonic, N is an integer greater than 1. The amplitudes of the primary and spurious sinusoids are related by S = aP, where typically a << 1. A canceling sinusoid with amplitude C is introduced at the same frequency (ω_S) as the spurious sinusoid, but offset from the spurious sinusoid by some arbitrary angle θ. The amplitudes of the canceling and spurious sinusoids are related by C = bS.
However, since the spurious sinusoid and the canceling sinusoid both possess the same frequency, they combine to form a single resultant sinusoid of amplitude R and frequency w[S ]Combining the relationships between the amplitudes of P, S, and C along with the fact that the spurious and cancelling signals are offset by the angle q, it can be shown that the amplitude of the resultant sinusoid (R) is given by: The most notable feature of this expression occurs when the amplitude of the canceling sinusoid is identical to the amplitude of the spurious sinusoid and phase shifted by 180 deg. This occurs when b = 1 and q = 180 deg. (p rad). Under such conditions, R = 0. With the above expression for R, it is instructive to examine the quantitative relationship between R, b and q. This is best done by considering the ratio, R/aP, which gives the amplitude of the resultant sinusoid relative to the amplitude of the spurious sinusoid. This ratio, expressed in decibels, is: Figure 1 demonstrates the way in which R varies as a function of both b and q. The axis labeled "Amplitude Error" equates to b values that deviate from unity over a range of 5 percent. The axis labeled "Phase Error" equates to q values that deviate from 180 deg. over a range of 5 deg. Notice that the four corners of the surface plot are local maxima and register approximately -20 dB. This means that if the phase of the canceling signal is within 5 deg. of being completely out of phase with the spurious signal and is matched to within 5 percent of the amplitude of the spurious signal, then the resultant signal can be expected to achieve a 20 dB amplitude reduction with respect to the level of the original spurious signal. A basic DDS architecture is comprised of an accumulator, phase-toamplitude converter, and a DAC. This structure is ideally suited for the implementation of the destructive interference concept. A cancellation signal can be generated by adding a duplicate DDS path, excluding the DAC (Fig. 2). However, two modifications must be made to the primary DDS path. The first is the inclusion of an adder inserted between the primary phase-to-amplitude converter and the DAC in order to facilitate the combining of the cancellation signal with the primary signal. The second is a multiplier that has the primary frequency tuning word as one input and a user-specified frequency scaling value as the other input. This provides the ability to adjust the frequency of the cancellation signal. However, because the frequency of the cancellation signal is always an integer multiple of the primary signal (i.e., a harmonic), the design of the multiplier is somewhat simplified (integer rather than floating point). In addition to the two changes in the primary DDS path, the "cancellation" DDS also requires two modifications (Fig. 2). The first is the insertion of an adder between the accumulator and the input to the phase-to-amplitude converter. This allows the user to apply a phase offset ( ) to the cancellation signal relative to the primary signal. The second is a multiplier between the output of the cancellation phase-to-amplitude converter and the input to the adder that now precedes the DAC. This allows the user to scale the amplitude of the cancellation signal. Page Title The ability of the DDS to produce precise integer multiples of the primary frequency is key to the concept of destructive interference. 
The cancellation DDS can be designed with less complexity than the primary DDS, because the harmonic spurious signals generated by the DAC are usually very small compared to the primary signal. Typically, harmonic spurs are -50 dBc or smaller. This represents a cancellation signal that is no more than 0.32 percent of the DAC fullscale output, which means the upper 8 b of the DAC range are not required for generating the cancellation sinusoid. Thus, if the primary DDS is designed with a 14-b DAC, the cancellation DDS need only be designed with a 6-b output (the 14-b DAC resolution less the 8 unused most significant bits). This, in turn, implies that the phase-to-amplitude converter of the cancellation DDS only requires phase resolution of 9 b. This is based on the "rule of thumb" for DDS design, which states that the phase resolution of the phase-to-amplitude converter must be at least 3 b more than its amplitude resolution in order to maintain 0.5 LSB amplitude accuracy. Thus, the reduced amplitude requirement for the cancellation DDS means that the cancellation DDS requires much less hardware than the primary DDS. The cancellation DDS can be further simplified. The cancellation DDS makes use of a multiplier prior to the accumulator in order to generate the desired harmonic frequency. However, since an accumulator is nothing more that a recursive adding structure, and multiplication is commutative over addition, then the multiplier can be placed after the accumulator instead of before it. Since the primary and cancellation accumulators operate in parallel, the cancellation accumulator is redundant. This leads to a simpler cancellation DDS as shown in Fig. 3. Figure 3 also shows the reduced complexity of the phase-to-always amplitude converter by identifying smaller input and output data bus widths (Q and S, respectively). One small problem has been ignored to this point. When the primary and cancellation signals are summed prior to the DAC, an overflow condition will exist. This is because the phase-to-amplitude converter of the primary DDS generates a full-scale sinusoid by design. Any signal added to the full-scale output of the primary phase-to-amplitude converter will necessarily cause an overflow. This is easily prevented by slightly attenuating the output of the primary phase-to-amplitude converter to create enough headroom for the cancellation signal (Fig. 4). The amount of required attenuation depends on the largest cancellation signal that can be generated by the cancellation DDS. The largest cancellation signal is based on S (the data bus width at the output of the cancellation phaseto-amplitude converter). Given a D-bit DAC and an S-bit maximum cancellation signal, the required attenuation value is given by the formula below. For example, given a 12-b DAC and a 4-b maximum cancellation signal, the attenuation value is 1-2(^4-^12)=0.99609375: Extending this concept to a multichannel harmonic suppression technique by replicating the "simplified cancellation DDS" from Fig. 2 is quite simple. This concept is shown in Fig. 5. Note that each cancellation DDS has its own frequency, phase, and amplitude control. All of the cancellation channels sum with the primary signal prior to the DAC. The only caveat to the multichannel implementation is that the attenuation value for the headroom adjustment must take into consideration the number of cancellation channels (N). 
This results in a slightly modified version of the attenuation formula: The actual amplitude and phase values required to destructively interfere with a harmonic spur depend upon the frequency of the primary sinusoid and any internal nonlinearities of the DAC. Due to this variability, the amplitude and phase settings for the cancellation DDS must be determined empirically. To cancel a harmonic spur, first its actual frequency must be determined. Recall that the effects of sampling may position a harmonic spur at a frequency other than its expected harmonic position. Its actual frequency location can be determined by the following procedure. First, let f[S] be the sample rate of the DAC, f[p] the frequency of the primary sinusoid, f[H] the harmonic frequency, and f[SPUR] be the frequency of the harmonic spur after correcting for the effects of sampling. To find f[H] multiply f[p] by the harmonic number, N (i.e., N = 2 for the 2nd harmonic, N = 3 for the 3rd harmonic, etc.). Next, find the remainder, R, after dividing f[H] by f[S]. If R S/2, then f[SPUR] = R, otherwise f[SPUR] = f[S]- R. With knowledge of the harmonics spur's actual frequency location a spectrum analyzer can be used to determine its amplitude relative to the primary sinusoid. This is done by first noting that the spur magnitude relative to the primary magnitude is measured in units of dBc. For example, if the primary registers -12dB on the spectrum analyzer and the spur measures -71 dB, then the dBc value is -71 - (-12) = -59 dBc. Hence, the voltage relationship between the spur and the primary is: Since the voltage level of the primary is directly related to the full-scale range of the DAC, then the above ratio should define the range of the required cancellation signal. However, the magnitude of a sinusoid generated by a DAC depends on its frequency. This frequency dependency is deterministic and is defined by the well-known sin(x)/x (or sinc) magnitude response. Since the cancellation sinusoid is generated at the input to the DAC, its amplitude must be scaled to compensate for the sinc response of the DAC. The required scale factor is: So, the required cancellation amplitude relative to the full scale input of the DAC is given by the formula below. This quantity represents the fraction of the full-scale input of the DAC necessary to produce a cancellation sinusoid of the appropriate amplitude. The actual amplitude adjustment code, A[SCALE](Fig.5), depends on the DAC resolution (D bits) and the cancellation DDS resolution (S bits). Once the appropriate amplitude code is entered, the spur and cancellation amplitudes should be fairly well matched. Although a spectrum analyzer can help in the determination of the ASCALE value, it cannot help with the value, because a spectrum analyzer cannot measure phase. The determination of the q value requires a trial-and-error approach in order to close in on the proper phase code for the cancellation DDS. A future installment on this article will examine performance results using the AD9912, a new 1-GSamples/s DDS with two spurious reduction channels. The average spurious suppression will be shown based on slight part-to-part variability. Stability concerns over supply and temperature will also be addressed. Analog Devices, Inc., 804 Woburn St., Wilmington, MA 01887; (800) ANALOGD, FAX: (781) 937-1021, Internet: www.analog.com. 1. Roger Huntley, Jr. et al., Synthesizer Structures and Methods That Reduce Spurious Signals, United States Patent No. 7,026,846 B1 April 11, 2006. 2. 
DDS Tutorial, http://www.analog.com/UploadedFiles/Tutorials/450968421DDS_Tutorial_rev12-2-99.pdf.
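The formulas referenced above appeared as figures in the original article and are not reproduced in this text-only copy. As a rough, non-authoritative sketch of the procedure just described, the following Python fragment folds a harmonic back into the first Nyquist zone, computes the headroom attenuation (which reproduces the article's 0.99609375 example for a 12-b DAC and a 4-b cancellation signal), and estimates the cancellation amplitude from a dBc measurement; the sinc-compensation step and the example frequencies are assumptions rather than values taken from the article.

```python
import math

def spur_frequency(f_p, harmonic, f_s):
    """Fold the Nth harmonic of the primary tone back into the first
    Nyquist zone (0 .. f_s/2), as described in the article."""
    f_h = harmonic * f_p              # harmonic frequency
    r = math.fmod(f_h, f_s)           # remainder after dividing by the sample rate
    return r if r <= f_s / 2 else f_s - r

def headroom_attenuation(d_bits, s_bits, n_channels=1):
    """Attenuation applied to the primary phase-to-amplitude output so that
    adding the cancellation channel(s) cannot overflow the D-bit DAC input.
    For D=12, S=4 and one channel this gives 0.99609375, as in the article."""
    return 1.0 - n_channels * 2.0 ** (s_bits - d_bits)

def cancellation_amplitude(spur_dbc, f_spur, f_p, f_s):
    """Approximate cancellation amplitude as a fraction of DAC full scale:
    the dBc reading converted to a voltage ratio, then compensated for the
    DAC's sin(x)/x roll-off (a plausible reconstruction of the omitted formulas)."""
    ratio = 10.0 ** (spur_dbc / 20.0)                 # e.g. -59 dBc -> ~0.00112
    sinc = lambda f: 1.0 if f == 0 else math.sin(math.pi * f / f_s) / (math.pi * f / f_s)
    return ratio * sinc(f_p) / sinc(f_spur)

# Example with made-up numbers: 1-GSample/s DAC, 171-MHz primary, 2nd harmonic
f_s, f_p = 1.0e9, 171.0e6
print(spur_frequency(f_p, 2, f_s))        # 342 MHz (no folding needed here)
print(headroom_attenuation(12, 4))        # 0.99609375
print(cancellation_amplitude(-59, 342e6, f_p, f_s))
```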
{"url":"http://mwrf.com/print/systems/signal-cancellation-improves-dds-sfdr","timestamp":"2014-04-21T05:52:31Z","content_type":null,"content_length":"29223","record_id":"<urn:uuid:0c690198-96e1-4682-a113-a0baef1b9ee0>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Write formula using quantifiers November 12th 2012, 09:29 AM Write formula using quantifiers Hello there. I need to write a following formula using quantifiers, but it has to be in prenex normal form. The formula is : function f is not weakly decreasing and not weakly increasing. I got to here: $\exists x \exists y(x \le y \wedge f(x)>f(y)) \wedge \exists x\exists y(x \le y \wedge f(x)<f(y))$ by negating : $(\forall x \forall y(x \le y \wedge f(x)\le f(y)) \vee (\forall x \forall y(x \le y \wedge f(x)\le f(y))$ But I have no idea how to proceed, since my solution is not in prenex normal form. Please help. November 12th 2012, 10:13 AM Re: Write formula using quantifiers How are those terms, weakly decreasing and weakly increasing defined? November 12th 2012, 10:39 AM Re: Write formula using quantifiers Weakly increasing- $(\forall x \forall y(x \le y \wedge f(x)\le f(y))$ Weakly decreasing- $(\forall x \forall y(x \le y \wedge f(x) \geq f(y))$ I came up with this: $\exists x \exists y\exists a \exists b((x \le y \wedge (f(x)>f(y))) \wedge (a \le b \wedge (f(a)<f(b)))$ But is it correct? If so then why and how can I derive it from those two formulas above? November 12th 2012, 11:29 AM Re: Write formula using quantifiers Weakly increasing- $(\forall x \forall y(x \le y \wedge f(x)\le f(y))$ Weakly decreasing- $(\forall x \forall y(x \le y \wedge f(x) \geq f(y))$ I came up with this: $\exists x \exists y\exists a \exists b((x \le y \wedge (f(x)>f(y))) \wedge (a \le b \wedge (f(a)<f(b)))$ But is it correct? If so then why and how can I derive it from those two formulas above? Are you sure that it is not an implication? Weakly increasing- $(\forall x \forall y(x \le y \Rightarrow f(x)\le f(y))$ Weakly decreasing- $(\forall x \forall y(x \le y \Rightarrow f(x) \geq f(y))$ Then the negation you have would be correct. November 12th 2012, 11:50 AM Re: Write formula using quantifiers Yes, I'm sorry, there should be an implication. Weakly increasing- $(\forall x \forall y(x \le y \rightarrow f(x)\le f(y))$ Weakly decreasing- $(\forall x \forall y(x \le y \rightarrow f(x) \geq f(y))$ But now I have to bring it to prenex normal form. Is there some other way to arrive to a proper formula without converting my negation from above using an algorithm to prenex normal form like I did two messages prior? Is my answer from above even correct? November 12th 2012, 12:06 PM Re: Write formula using quantifiers Yes, I would accept that. But I don't know the level of rigor expected of you.
{"url":"http://mathhelpforum.com/discrete-math/207347-write-formula-using-quantifiers-print.html","timestamp":"2014-04-18T01:37:43Z","content_type":null,"content_length":"10900","record_id":"<urn:uuid:f4083fc9-0314-4edb-ad68-4665d76f3384>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Bug 15411 – 'format' adds a leading space depending on 'digits' Playing with the 'digits' argument in 'format' can lead to an additional leading space in some cases: > format(9995, digits = 3) [1] " 9995" as compared to: > format(9994, digits = 3) [1] "9994" First of all, it seems to affect only 'numeric', but not 'integers': > seq <- 9990:10000 > class(seq) [1] "integer" > vapply(seq, function(x) format(x, digits = 3), "") [1] "9990" "9991" "9992" "9993" "9994" "9995" "9996" "9997" "9998" [10] "9999" "10000" > class(as.numeric(seq)) [1] "numeric" > vapply(as.numeric(seq), function(x) format(x, digits = 3), "") [1] "9990" "9991" "9992" "9993" "9994" " 9995" " 9996" " 9997" " 9998" [10] " 9999" "10000" Note the leading space from 9995 to 9999. It also seems to happen only for numbers that would round up to the next power of 10 when printed to the requested number of significant digits. Consider the following example: > seq <- as.numeric(99990:100000) > vapply(seq, function(x) format(x, digits = 4), "") [1] "99990" "99991" "99992" "99993" "99994" "1e+05" "1e+05" "1e+05" "1e+05" [10] "1e+05" "1e+05" > vapply(seq, function(x) format(x, scientific = FALSE, digits = 4), "") [1] "99990" "99991" "99992" "99993" "99994" " 99995" " 99996" " 99997" [9] " 99998" " 99999" "100000" > print(seq, digits = 4) [1] 99990 99991 99992 99993 99994 99995 99996 99997 99998 99999 [11] 100000 > print(seq[6:10], digits = 4) [1] 1e+05 1e+05 1e+05 1e+05 1e+05 Finally, the function 'format.AsIs' does not show the same behavior: > format.AsIs(9994, digits = 3) [1] "9994" > format.AsIs(9995, digits = 3) [1] "9995" > format.default(9994, digits = 3) [1] "9994" > format.default(9995, digits = 3) [1] " 9995" It seems to affect similarly Windows and Linux (R 3.0.1); see the thread about this problem on the R-help list: I don't know if it is a bug or the expected behavior -- in which case, I could not find the reason of it in the documentation. The use of the argument 'trim = TRUE' does remove the leading space, but seems more of a workaround to me. Note also that I could not find a bug that seems related to that problem here. Here is my R version: > version platform x86_64-pc-linux-gnu arch x86_64 os linux-gnu system x86_64, linux-gnu major 3 minor 0.1 year 2013 month 05 day 16 svn rev 62743 language R version.string R version 3.0.1 (2013-05-16) nickname Good Sport Comment 1 Aleksey Vorona 2013-08-15 00:58:33 UTC The reason for this behaviour is in format.c The formatReal() function uses scientific() to estimate how much space would a number take if printed in a scientific form. It then uses this estimates to calculate how much space the number would take in the fixed form. The issue is that scientific() takes in the account "digits" parameter, and rounds up 9995. Later on, since " 9995" is still no worse than "1e+04", formatReal() decides to use fixed point format. But it does not bother to recalculate how much space the number would take in the fixed point format. As a result, estimate for "10000" is used. You can check the bug with this function as well: > format.info(9995, digits = 3) [1] 5 0 0 > format.info(9994, digits = 3) [1] 4 0 0 Easier to check for the value, rather than keep an eye for a loose space. Comment 2 Mathieu Basille 2013-08-15 02:19:39 UTC Dear Aleksey, Thanks to dig into this. I appreciate your answer, and I have to admit that it somewhat makes sense to me (note: I'm definitely not a R guru). However, would you consider this a bug? I find this behavior very hard to debug. 
It took me a couple of hours to understand where the problem lied, and then a few messages and another couple of hours to extract a reproducible example, before I could finally fix the problem with the 'trim' argument of format. What I mean is that, in real-world scenarios, it can be almost impossible to debug this problem, which is either not documented, or not easily found. Not mentioning the fact that this behavior affects only numeric and not integers... Comment 3 Aleksey Vorona 2013-08-15 06:07:18 UTC I am not a guru either. Simply trying to contribute a bit. I am trying to create a patch to fix it. I have a patch, which fixes your particular issue (I changed scientific() to return a flag if it rounded up the number and used that flag in the width computation). However, I found several other cases, which are not fixed by the patch I have. I will work on the patch a bit more tomorrow. Just to give you an idea on what I am trying to fix, consider this: > format(c(1, 9995, 1119996), digits=3); [1] " 1" " 9995" "1119996" That is even worse, is not it? I do think it is a bug. And trim=TRUE is only a partial solution. Just because format.info() does not have trim argument. Also, similar code is used in formatComplex(), which I have not debugged in the similar way yet... Comment 4 Aleksey Vorona 2013-08-15 06:13:17 UTC Created attachment 1472 [details] Patch v1 After re-reading the documentation, I am thinking the follow-up bug I was trying to fix is not a bug. This is a patch, which fixes the particular issue, reproducible with > format(9995, digits=3) Comment 5 Aleksey Vorona 2013-08-15 07:08:59 UTC A way to reproduce the same bug with complex numbers: > z = complex(real=9994,imaginary=9995) > format(z, digits=3) [1] "9994+ 9995i" > format.info(z, digits=3) [1] 4 0 0 5 0 0 > z = complex(real=9994,imaginary=9993) > format(z, digits=3) [1] "9994+9993i" > format.info(z, digits=3) [1] 4 0 0 4 0 0 I will try to update the patch tomorrow. Comment 6 Mathieu Basille 2013-08-15 15:48:00 UTC Thanks Aleksey for your contribution. Let me know if I can further help on this bug (notably test the patch), otherwise, I'll just keep an eye on it! Comment 7 Aleksey Vorona 2013-08-15 21:27:32 UTC Another related bug (in an unpatched R): > format(complex(real=100,imaginary=2), digits=2) [1] "100+0i" Notice that the imaginary part is lost completely. Sorry I keep updating this bug, but I feel the need to record the issues I am passing by, so that the bug will be fixed in full. Eventually. Comment 8 Aleksey Vorona 2013-08-15 22:39:49 UTC Created attachment 1473 [details] Patch v2 I have fixed the patch to work properly with real numbers in all situations I could think of. I did not write any unit tests for the cases, because I can not find unit tests for formatReal function. Here are two additional important cases, which were not handled properly by the previous patch: > format(c(94, 100, -95), digits=1); [1] " 94" "100" "-95" > format(c(94, 100, 95), digits=1); [1] " 94" "100" " 95" The spaces are expected in the last case, because the longest number is "100". The previous patch would've swallowed this spaces, because one of the numbers has been rounded up. The latest patch takes this into account and compensates for rounding up only if *the* longest number was rounded up. ("the longest" means the one with the longest representation in the Infinite precision "F" Format). Also, negative number were not compensated properly by the previous patch. I am happy with the fix for formatReal(). 
formatComplex() bug, on the other hand, is different. The problem is that it rounds up the complex number to the requested number of significant digits. But if real and imaginary part are more than `digits` orders of magnitude apart, one of them gets rounded up to zero. All the formatting is applied to that rounded up complex number. Because of that, the fix I have for formatReal() would not work for formatComplex(). I am going to spin up a separate bug about formatComplex(). Comment 9 Duncan Murdoch 2013-08-27 02:47:27 UTC Fixed in R-devel (and soon in R-patched). Comment 10 Mathieu Basille 2013-08-27 04:15:05 UTC (In reply to comment #9) > Fixed in R-devel (and soon in R-patched). This is good news to read! Many thanks Duncan and certainly Aleksey to dig into this very peculiar bug, and take the time to fix it.
{"url":"https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=15411","timestamp":"2014-04-20T13:19:53Z","content_type":null,"content_length":"45045","record_id":"<urn:uuid:9e93a060-89f3-4744-bff5-b017c9b19df8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Decimal to Binary conversion November 1st 2009, 10:54 AM #1 Apr 2009 Decimal to Binary conversion I need help in figuring out what im doing wrong: -15.625 ---> 11110001.10100000 OR 11110001.101 but those turned out to be wrong. any ideas? Um, $-15.625=-(2^3+2^2+2^1+2^0+2^{-1}+2^{-3})=-(1111.101)_2$ Are you trying to "sign" it as a computer would, to store it in bytes? This is different than a strictly mathematical conversion. Thats what i was looking for. -1111.101 November 2nd 2009, 06:18 AM #2 Senior Member Apr 2009 Atlanta, GA November 3rd 2009, 10:22 AM #3 Apr 2009
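For readers following along, here is a small sketch (not from the thread) of the repeated-doubling method behind the accepted answer; it produces the plain sign-magnitude form -1111.101 rather than a two's-complement byte layout, which is the distinction the reply about computer storage was drawing.

```python
def to_binary(x, frac_bits=8):
    """Convert a decimal number to a signed binary string (sign-magnitude form),
    e.g. -15.625 -> '-1111.101'. This is the mathematical form, not storage bits."""
    sign = "-" if x < 0 else ""
    x = abs(x)
    int_part, frac_part = int(x), x - int(x)
    bits = bin(int_part)[2:]            # integer part, e.g. 15 -> '1111'
    frac = ""
    for _ in range(frac_bits):          # repeated doubling for the fractional part
        frac_part *= 2
        frac += str(int(frac_part))
        frac_part -= int(frac_part)
        if frac_part == 0:
            break
    return sign + bits + ("." + frac if frac else "")

print(to_binary(-15.625))   # -1111.101
```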
{"url":"http://mathhelpforum.com/number-theory/111719-decimal-binary-conversion.html","timestamp":"2014-04-17T19:00:09Z","content_type":null,"content_length":"32798","record_id":"<urn:uuid:e86b0251-84d9-4ffd-8f6d-6bee34a41484>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Reflexive short-cutting inequality test. This is functionally identical to the OCaml polymorphic inequality test < except that it is total even on floating-point NaNs. More importantly, it will more efficiently short-cut comparisons of large data structures where subcomponents are identical (pointer equivalent). May fail when applied to functions. See also: =?, <=?, >?, >=?.
{"url":"http://www.cl.cam.ac.uk/~jrh13/hol-light/HTML/.ltq.html","timestamp":"2014-04-20T00:50:33Z","content_type":null,"content_length":"1348","record_id":"<urn:uuid:658b6a1c-14b8-48d9-aff4-e25418edce3e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Kepler's Conjecture can be phrased: Is there a better stacking of oranges than the pyramids found at the fruit stand? In pyramids, the oranges fill just over 74% of space. Can a different packing do better?
In August 1998, nearly 400 years after Kepler first made his conjecture, Thomas Hales, with the help of his graduate student, Samuel Ferguson, confirmed the conjecture, and solved Hilbert's 18th problem. These pages give the broad outlines of the proof of the Kepler conjecture in the most elementary possible terms. Along the way the history of the Kepler conjecture is sketched.
• Hilbert: It seems `obvious' that Kepler's conjecture is correct. Why is the gulf so large between intuition and proof? "Geometry taunts and defies us...."
• Harriot and Kepler: The genesis of Kepler's conjecture. The pyramid stacking of oranges is known to chemists as the face-centered cubic packing. It is also known as the cannonball packing, because it is commonly used for that purpose at war memorials.
Theorem (Kepler conjecture). No packing of balls of the same radius in three dimensions has density greater than the face-centered cubic packing.
• Gauss: The first to prove anything about the Kepler conjecture was Gauss.
• Thue - Down to 2 dimensions: A typical mathematical gambit is to gain insight into a hard problem by first tackling a simplified version. Thue solved the 2-dimensional analog of Kepler's problem in 1890. The two-dimensional version of Kepler's conjecture asks for the densest packing of unit disks in the plane.
• Back to 3 dimensions - Voronoi: Implicit in the proof of Thue's theorem is the idea of a Voronoi cell. Let t>1 be a real number. We define a cluster of balls to be a set of nonoverlapping balls around a fixed ball at the origin, with the property that the ball centers have distance at most 2t from the origin. A cluster of n balls is determined by the 3n coordinates of the centers. The ball at the center of the cluster is contained in a Voronoi cell. By definition, the Voronoi cell is the set of all points that lie closer to the origin than to any other ball center in the cluster. Voronoi cells give a bound on the density of sphere packings.
• Fejes Toth: The bound given by Voronoi cells is not sufficient, and a correction term must be introduced. Toth, in 1953, was the first to suggest a potential correction term. It remains unclear if his proposed correction terms have all the requisite properties. But it became clear that Kepler's problem could be solved via an optimization problem in a finite number of variables - and this optimization might be performed by computer.
• Combinatorial Structures: The space of clusters is so complicated that it is not possible to minimize the correction term directly. Instead, to each cluster is assigned a planar graph that identifies the most prominent geometrical features of the cluster. There are about 5000 planar graphs that need to be dealt with.
• 5000 Cases: In most cases bounds derived just from the graph were sufficient. But in some cases, the crude combinatorial bounds were not good enough. One case turned out to be far more intricate than the others.
• Linear Programs: For each case, there is a large-scale nonlinear optimization problem to be solved. Nonlinear optimization problems of this size can be hopelessly difficult to solve rigorously. Fortunately, the large-scale structure of the problem is linear and can be solved by linear programming methods.
A related problem, of even greater antiquity, is: What is the most efficient partition of the plane into equal areas? The honeycomb conjecture asserts that the answer is the regular hexagonal honeycomb. After completing the proof of the Kepler conjecture, Thomas Hales turned his attention to the honeycomb conjecture. Somewhat to his surprise he obtained a (relatively) short solution without resort to computers.
Theorem (Honeycomb conjecture). Any partition of the plane into regions of equal area has perimeter at least that of the regular hexagonal honeycomb tiling.
• Pappus: In 36 BC, in a book on agriculture, the Roman scholar Marcus Varro presented the honeycomb conjecture not as a conjecture but as a proven fact. But the `proof', as recorded by Pappus, is incomplete.
• Kelvin and the Millennial List: The 3-dimensional version of the efficient partition problem is: How can space be divided into cavities of equal volume so as to minimize the surface area of the boundary? This is Hales's submission for a millennial `Hilbert problem list'.
• References
{"url":"http://www.math.pitt.edu/articles/cannonOverview.html","timestamp":"2014-04-20T11:12:46Z","content_type":null,"content_length":"11509","record_id":"<urn:uuid:bb99d131-f9fb-4fef-a4e9-ffa4648c15a1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
How do you find the determinant of a 4X4 matrix?
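The page preserves only the question. Purely as an illustrative sketch that is not part of the original thread, one standard way to compute a 4x4 determinant is Laplace (cofactor) expansion along the first row, which reduces it to four 3x3 determinants:

```python
def det(m):
    """Determinant by Laplace (cofactor) expansion along the first row.
    Fine for small matrices such as 4x4; O(n!) in general."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # minor: remove row 0 and column j
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

a = [[1, 0, 2, -1],
     [3, 0, 0, 5],
     [2, 1, 4, -3],
     [1, 0, 5, 0]]
print(det(a))  # 30
```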
{"url":"http://openstudy.com/updates/50c06162e4b0689d52fe1b67","timestamp":"2014-04-20T21:14:14Z","content_type":null,"content_length":"52628","record_id":"<urn:uuid:f82d89fc-e604-4384-9a58-ad0ad593c40a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00514-ip-10-147-4-33.ec2.internal.warc.gz"}
logic gate
A mechanical, optical, or electronic system that performs a logical operation on an input signal.
logic gate (plural logic gates)
1. a physical device, typically electronic, which computes a Boolean logical output (0 or 1) from Boolean input or inputs according to the rules of some logical operator. There are six non-trivial, symmetric, two-input, Boolean logic gates: AND, OR, XOR, NAND, NOR and XNOR.
logic gate - Computer Definition
A collection of transistors and resistors that implement Boolean logic operations in a circuit. Transistors make up logic gates. Logic gates make up circuits. Circuits make up electronic systems. The truth tables and symbols follow. See Boolean logic.
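The truth tables mentioned in the last definition are not reproduced on this page. Purely as an illustrative sketch (not part of the original entries), the following snippet prints them for the six gates named above:

```python
# Truth tables for the six non-trivial symmetric two-input gates listed above.
GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XNOR": lambda a, b: 1 - (a ^ b),
}

for name, fn in GATES.items():
    rows = [f"{a}{b}->{fn(a, b)}" for a in (0, 1) for b in (0, 1)]
    print(f"{name:>4}: " + "  ".join(rows))
```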
{"url":"http://www.yourdictionary.com/logic-gate","timestamp":"2014-04-16T22:04:55Z","content_type":null,"content_length":"42257","record_id":"<urn:uuid:2fbc40e0-d037-4805-b146-5c982493e574>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Principles of Hydraulics
1-3. Flow
Flow is the movement of a hydraulic fluid caused by a difference in the pressure at two points. In a hydraulic system, flow is usually produced by the action of a hydraulic pump, a device used to continuously push on a hydraulic fluid. The two ways of measuring flow are velocity and flow rate.
a. Velocity. Velocity is the average speed at which a fluid's particles move past a given point, measured in feet per second (fps). Velocity is an important consideration in sizing the hydraulic lines that carry a fluid between the components.
b. Flow Rate. Flow rate is the measure of how much volume of a liquid passes a point in a given time, measured in gallons per minute (GPM). Flow rate determines the speed at which a load moves and, therefore, is important when considering power.
1-4. Energy, Work, and Power
Energy is the ability to do work and is expressed in foot-pounds (ft lb). The three forms of energy are potential, kinetic, and heat. Work measures accomplishments; it requires motion to make a force do work. Power is the rate of doing work or the rate of energy transfer.
a. Potential Energy. Potential energy is energy due to position. An object has potential energy in proportion to its vertical distance above the earth's surface. For example, water held back by a dam represents potential energy because until it is released, the water does no work. In hydraulics, potential energy is a static factor. When force is applied to a confined liquid, as shown in Figure 1-4, potential energy is present because of the static pressure of the liquid. Potential energy of a moving liquid can be reduced by the heat energy released. Potential energy can also be reduced in a moving liquid when it transforms into kinetic energy. A moving liquid can, therefore, perform work as a result of its static pressure and its momentum.
b. Kinetic Energy. Kinetic energy is the energy a body possesses because of its motion. The greater the speed, the greater the kinetic energy. When water is released from a dam, it rushes out as a high-velocity jet, representing energy of motion: kinetic energy. The amount of kinetic energy in a moving liquid is directly proportional to the square of its velocity. Pressure caused by kinetic energy may be called velocity pressure.
c. Heat Energy and Friction. Heat energy is the energy a body possesses because of its heat. Kinetic energy and heat energy are dynamic factors. Pascal's Law dealt with static pressure and did not include the friction factor. Friction is the resistance to relative motion between two bodies. When liquid flows in a hydraulic circuit, friction produces heat. This causes some of the kinetic energy to be lost in the form of heat energy. Although friction cannot be eliminated entirely, it can be controlled to some extent. The three main causes of excessive friction in hydraulic systems are:
☆ Extremely long lines.
☆ Numerous bends and fittings or improper bends.
☆ Excessive velocity from using undersized lines.
In a liquid flowing through straight piping at a low speed, the particles of the liquid move in straight lines parallel to the flow direction. Heat loss from friction is minimal. This kind of flow is called laminar flow. Figure 1-8, diagram A, shows laminar flow. If the speed increases beyond a given point, turbulent flow develops. Figure 1-8, diagram B, shows turbulent flow. Figure 1-9 shows the difference in head because of pressure drop due to friction. Point B shows no flow resistance (free-flow condition); the pressure at point B is zero. The pressure at point C is at its maximum because of the head at point A. As the liquid flows from point C to point B, friction causes a pressure drop from maximum pressure to zero pressure. This is reflected in a successively decreased head at points D, E, and F.
d. Relationship Between Velocity and Pressure. Figure 1-10 explains Bernoulli's Principle, which states that the static pressure of a moving liquid varies inversely with its velocity; that is, as velocity increases, static pressure decreases. In the figure, the force on piston X is sufficient to create a pressure of 100 psi on chamber A. As piston X moves down, the liquid that is forced out of chamber A must pass through passage C to reach chamber B. The velocity increases as it passes through C because the same quantity of liquid must pass through a narrower area in the same time. Some of the 100 psi static pressure in chamber A is converted into velocity energy in passage C so that a pressure gauge at this point registers 90 psi. As the liquid passes through C and reaches chamber B, velocity decreases to its former rate, as indicated by the static pressure reading of 100 psi, and some of the kinetic energy is converted to potential energy. Figure 1-11 shows the combined effects of friction and velocity changes. As in Figure 1-9, pressure drops from maximum at C to zero at B. At D, velocity is increased, so the pressure head decreases. At E, the head increases as most of the kinetic energy is given up to pressure energy because velocity is decreased. At F, the head drops as velocity increases.
e. Work. To do work in a hydraulic system, flow must be present. Work, therefore, exerts a force over a definite distance. It is a measure of force multiplied by distance.
f. Power. The standard unit of power is horsepower (hp). One hp is equal to 550 ft lb of work every second. Use the following equation to find power:
P = f x d/t
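As a quick numerical illustration of the power equation above (the load, stroke, and time are made-up values, not taken from the manual), horsepower follows directly from the 550 ft-lb-per-second figure given:

```python
FT_LB_PER_SEC_PER_HP = 550.0   # 1 hp = 550 ft*lb of work per second (as stated above)

def power_hp(force_lb, distance_ft, time_s):
    """P = f * d / t, converted from ft*lb/s to horsepower."""
    work_ft_lb = force_lb * distance_ft      # work = force x distance
    power_ft_lb_s = work_ft_lb / time_s      # rate of doing work
    return power_ft_lb_s / FT_LB_PER_SEC_PER_HP

# Hypothetical example: a cylinder pushes a 2,750-lb load 10 ft in 5 s
print(power_hp(2750, 10, 5))   # 10.0 hp
```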
{"url":"http://edgeroamer.com/sweethaven/mechanics/hydraulics01/default.asp?iNum=0103","timestamp":"2014-04-16T08:00:20Z","content_type":null,"content_length":"11873","record_id":"<urn:uuid:8e5aaf3f-2299-4168-aa62-8fd689128ed0>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Rearranging a series to prove a limit. I was reading a book which is a collection of interesting mathematical journal articles. Within the book there was an article which discussed alternating series. In particular, at one point in the article it proves that the series [tex]\sum (-1)^n \frac{\sqrt(2)^{(-1)^{n}}}{n}[/tex] diverges. To be a bit more clear the series is To directly quote the article: "Since [tex]lim (-1)^n \frac{\sqrt(2)^{(-1)^{n}}}{n} = 0 [/tex] we may group these terms in pairs, and number them 2n+1, 2n+2 in pairs, with n=0,1,2..., as follows: [tex]S=[\frac{\sqrt{2}}{1}-\frac{\sqrt{2}}{4}] + [\frac{\sqrt{2}}{3}-\frac{\sqrt{2}}{8}] + ...+ [\frac{\sqrt{2}}{2n+1}-\frac{\sqrt{2}}{4n+4}][/tex] [tex]S=\sum [\frac{\sqrt{2}}{2n+1}-\frac{\sqrt{2}}{4n+4}][/tex] [tex]=\sum \frac{\sqrt{2}}{4} \frac{1}{n+1} \frac{2n+3}{2n+1}[/tex] where the latter part is clearly divergent. " Would that be considered a rearrangement of the series? I'm a bit confused on whether you can group terms together or move them around. I know series which are conditionally convergent can be rearranged to any sum and it follows that you cannot prove a limit with a rearrangement. To illustrate the source of my confusion I look at the classic series in which 1+(-1+1)+(-1+1)+... gives 1 and (1+(-1))+(1+(-1))+(1+(-1))+... gives 0 so grouping does seem to affect the result and this journal appears to be doing the same thing. Also, I'm not sure what the significance of the first statement (the limit -> 0) is. It doesn't seem to be used.
{"url":"http://www.physicsforums.com/showthread.php?p=3145399","timestamp":"2014-04-19T04:43:13Z","content_type":null,"content_length":"34138","record_id":"<urn:uuid:ff0a2a5e-445a-4a43-bd7c-ec3de09505c1>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Fractional Fourier Transform for Ultrasonic Chirplet Signal Decomposition Advances in Acoustics and Vibration Volume 2012 (2012), Article ID 480473, 13 pages Research Article Fractional Fourier Transform for Ultrasonic Chirplet Signal Decomposition ^1Department of Electrical and Computer Engineering, Bradley University, Peoria, IL 61625, USA ^2Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616, USA Received 7 April 2012; Accepted 31 May 2012 Academic Editor: Mario Kupnik Copyright © 2012 Yufeng Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A fractional fourier transform (FrFT) based chirplet signal decomposition (FrFT-CSD) algorithm is proposed to analyze ultrasonic signals for NDE applications. Particularly, this method is utilized to isolate dominant chirplet echoes for successive steps in signal decomposition and parameter estimation. FrFT rotates the signal with an optimal transform order. The search of optimal transform order is conducted by determining the highest kurtosis value of the signal in the transformed domain. A simulation study reveals the relationship among the kurtosis, the transform order of FrFT, and the chirp rate parameter in the simulated ultrasonic echoes. Benchmark and ultrasonic experimental data are used to evaluate the FrFT-CSD algorithm. Signal processing results show that FrFT-CSD not only reconstructs signal successfully, but also characterizes echoes and estimates echo parameters accurately. This study has a broad range of applications of importance in signal detection, estimation, and pattern recognition. 1. Introduction In ultrasonic imaging applications, the ultrasonic signal always contains many interfering echoes due to the complex physical properties of the propagation path. The pattern of the signal is greatly dependent on irregular boundaries, and the size and random orientation of material microstructures. For material characterization and flaw detection applications, it becomes a challenging problem to unravel the desired information using direct measurement and conventional signal processing techniques. Consequently, signal processing methods capable of analyzing the nonstationary behavior of ultrasonic signals are highly desirable for signal analysis and characterization of propagation path. Various methods such as short-time Fourier transform, Wigner-Ville distribution, discrete wavelet transform, discrete cosine transform, and chirplet transform have been utilized to examine signals in joint time-frequency domain and to reveal how frequency changes with time in those signals [1–8]. Nevertheless, it is still challenging to adaptively analyze a broad range of ultrasonic signal: narrowband or broadband; symmetric or skewed; nondispersive or dispersive. Recently, there has been a growing attention to fractional Fourier transform (FrFT), a generalized Fourier transform with an additional parameter (i.e., transform order). It was first introduced in 1980, and subsequently closed-form FrFT was studied [8–11] for time-frequency analysis. FrFT is a power signal analysis tool. Consequently, it has been applied to different applications such as high-resolution SAR imaging, sonar signal processing, blind source separation, and beamforming in medical imaging [12–15]. 
Short term FrFT, component-optimized FrFT, and locally optimized FrFT have also been proposed for signal decomposition [16–18]. In practice, signal decomposition problem is essentially an optimization problem under different design criteria. The optimization can be achieved either locally or globally, depending on the complexity of the signal, accuracy of estimation, and affordability of computational load. Consequently, the results of signal decomposition are not unique due to different optimization strategies and signal models. For ultrasonic signal analysis, local responses from microstructure scattering and structural discontinuities are more of importance for detection and material characterization. Chirplet covers a board range of signals representing frequency-dependent scattering, attenuation and dispersion effects in ultrasonic testing applications. This study shows that FrFT has a unique property for processing chirp-type echoes. Therefore, in this paper, the application of fractional Fourier transform for ultrasonic applications has been explored. In particular, FrFT is introduced as a transformation tool for ultrasonic signal decomposition. FrFT is employed to estimate an optimal transform order, which corresponds to highest kurtosis value in the transform domain. The searching process of optimal transform order is based on a segmented signal for a local optimization. Then, the FrFT with the optimal transform order is applied to the entire signal in order to isolate the dominant echo for parameter estimation. This echo isolation is applied iteratively to ultrasonic signal until a predefined stop criterion such as signal reconstruction error or the number of iterations is satisfied. Furthermore, each decomposed component is modeled using six-parameter chirplet echoes for a quantitative analysis of ultrasonic signals. A bat signal is utilized as a benchmark to demonstrate the effectiveness of fractional Fourier transform chirplet signal decomposition (FrFT-CSD). To further evaluate the performance of FrFT-CSD, ultrasonic experimental data from different types of flaws such as flat bottom hole, side-drilled hole and disk-type cracks are evaluated using FrFT-CSD. The outline of the paper is as follows. Section 2 reviews the properties of FrFT and the process of FrFT-based signal decomposition. Section 3 addresses how kurtosis, transformation order and chirp rate are related using simulated data. Section 4 presents the steps involved in FrFT-CSD algorithm. Section 5 performs a simulation study of FrFT-CSD and parameter estimation for complex ultrasonic signals. Sections 5 and 6 show the results of a benchmark data (i.e., bat signal); the echo estimation results of benchmark data from side-drilled hole, and disk-shape cracks; the results of experimental data with high microstructure scattering echoes. 2. FrFT of Ultrasonic Chirp Echo FrFT of a signal, , is given by where denotes transform order of FrFT and denotes the variable in transform domain. It has been shown that if the transform order, , changes from 0 to 4, (i.e., the rotation angle, , changes from 0 to ), rotates the signal, , and projects it onto the line of angle, , in time-frequency domain [19]. This property contributes to FrFT-based decomposition algorithm when applied to ultrasonic signals. For ultrasonic applications, ultrasonic chirp echo is a type of signal often encountered in ultrasonic backscattered signals accounting for narrowband, broadband, and dispersive echoes. 
It can be modeled as [8]: where denotes the parameter vector, is the time-of-arrival, is the center frequency, is the amplitude, is the bandwidth factor, is the chirp-rate, and is the phase. Hence, for the ultrasonic Gaussian chirp echo, , the magnitude of given by (1) can be expressed as where the integration part can be written as with , , and . From (3), it can be seen that, for a linear frequency modulation (LFM) signal (i.e., ), if the transformation order, , satisfies the following equation: then the compacts to a delta function. This means that fractional Fourier transform can be used to compress the duration and compact the energy of ultrasonic chirp echo with an optimal transform order. Optimal transform order can be determined using kurtosis. The energy compaction is a desirable property for ultrasonic signal decomposition, which allows using a window in FrFT domain for isolation of an echo of interest. 3. Kurtosis and FrFT Order Kurtosis is commonly used in statistics to evaluate the degree of peakedness for a distribution [20, 21]. It is defined as the ratio of 4th-order central moment and square of 2nd-order central moment: where denotes 4th-order central moment and denotes 2nd-order central moment. A signal with high kurtosis means that it has a distinct peak around the mean. In the literatures of FrFT [18, 19, 22], kurtosis is typically used as a metric to search the optimal transform order of FrFT. Different transform order directs the degree of signal rotation caused by FrFT, and this rotation affects the extent of energy compaction of the transformed signal. Figure 1(a) shows a chirp signal with the parameters,. For this example, the bandwidth factor equals to zero (see (2)), and according to (5), the optimal transform order can be calculated as As shown in Figure 1(b), this optimal order can also be determined by direct search for the maximum amplitude of FrFT using different transform orders according to (3). The transform order corresponding to the maximum FrFT among all transform orders matches the theoretical result given in (7). For ultrasonic applications, the chirp echo is band-limited. For example, Figure 2(a) shows a band-limited single chirp echo with the parameters . Chirplet is a model widely used in ultrasonic NDE applications. Figure 2 illustrates the FrFT of a chirplet using different transform orders. In particular, the transform order from (7) (i.e., −0.013) is used for a comparison. Our simulation shows that the optimal transform order for the band-limited echo is different compared with the one for the LFM echo due to the impact of bandwidth factor in chirp echoes. One can conclude that the compactness in the fractional Fourier transform of an ultrasonic echo can be used to track the optimal transform order. It is also important to point out that the optimal transform order is highly sensitive to a small change in the order. Therefore, using kurtosis becomes a practical approach to obtain the optimal FrFT order for ultrasonic signal analysis. 4. FrFT Chirplet Signal Decomposition Algorithm The objective of FrFT-CSD is to decompose a highly convoluted ultrasonic signal, , into a series of signal components: where denotes the th fractional chirplet component and denotes the residue of the decomposition process. 
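As a rough illustration of the kurtosis-guided search for the optimal transform order described in Section 3, and used in steps 3-5 of the algorithm listed next, the following sketch assumes a discrete FrFT routine frft(x, a) is available and applies the kurtosis to the magnitude of the transformed signal; neither assumption is spelled out in this excerpt of the paper.

```python
import numpy as np

def kurtosis(x):
    """Ratio of the 4th central moment to the squared 2nd central moment."""
    x = np.abs(x) if np.iscomplexobj(x) else np.asarray(x, dtype=float)
    mu = x.mean()
    m2 = np.mean((x - mu) ** 2)
    m4 = np.mean((x - mu) ** 4)
    return m4 / m2 ** 2

def optimal_order(x, frft, orders=np.arange(-1.0, 1.0, 0.005)):
    """Brute-force search (step size 0.005, as in the paper) for the transform
    order whose FrFT is most 'peaked', i.e. has the largest kurtosis.
    `frft(x, a)` is assumed to be an available discrete FrFT routine."""
    best_a, best_k = None, -np.inf
    for a in orders:
        k = kurtosis(frft(x, a))
        if k > best_k:
            best_a, best_k = a, k
    return best_a, best_k
```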
The steps involved in the iterative estimation of an experimental ultrasonic signal are:
(1) initialize the iteration index;
(2) obtain a windowed signal after applying a window in the time domain;
(3) determine the FrFT of the windowed signal for different orders;
(4) calculate the kurtosis of the transformed signal for different orders;
(5) estimate the optimal transform order, which corresponds to the FrFT transform order where the kurtosis has the maximum value. In our study, a brute-force search is used to estimate the optimal transform order. The step size of searching is set to 0.005. The computation load of calculating the kurtosis and searching for the optimal order is significant. Some researchers used the maximum peak in the transform domain as an alternative metric [17]. For ultrasonic signal decomposition, the optimal transform order is related to the chirp rate of the signal. The search range of transform order can be reduced by considering prior knowledge of the ultrasonic transducer impulse response;
(6) apply FrFT with the estimated order to the signal;
(7) obtain a windowed signal from the transformed signal;
(8) apply the transformation order to the windowed signal, then reconstruct the component by estimating the parameters of the decomposed echo: the parameter estimation process here becomes a single-echo estimation problem. A Gauss-Newton algorithm used in [23–25] is adopted in FrFT-CSD;
(9) obtain the residual signal by subtracting the estimated echo from the signal, and use the residual signal for the next echo estimation;
(10) calculate the energy of the residual signal and check convergence: if the predefined convergence condition is satisfied, STOP; otherwise, go to step 2.
For further clarification, the flowchart of the FrFT-CSD algorithm is shown in Figure 3. It is important to mention that two windowing steps are used in the FrFT-CSD algorithm. One window is used in step 2 in order to isolate a dominant echo in the time domain. It is inevitable to have an incomplete echo due to the windowing process. A good strategy of choosing this window is to keep as much of the echo information as possible. The other window is applied in step 7. For ultrasonic chirp echoes, the energy compactness of FrFT helps to reduce the window size centered on a desired peak in the transform domain. As shown in Figure 2, a chirplet is compressed to a great extent after the transform. An automatic windowing process is used to detect the valleys of the dominant echo. In the cases of heavily overlapping echoes and high noise levels (i.e., the cases of poor signal-to-noise ratio), the performance of the windowing method may be compromised. In this situation, a window with a predetermined size can be used to isolate desirable peaks.
5. Simulation and Benchmark Study of FrFT-CSD
To demonstrate the advantages of FrFT signal decomposition in ultrasonic signal processing, ultrasonic chirp echoes with three different overlapping scenarios are simulated, where the chirp rate models the dispersive effect in ultrasonic testing of materials. Two slightly overlapped (about 20% overlapped) echoes are simulated using a sampling frequency of 100 MHz. The parameters of these two echoes are given in Table 1. Figure 4 shows the simulated signal (in blue) superimposed with the estimated echoes (in red). The estimated parameters perfectly match the parameters of the simulated signal, as compared in Table 1. One can conclude that the FrFT-CSD not only decomposes the signal efficiently, but also leads to precise parameter estimation results. A moderately overlapped (about 50% overlapped) simulated signal consisting of two echoes is shown in Figure 5.
For this simulated signal, Table 2 shows that the estimated parameters are accurate within a few percents. Finally, Figure 6 and Table 3 show the simulated and estimated two heavily overlapped (about 70% overlapped) echoes. The decomposition results (Figure 6) and estimated parameters (Table 3) confirm the robustness and effectiveness of FrFT-CSD in echo estimation for ultrasonic signal analysis. An experimental bat data is commonly used as a benchmark signal in time-frequency analysis. It is a 400-sample data digitized 2.5μs echolocation pulse emitted by a large brown bat with 7μs sampling period. To evaluate the performance of FrFT-based signal decomposition algorithm, the bat data is utilized to demonstrate the effectiveness of algorithm. Through the processing of FrFT-CSD, there are four main chirp-type signal components identified in the bat signal. The decomposed signals and their Wigner-Ville distribution (WVD) are shown in Figure 7. The reconstructed signal and its superimposed WVD are shown in Figure 8. The results in Figures 7 and 8 are consistent with the analysis results from other techniques in time-frequency analysis [ 26]. The FrFT-based signal decomposition algorithm not only reveals that the bat signal mainly contains four chirp stripes in time-frequency domain, but provides a high-resolution time-frequency 6. Experimental Studies For experimental studies, two aluminum blocks with different size of side-drilled hole (SDH) are used [27]. One is with 1mm diameter, another is 4mm diameter. The experimental setting is shown in Figure 9. It can be seen that the water path is 50.8mm and the depth of SDH is 25.4mm (i.e., from the water-aluminum interface to the center of SDH). To provide a rigorous test, two 5MHz transducers are used to acquire ultrasonic data at normal or oblique refracted angles, . One is planar transducer. Another is spherically focused transducer with 172.9mm focal length. To verify the experiment setup, the FrFT-CSD is utilized to analyze the ultrasonic data from the front surface of the specimen. The ultrasonic data superimposed with the estimated chirplet is shown in Figure 10. It can be seen that the estimated time-of-arrival (TOA) of the front surface echo is 68.72μs. In addition, from the experimental setting, the TOA can be calculated as where denotes the water distance, and in the case of incidence angle 0 this distance is 50.8mm. The round trip of ultrasound is twice of the water distance, . The term denotes the velocity of ultrasound in medium: mm/μs for water. From (15), the theoretical value of TOA is 68.47μs. The estimated TOA is in agreement (within 0.4%) with the theoretical TOA. Furthermore, the parameters of chirplet are strongly related to the crack size, location, and orientation. For example, the amplitude is a good indicator of crack size. In Tables 4 and 5, the estimated amplitude from a 4mm SDH is roughly twice of the estimated amplitude from a 1mm SDH. In NDE applications, the estimated amplitude of a known-size crack could be used as a reference to estimate the size of crack. As shown in (15) and (16), the estimated TOA can be used to approximate the location of crack. In addition, different types of cracks could have different frequency variations. From [8, 26], the response of crack usually shows a downshift in the frequency compared with the responses of grains inside the material. 
These results indicate that the estimated parameters from FrFT-CSD algorithm track with reasonable accuracy the physical parameters of experimental setup. Moreover, the FrFT-CSD algorithm provides more detailed information describing the reflected echoes such as phase, bandwidth factor and chirp rate that can be used for further analysis. Another experiment is set up to evaluate disk-shaped cracks in a diffusion-bonded titanium alloy sample [28]. The ultrasonic data of these synthetic cracks are obtained at normal or oblique refracted angles, using a 10MHz planar transducer. The diameter of the transducer is 6.35mm. The water depth is 25.4mm. The surface of diffusion bond is 13mm below the front surface of water/titanium alloy interface. Two different sizes of cracks are made with the diameter 0.762mm (i.e., crack D) and the diameter 1.905mm (i.e., crack C). For crack C, the responded ultrasonic data is recorded from the two edges of the crack, which are marked as point a and point b. The thickness of both disk-shaped cracks is 0.089mm. Figure 11 shows the experiment setup for the alloy sample [28]. From Figure 11, the TOA of crack at refracted angle is calculated as follows: where TOA[ref] denotes the estimated TOA of reference signal (i.e., 34.58μs from Tables 6 and 7). The round trip of ultrasound inside titanium from the front surface to the diffusion bound is , where denotes the depth of diffusion bond, which is 13mm; denotes the refracted angle and denotes the velocity of ultrasound in medium: mm/μs for titanium. Therefore, at the angle 0° is 38.777μs. at the angle 30° is 39.425μs. At the angle 45°, is 40.514μs. From Tables 6 and 7, it can be seen that the estimated at angle 0° is 38.776μs and 38.754μs. Taking the thickness of the cracks (0.089mm) into consideration, it can be asserted that the estimated TOAs at incident angle 0° are in good agreement with experimental measurements. Experimental signals of crack C and crack D (with normalized amplitudes) superimposed with the estimated chirplets (depicted in dashed line and red color) are shown in Figures 12 and 13. It also can be seen that the front surface reference signal and the experimental data obtained at angle 0° are well reconstructed by the FrFT-CSD algorithm (see Figures 12(a), 12(b), 13(a) and 13(b)). Nevertheless, with the increase of refracted angle, more chirplets needed to decompose the experimental data (see the refracted angle 30 and 45 degree cases). In addition, Tables 6 and 7 show that the signal energy is more evenly distributed to estimated chirplets in the high refracted angle cases. This spreading of signal might be caused by geometrical effect of the beam profile of the planner transducer and corners/edges of disk-shaped crack. To further evaluate the performance of FrFT-based signal decomposition algorithm, experimental ultrasonic microstructure scattering signals are utilized to demonstrate the effectiveness of the algorithm. The experimental signal is acquired from a steel block with an embedded defect using a 5MHz transducer and sampling rate of 100MHz. The acquired experimental data superimposed with the reconstructed signal consisting of 8 dominant chirplets are shown in Figure 14(a). The estimated parameters of dominant chirplets are listed in Table 8. It can be seen that the 8 dominant chirplets not only provide a sparse representation of experimental data, but successfully detect the embedded defect. 
To improve the accuracy of signal reconstruction, FrFT-CSD could be used iteratively to decompose the signal further. A reconstructed signal using 23 chirplets is shown in Figure 14(b). The comparison between the experimental signal and the reconstructed signals clearly demonstrates that the FrFT-CSD is highly effective in ultrasonic signal decomposition.
7. Conclusion
In this paper, fractional Fourier transform is studied for ultrasonic signal processing. The simulation study reveals the link among kurtosis, the transform order, and the parameters of each decomposed component. Benchmark and experimental data sets are utilized to test the FrFT-based chirplet signal decomposition algorithm. Signal decomposition and parameter estimation results show that the fractional Fourier transform can successfully assist the signal decomposition algorithm by identifying the dominant echo in successive estimation iterations. Parameter estimation is further performed based on the echo isolation. The FrFT-CSD algorithm could have a broad range of applications in signal analysis including target detection and pattern recognition.
The authors wish to thank Curtis Condon, Ken White, and Al Feng of the Beckman Institute of the University of Illinois for the bat data and for permission to use it in the study.
1. S. Mallat, A Wavelet Tour of Signal Processing: The Sparse Way, Academic Press, 2008.
2. I. Daubechies, "The wavelet transform, time-frequency localization and signal analysis," IEEE Transactions on Information Theory, vol. 36, no. 5, pp. 961–1005, 1990.
3. S. Mann and S. Haykin, "The chirplet transform: physical considerations," IEEE Transactions on Signal Processing, vol. 43, no. 11, pp. 2745–2761, 1995.
4. G. Cardoso and J. Saniie, "Ultrasonic data compression via parameter estimation," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 52, no. 2, pp. 313–325, 2005.
5. R. Tao, Y. L. Li, and Y. Wang, "Short-time fractional Fourier transform and its applications," IEEE Transactions on Signal Processing, vol. 58, no. 5, pp. 2568–2580, 2010.
6. S. Zhang, M. Xing, R. Guo, L. Zhang, and Z. Bao, "Interference suppression algorithm for SAR based on time frequency domain," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 10, pp. 3765–3779, 2011.
7. E. Oruklu and J. Saniie, "Ultrasonic flaw detection using discrete wavelet transform for NDE applications," in Proceedings of IEEE Ultrasonics Symposium, pp. 1054–1057, August 2004.
8. Y. Lu, R. Demirli, G. Cardoso, and J. Saniie, "A successive parameter estimation algorithm for chirplet signal decomposition," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 53, no. 11, pp. 2121–2131, 2006.
9. S. C. Pei and J. J. Ding, "Closed-form discrete fractional and affine Fourier transforms," IEEE Transactions on Signal Processing, vol. 48, no. 5, pp. 1338–1353, 2000.
10. L. B. Almeida, "Fractional Fourier transform and time-frequency representations," IEEE Transactions on Signal Processing, vol. 42, no. 11, pp. 3084–3091, 1994.
11. C. Candan, M. Alper Kutay, and H. M. Ozaktas, "The discrete fractional Fourier transform," IEEE Transactions on Signal Processing, vol. 48, no. 5, pp. 1329–1337, 2000.
12. A. S. Amein and J. J. Soraghan, "The fractional Fourier transform and its application to high resolution SAR imaging," in Proceedings of IEEE International Geoscience and Remote Sensing Symposium (IGARSS '07), pp. 5174–5177, June 2007.
13. M. Barbu, E. J. Kaminsky, and R. E. Trahan, "Fractional Fourier transform for sonar signal processing," in Proceedings of MTS/IEEE OCEANS, vol. 2, pp. 1630–1635, September 2005.
14. I. S. Yetik and A. Nehorai, "Beamforming using the fractional Fourier transform," IEEE Transactions on Signal Processing, vol. 51, no. 6, pp. 1663–1668, 2003.
15. S. Karako-Eilon, A. Yeredor, and D. Mendlovic, "Blind source separation based on the fractional Fourier transform," in Proceedings of the 4th International Symposium on Independent Component Analysis and Blind Signal Separation, pp. 615–620, 2003.
16. A. T. Catherall and D. P. Williams, "High resolution spectrograms using a component optimized short-term fractional Fourier transform," Signal Processing, vol. 90, no. 5, pp. 1591–1596, 2010.
17. M. Bennett, S. McLaughlin, T. Anderson, and N. McDicken, "Filtering of chirped ultrasound echo signals with the fractional Fourier transform," in Proceedings of IEEE Ultrasonics Symposium, pp. 2036–2040, August 2004.
18. L. Stanković, T. Alieva, and M. J. Bastiaans, "Time-frequency signal analysis based on the windowed fractional Fourier transform," Signal Processing, vol. 83, no. 11, pp. 2459–2468, 2003.
19. Y. Lu, A. Kasaeifard, E. Oruklu, and J. Saniie, "Performance evaluation of fractional Fourier transform (FrFT) for time-frequency analysis of ultrasonic signals in NDE applications," in Proceedings of IEEE International Ultrasonics Symposium (IUS '10), pp. 2028–2031, October 2010.
20. F. Millioz and N. Martin, "Circularity of the STFT and spectral kurtosis for time-frequency segmentation in Gaussian environment," IEEE Transactions on Signal Processing, vol. 59, no. 2, pp. 515–524, 2011.
21. R. Merletti, A. Gulisashvili, and L. R. Lo Conte, "Estimation of shape characteristics of surface muscle signal spectra from time domain data," IEEE Transactions on Biomedical Engineering, vol. 42, no. 8, pp. 769–776, 1995.
22. Y. Lu, E. Oruklu, and J. Saniie, "Analysis of fractional Fourier transform for ultrasonic NDE applications," in Proceedings of IEEE Ultrasonics Symposium, Orlando, Fla, USA, October 2011.
23. R. Demirli and J. Saniie, "Model-based estimation of ultrasonic echoes part I: analysis and algorithms," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 48, no. 3, pp. 787–802, 2001.
24. R. Demirli and J. Saniie, "Model-based estimation of ultrasonic echoes part II: nondestructive evaluation applications," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 48, no. 3, pp. 803–811, 2001.
25. R. Demirli and J. Saniie, "Model based time-frequency estimation of ultrasonic echoes for NDE applications," in Proceedings of IEEE Ultrasonics Symposium, pp. 785–788, October 2000.
26. Y. Lu, E. Oruklu, and J. Saniie, "Ultrasonic chirplet signal decomposition for defect evaluation and pattern recognition," in Proceedings of IEEE International Ultrasonics Symposium (IUS '09), Italy, September 2009.
27. Ultrasonic Benchmark Data, World Federation of NDE, 2004, http://www.wfndec.org/.
28. Ultrasonic Benchmark Data, World Federation of NDE, 2005, http://www.wfndec.org/.
{"url":"http://www.hindawi.com/journals/aav/2012/480473/","timestamp":"2014-04-18T03:04:17Z","content_type":null,"content_length":"233141","record_id":"<urn:uuid:678190eb-ceec-479c-ae2d-3eebf231ef48>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: METHODS AND SYSTEMS FOR PROCESSING A DIGITAL SIGNAL BY ESTIMATING SIGNAL ENERGY AND NOISE POWER DENSITY
Systems and methods are described for processing a digital signal. In one embodiment, the method comprises receiving time-aligned input samples of an input signal; computing at least one moment using the time-aligned input samples; determining at least one of signal energy per symbol and noise power spectral density based on the at least one moment; and adjusting an input signal level based on the at least one of the signal energy per symbol and the noise power spectral density.
1. A method of processing a digital signal, comprising: receiving time-aligned input samples of an input signal; computing at least one moment using the time-aligned input samples; determining at least one of signal energy per symbol and noise power spectral density based on the at least one moment; and adjusting an input signal level based on the at least one of the signal energy per symbol and the noise power spectral density.
2. The method of claim 1 wherein the computing the at least one moment comprises computing a fourth order moment using the time-aligned input samples.
3. The method of claim 2 wherein the determining comprises determining the signal energy per symbol based on the fourth order moment.
4. The method of claim 2 wherein the computing the at least one moment further comprises computing a second order moment using the time-aligned input samples.
5. The method of claim 4 wherein the determining comprises determining the noise power spectral density based on the fourth order moment and the second order moment.
6. The method of claim 5 wherein the determining further comprises determining the signal energy per symbol based on an equation of the fourth order moment and the second order moment.
7. The method of claim 6 wherein the determining comprises determining the noise power spectral density by subtracting the signal energy per symbol from the second order moment.
8. The method of claim 1 further comprising processing the input signal by a demodulator after the adjusting.
9. The method of claim 1 wherein the time-aligned input signals are not carrier coherent.
10. A system for processing a digital signal, comprising: a first module that computes at least one moment using time-aligned input samples; a second module that determines at least one of a signal energy per symbol and a noise power spectral density based on the at least one moment; and a third module that adjusts an input signal level based on the at least one of the signal energy per symbol and the noise power spectral density.
11. The system of claim 10 wherein the first module computes a fourth order moment using the time-aligned input samples.
12. The system of claim 11 wherein the second module determines the signal energy per symbol based on the fourth order moment.
13. The system of claim 11 wherein the first module computes a second order moment using the time-aligned input samples.
14. The system of claim 13 wherein the second module determines the noise power spectral density based on the fourth order moment and the second order moment.
15. The system of claim 14 wherein the second module determines the signal energy per symbol based on an equation of the fourth order moment and the second order moment.
16. The system of claim 15 wherein the second module determines the noise power spectral density by subtracting the signal energy per symbol from the second order moment.
17. The system of claim 10 further comprising a demodulator that processes the input signal after the input signal is adjusted.
18. The system of claim 10 wherein the time-aligned input signals are not carrier coherent.
19. A computer program product for processing a digital signal, comprising: a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising: receiving time-aligned input samples of an input signal; computing at least one moment using the time-aligned input samples; determining at least one of signal energy per symbol and noise power spectral density based on the at least one moment; and adjusting an input signal level based on the at least one of the signal energy per symbol and the noise power spectral density.

TECHNICAL FIELD
[0001] The present disclosure generally relates to methods and systems for signal processing, and more particularly relates to methods and systems for signal processing for parameters used in demodulation.

BACKGROUND
[0002] In systems that provide coherent demodulation of digital signals such as, but not limited to, Minimum Phase Shift Keying (MPSK), Minimum Shift Keying (MSK), and Gaussian Minimum Shift Keying (GMSK), the received signal must be properly aligned to a set of decision thresholds for optimum performance. Techniques to properly set a signal power as the input of a decision device typically require that both timing and carrier coherence are established prior to adjusting the signal level for optimum detection. However, many demodulator architectures allow for the establishment of signal timing prior to establishing carrier coherency. As a result, it is desirable to provide methods and systems for processing the time-aligned signals prior to establishing carrier coherency to optimize the performance of the demodulator. Other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and this background of the invention.

BRIEF SUMMARY
[0004] According to various exemplary embodiments, systems and methods are described for processing a digital signal. In one embodiment, the method comprises receiving time-aligned input samples of an input signal; computing at least one moment using the time-aligned input samples; determining at least one of signal energy per symbol and noise power spectral density based on the at least one moment; and adjusting an input signal level based on the at least one of the signal energy per symbol and the noise power spectral density. In another embodiment, a system is provided for processing a digital signal. The system includes a first module that computes at least one moment using time-aligned input samples. A second module determines at least one of signal energy per symbol and noise power spectral density based on the at least one moment. A third module adjusts an input signal level based on the at least one of the signal energy per symbol and the noise power spectral density. In yet another embodiment, a computer program product is provided for processing a digital signal.
The computer program product comprises a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising: receiving time-aligned input samples of an input signal; computing at least one moment using the time-aligned input samples; determining at least one of signal energy per symbol and noise power spectral density based on the at least one moment; and adjusting an input signal level based on the at least one of the signal energy per symbol and the noise power spectral density. Other embodiments, features and details are set forth in additional detail below.

BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The present invention will hereinafter be described in conjunction with the following figures, wherein like numerals denote like elements, and FIG. 1 is a functional block diagram illustrating a demodulator system including an estimation module in accordance with exemplary embodiments; FIG. 2 is a more detailed block diagram illustrating an estimation module of the demodulator system in accordance with exemplary embodiments; and FIG. 3 is a flowchart illustrating an estimation method that may be performed by the demodulator system in accordance with exemplary embodiments.

DETAILED DESCRIPTION
[0012] The following detailed description of the invention is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any theory presented in the preceding background or the following detailed description. As used herein, the term "module" refers to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in any combination, including, without limitation: an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.

Turning now to the figures and with initial reference to FIG. 1, a parameter estimation system 10 is shown to be associated with a demodulator system 11 in accordance with exemplary embodiments. As can be appreciated, the parameter estimation system 10 of the present disclosure is applicable to various signal processing systems such as, but not limited to, other demodulation systems, antenna processing systems, signal diversity combining systems, multiple input and multiple output antenna processing systems, open loop power control systems, and closed loop power control systems, and is not limited to the present example. For exemplary purposes, the disclosure will be discussed in the context of the demodulator system 11. In various embodiments, the demodulator system 11 includes a demodulator 14 having a data decision device 16 that processes an input signal 18a. The demodulator 14 is a digital signal processor such as, but not limited to, a MPSK demodulator, a MSK demodulator, and a GMSK demodulator. The parameter estimation system 10 includes an estimation module 12. The estimation module 12 processes an input signal 18b to estimate the signal energy per symbol (Es) 20 and the noise power spectral density (No) 22. The input signal 18b includes time-aligned samples of the input signal 18a. The samples are time aligned but have not established carrier coherency.
For example, the estimation module 12 estimates the Es 20 and the No 22 based on a relationship between characteristics of several moments and by a manipulation of these moments so that noise-only and signal-only moments become separable. By doing so, the estimation is made prior to carrier coherence tracking. The estimated signals Es 20 and No 22 are then used to adjust the input signal 18a to the data decision device 16 and/or the demodulator 14. In particular, the estimated signals 20, 22 can be used to set a signal level at the input to one or both of the data decision device 16 and the demodulator 14.

Referring now to FIG. 2, a more detailed block diagram illustrates exemplary embodiments of the estimation module 12. In various embodiments, the estimations performed by the estimation module 12 may be based upon fourth-order signal characteristics. For example, let the input 18b be r = s + n, where s is the signal component and n is the noise component. The model illustrates a processing method that separates signal characteristics from noise characteristics using time-based averages which are assumed to be equivalent to ensemble averages of the estimator.

In FIG. 2, the estimation module 12 computes a second moment of the received signal. For example, taking the magnitude square of the individual samples of the received signal and averaging over a predetermined number of the samples yields:

E{|r|^2} = m_2 = E{ss*} + E{nn*}.   (1)

The fourth moment of the received signal is then computed by expanding E{|r|^4} with r = s + n; the cross terms containing odd powers of the zero-mean noise vanish, leaving:

E{|r|^4} = E{|s|^4} + 4 E{|s|^2} E{|n|^2} + E{|n|^4}.   (2)

In moment notation, specifically noting which components are conjugated and which are not, this is:

E{|r|^4} = E{(ss*)^2} + 4 E{ss*} E{nn*} + E{(nn*)^2}.   (3)

To estimate the Es, consider:

2 (E{|r|^2})^2 - E{|r|^4} = 2 (E{ss*})^2 - E{(ss*)^2} + 2 (E{nn*})^2 - E{(nn*)^2}.   (4)

A zero-mean Gaussian noise process provides:

E{(nn*)^2} = 2 (E{nn*})^2.   (5)

Substituting equation 5 into equation 4 provides:

2 (E{|r|^2})^2 - E{|r|^4} = 2 (E{ss*})^2 - E{(ss*)^2}.   (6)

For constant envelope signals and operation on samples at the output of the matched filter:

ss* = σ_s^2, so that E{(ss*)^2} = (E{ss*})^2 = σ_s^4.   (7)

Thus, the normalized sampling provides:

sqrt( 2 (E{|r|^2})^2 - E{|r|^4} ) = sqrt( σ_s^4 ) = σ_s^2 = Es.   (8)

The Es 20 can then be subtracted from the second moment to determine the No 22; subtracting equation 8 from equation 1 and considering operation on one sample per symbol provides:

E{|r|^2} - sqrt( 2 (E{|r|^2})^2 - E{|r|^4} ) = σ_n^2 = No.   (9)

Referring now to FIG. 3, and with continued reference to FIGS. 1 and 2, a flowchart illustrates an estimation method that can be implemented by the parameter estimation system 10 of FIG. 1 in accordance with the present disclosure. As can be appreciated in light of the disclosure, the order of operation within the method shown in FIG. 3 is not limited to the sequential execution as illustrated in FIG. 3, but may be performed in one or more varying orders as applicable and in accordance with the present disclosure. As can further be appreciated, one or more steps of the method may be added or deleted without altering the spirit of the method. In various embodiments, the estimation method may be scheduled to run at various time intervals and/or may be run based on one or more predetermined events. In one example, the method may begin at 100. The time-aligned samples of the signal 18b are received at 110. The second moment of the received signal is computed using the time-aligned samples 18b and, for example, equation 1 above at 120.
The fourth moment of the received signal is computed using the time-aligned samples 18b and, for example, equation 3 above at 130. The Es 20 is set to an algebraic equation of the fourth and second moments of the received signal at 140. The No 22 is computed by subtracting the Es 20 from the second moment of the received signal, for example, using equation 9 at 150. The Es 20 and No 22 are then used to optimize the input signal 18a, for example, by using the values 20, 22 to adjust the input signal power to the demodulator 14 at 160. Thereafter, the method may end at 170.

As can be appreciated, one or more aspects of the present disclosure can be included in an article of manufacture (e.g., one or more computer program products) having, for instance, computer usable media. The media has embodied therein, for instance, computer readable program code means for providing and facilitating the capabilities of the present disclosure. The article of manufacture can be included as a part of a computer system or provided separately. Additionally, at least one program storage device readable by a machine, tangibly embodying at least one program of instructions executable by the machine to perform the capabilities of the present disclosure can be provided.

While at least one example embodiment has been presented in the foregoing detailed description of the invention, it should be appreciated that a vast number of equivalent variations exist. It should also be appreciated that the embodiments described above are only examples, and are not intended to limit the scope, applicability, or configuration of the invention in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing various examples of the invention. It should be understood that various changes may be made in the function and arrangement of elements described in an example embodiment without departing from the scope of the invention as set forth in the appended claims and their legal equivalents.
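As a rough illustration of the estimator described above (this sketch is not part of the patent text, and the function and variable names are purely illustrative), the moment relationships in equations (1)-(9) reduce to a few lines of NumPy for time-aligned complex baseband samples of a constant-envelope signal in zero-mean complex Gaussian noise:

import numpy as np

def estimate_es_n0(r):
    """Moment-based estimate of signal energy per symbol (Es) and noise
    power spectral density (N0) from time-aligned complex samples r,
    following equations (1)-(9): Es = sqrt(2*m2^2 - m4), N0 = m2 - Es."""
    r = np.asarray(r)
    m2 = np.mean(np.abs(r) ** 2)                 # second moment, eq. (1)
    m4 = np.mean(np.abs(r) ** 4)                 # fourth moment, eq. (2)/(3)
    es = np.sqrt(max(2.0 * m2 ** 2 - m4, 0.0))   # eq. (8); clipped so a noise-only input stays real
    n0 = m2 - es                                 # eq. (9)
    return es, n0

# Quick self-check on synthetic QPSK-like samples (one sample per symbol)
rng = np.random.default_rng(0)
phases = np.pi / 4 + np.pi / 2 * rng.integers(0, 4, size=100_000)
noise = np.sqrt(0.05) * (rng.normal(size=100_000) + 1j * rng.normal(size=100_000))
print(estimate_es_n0(np.exp(1j * phases) + noise))   # expect roughly (1.0, 0.1)

Because the estimate uses only sample magnitudes, it does not require the carrier phase to have been recovered, which is the point of making the estimate before carrier coherence is established.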
{"url":"http://www.faqs.org/patents/app/20130336432","timestamp":"2014-04-18T19:04:34Z","content_type":null,"content_length":"43668","record_id":"<urn:uuid:8f0c1636-3346-40cb-8695-8fb5dabc3215>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
linear programming

September 2nd 2009, 07:36 PM
linear programming
I go to West Field High School in Illinois and one of my teachers gave a problem that is really confusing. It would be nice if I could get some help. Here is the problem:

A manufacturer of garden furniture makes a Giverny bench and a Kensington bench. The company can produce no more than 60 benches a day. The company needs to produce at least 15 Giverny benches and at least 20 Kensington benches. The company can produce at most twice as many Kensington benches as Giverny benches. Each Giverny bench sells for $325 and each Kensington bench sells for $250. How many of each kind of bench should be produced for maximum profit?

Here are the constraints that I found: X = Giverny, Y = Kensington.

I am confused about this sentence: "The company can produce at most twice as many Kensington benches as Giverny benches." What does it mean? Is there a different constraint for that?

September 3rd 2009, 12:23 AM
Quote:
I go to West Field High School in Illinois and one of my teachers gave a problem that is really confusing. It would be nice if I could get some help. Here is the problem: A manufacturer of garden furniture makes a Giverny bench and a Kensington bench. The company can produce no more than 60 benches a day. The company needs to produce at least 15 Giverny benches and at least 20 Kensington benches. The company can produce at most twice as many Kensington benches as Giverny benches. Each Giverny bench sells for $325 and each Kensington bench sells for $250. How many of each kind of bench should be produced for maximum profit? Here are the constraints that I found: X = Giverny, Y = Kensington. I am confused about this sentence: "The company can produce at most twice as many Kensington benches as Giverny benches." What does it mean? Is there a different constraint for that?

Yes, it tells you that Y <= 2X.

September 3rd 2009, 01:10 PM
What do you mean? (Wondering)

September 3rd 2009, 01:25 PM
I mean that the sentence "The company can produce at most twice as many Kensington benches as Giverny benches" tells you that Y (the number of Kensingtons) is at most twice X (the number of Givernys). In symbols, $Y\leqslant2X$. That is an additional constraint, to go along with the other ones that you listed.

September 3rd 2009, 08:09 PM
Thank you for your help!
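For readers who want to check the finished model numerically, here is a small sketch (not part of the thread) that feeds the full constraint set, including Y <= 2X, to SciPy's linear programming routine; linprog minimizes, so the profit is negated:

from scipy.optimize import linprog

# Maximize 325*X + 250*Y with X = Giverny benches, Y = Kensington benches,
# subject to X + Y <= 60, Y <= 2X, X >= 15, Y >= 20.
c = [-325, -250]                      # linprog minimizes, so negate the profit
A_ub = [[1, 1],                       # X + Y <= 60
        [-2, 1]]                      # Y - 2X <= 0, i.e. Y <= 2X
b_ub = [60, 0]
bounds = [(15, None), (20, None)]     # X >= 15, Y >= 20

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)                # about X = 40, Y = 20, profit $18,000

Since a Giverny bench earns more than a Kensington bench, the optimum keeps Kensington production at its minimum of 20 and uses the remaining daily capacity for Giverny benches.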
{"url":"http://mathhelpforum.com/pre-calculus/100329-linear-programming-print.html","timestamp":"2014-04-18T01:02:27Z","content_type":null,"content_length":"7445","record_id":"<urn:uuid:97899ee1-10ad-43de-8f55-e5e4eb31a14d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Sums of arctangents

$$ \arctan(x) = \arctan(1) + \arctan\left(\frac{x-1}{2}\right) - \arctan\left(\frac{(x-1)^2}{4}\right) + \arctan\left(\frac{(x-1)^3}{8}\right) - \cdots $$

Is this known?

I have no references for this particular series, but here's some hints to get a closed formula for the coefficients listed above by Michael Renardy. If we let $u:=\frac {1-x} 2$, an expansion $$\arctan(1-2u)=\arctan(1) + \sum_{k=1}^\infty \frac {c_k} k \arctan(u^k)$$ can be obtained by term-wise integration over $[0,u]$ of a (somehow more common) expansion into rational fractions $$\frac 2 {1 + (1-2u)^2}= \sum_{k=1}^\infty c_k \frac {u^{k-1}} {1+u^{2k} }\, ,$$ (such expansions have a role in number theory, and are related to Dirichlet series). Here the coefficients may be identified expanding formally the geometric series $ (1+ u^{2k} )^{-1}$ and rearranging into a series of powers of $u$, to be compared with the power series of the LHS. One finds an equality with an arithmetic convolution, that inverted gives the $c_k$'s. The exponential growth of the $c_k$ gives a positive radius of convergence (I guess $1/\sqrt 2$), that in particular allows the term-wise integration. Note that $\frac 2 {1 + (1-2u)^2}= \mathrm{Im} { \frac 2 {1-2u+i} }$, that simplifies things a bit.

I've voted up Pietro Majer's incomplete answer and Michael Renardy's incomplete answer in the "comments" section. Here's my own incomplete answer.

Here's how I got this series: start with the identity $$ \arctan a - \arctan b = \arctan \frac{a-b}{1+ab}. $$ From this we get $$ \arctan x = \arctan 1 + \arctan\frac{x-1}{1+x}. $$ Substituting 1 for $x$ everywhere in the last expression except the power of $x-1$, we get the 1st-degree term. So we need to replace the last term above by the 1st-degree term plus another arctangent by using the basic identity above, and we get $$ \arctan\frac{x-1}{1+x} = \arctan\frac{x-1}{2} + \arctan\frac{-(x-1)^2}{2(1+x) +(x-1)^2}. $$ Then again substitute 1 for $x$ everywhere in the last term except in the power of $(x-1)$ in the numerator, to get the 2nd-degree term, and then write the last term above as the sum of the 2nd-degree term and another arctangent of a yet more complicated rational function. And so on.

Does the sequence of arctangents of rational functions go to 0? In some sense? I don't know, nor do I know the general pattern. I actually tried this first with $x-2$ instead of $x-1$; then I decided that $x-1$ already has enough initial unclarity. I don't even know whether in some reasonable sense the process goes on forever.
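A quick way to probe the conjectured pattern numerically (this snippet is not part of the thread) is to compare partial sums of the displayed series against $\arctan(x)$ for a test value of $x$ and watch whether they approach the target:

import numpy as np

def partial_sum(x, n_terms):
    """Partial sum of the series in the question:
    arctan(1) + arctan((x-1)/2) - arctan((x-1)^2/4) + arctan((x-1)^3/8) - ..."""
    total = np.arctan(1.0)
    sign = 1.0
    for k in range(1, n_terms + 1):
        total += sign * np.arctan((x - 1.0) ** k / 2.0 ** k)
        sign = -sign
    return total

x = 1.5
for n in (2, 5, 10, 20):
    print(n, partial_sum(x, n), np.arctan(x))   # compare the partial sums with arctan(x)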
{"url":"http://mathoverflow.net/questions/50472/sums-of-arctangents?sort=oldest","timestamp":"2014-04-20T08:44:25Z","content_type":null,"content_length":"68314","record_id":"<urn:uuid:35524546-fea6-4beb-871c-57e9578215c2>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
New Blaths

Here are a couple new blaths (I’m going to popularize that term if it kills me) to hit the intarwobs. First off is the latest Fields-Medalist-cum-blatherer, Timothy Gowers. More interesting to me is Rigorous Trivialities, which is being sporadically written by a new graduate student at UPenn. I wonder if Isabel put him up to it. Either way, this means we’re only one seminar invitation from having three Ivy-league-educated math bloggers in one room (hint, hint).

6 Comments »

1. I didn’t put him up to it, although I do seem to recall that the fact that mathematical blogs exist came up in conversation.
Isabel | September 12, 2007 | Reply

2. I was actually inspired by Greg from the Everything Seminar when I met him over the summer. And strictly speaking, there’s three of us running Rigorous Trivialities, the other two just haven’t had the time to put together any posts.
Charles | September 13, 2007 | Reply

3. So, Charles, who are the other two? More UPenn grad students? Do post it back here for an Unapologetic Mathematician exclusive :D
John Armstrong | September 13, 2007 | Reply

4. Nope, a first year at Chicago and a second year at Stonybrook. The former I did an REU with, the latter I did undergrad. I’m currently prodding the other two to start posting, including their own introductory posts. The one from Stonybrook should be starting by the end of next week. Here’s hoping!
Charles | September 13, 2007 | Reply

5. So what was the REU on? The polynomial knots? And where was undergrad?
John Armstrong | September 13, 2007 | Reply

6. Undergrad was at Rutgers, and the REU was on the polynomial knots. Got to work with Alan Durfee and Don O’Shea up at Mt. Holyoke, which was great. The polynomial knots are something I keep coming back to, I have this nagging feeling that there’s something really cool lurking around there, but that I don’t know enough to identify it, so at the least I try to make others aware of it.
Charles | September 13, 2007 | Reply
{"url":"http://unapologetic.wordpress.com/2007/09/12/new-blaths/?like=1&source=post_flair&_wpnonce=d91e0e6d9a","timestamp":"2014-04-21T15:09:08Z","content_type":null,"content_length":"74286","record_id":"<urn:uuid:74d53682-b934-4825-b62b-34e484b0828c>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
A transformation is a one-to-one mapping on a set of points. The most common transformations map the points of the plane onto themselves, in a way which keeps all lengths the same. These transformations are called isometries. Another common sort of transformation, which does not preserve lengths, is the dilatation. There are four isometries in the plane: translations, rotations, reflections, and glide reflections.

Translations
A translation slides all the points in the plane the same distance in the same direction. This has no effect on the sense of figures in the plane. There are no invariant points (points that map onto themselves) under a translation.

Rotations
A rotation turns all the points in the plane around one point, which is called the center of rotation. A rotation does not change the sense of figures in the plane. The center of rotation is the only invariant point (point that maps onto itself) under a rotation. A rotation of 180 degrees is called a half turn. A rotation of 90 degrees is called a quarter turn.

Reflections
A reflection flips all the points in the plane over a line, which is called the mirror. A reflection changes the sense of figures in the plane. The mirror contains all the invariant points (points that map onto themselves) under a reflection.

Glide reflections
A glide reflection translates the plane and then reflects it across a mirror parallel to the direction of the translation. A glide reflection changes the sense of figures in the plane. There are no invariant points (points that map onto themselves) under a glide reflection.

The sense of a figure
The sense or handedness of a figure refers to the order of points as one goes around the figure. The three triangles in the figure are all congruent, but triangle ABC has the same sense as triangle DEF, and the opposite sense to triangle GHI. A figure has a visible sense if it looks different under a reflection. For example the letter R, if it is reflected, can not be mapped back onto itself by any sequence of translations or rotations. If a figure does not have a visible sense then it is said to have bilateral symmetry. The letter A, for example, is symmetrical around a vertical line through its center. If it is reflected by a vertical mirror it looks identical. Even if it is reflected by a non-vertical mirror, it can be rotated onto itself.

Symmetry
If a figure looks the same under a transformation then it is said to be symmetrical under that transformation. For example, the letter A is symmetrical under a reflection around a vertical mirror through its center. This sort of symmetry is called bilateral symmetry. The letter N is symmetrical under a half turn around the midpoint of its diagonal stroke. There are many interesting symmetrical patterns which cover the plane, all based on combinations of translations, rotations, reflections, and glide reflections. Another interesting set of symmetrical patterns are frieze patterns.

This page maintained by David A Reid
Email: david.reid@acadiau.ca
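To make the idea of sense concrete, here is a small numerical illustration (not part of the original page; the triangle and angle are arbitrary): the signed area of a triangle keeps its sign under translations and rotations but flips under a reflection, which is exactly the "change of sense" described above.

import numpy as np

# Vertices of a triangle, one per row
tri = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])

def signed_area(pts):
    """Positive for counter-clockwise ordering, negative for clockwise,
    so its sign records the sense (handedness) of the figure."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

theta = np.pi / 3
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0, 0.0], [0.0, -1.0]])    # mirror in the x-axis

print(signed_area(tri))                      # original sense
print(signed_area(tri + np.array([3, 4])))   # translation: same sign
print(signed_area(tri @ rotation.T))         # rotation: same sign
print(signed_area(tri @ reflection.T))       # reflection: opposite sign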
{"url":"http://plato.acadiau.ca/courses/educ/reid/Geometry/Symmetry/Transformations.html","timestamp":"2014-04-21T12:16:41Z","content_type":null,"content_length":"5515","record_id":"<urn:uuid:c20a1cb0-ddb7-493b-a2de-ae16cf2ecc1f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] how to fit a given pdf josef.pktd@gmai... josef.pktd@gmai... Wed Aug 11 20:51:04 CDT 2010 On Wed, Aug 11, 2010 at 8:00 PM, Geordie McBain <gdmcbain@freeshell.org> wrote: > 2010/8/12 Renato Fabbri <renato.fabbri@gmail.com>: >> Dear All, >> help appreciated, thanks in advance. >> how do you fit a pdf you have with a given pdf (say gamma). >> with the file attached, you can go like: >> a=open("AC-010_ED-1m37F100P0.txt","rb") >> aa=a.read() >> aaa=aa[1:-1].split(",") >> data=[int(i) for i in aaa] >> if you do pylab.plot(data); pylab.show() >> The data is something like: >> ___|\___ >> It is my pdf (probability density function). >> how can i find the right parameters to make that fit with a gamma? >> if i was looking for a normal pdf, for example, i would just find mean >> and std and ask for the pdf. >> i've been playing with scipy.stats.distributions.gamma but i have not >> reached anything. >> we can extend the discussion further, but this is a good starting point. >> any idea? > A general point on fitting empirical probability density functions is > that it is often much easier to fit the cumulative distribution > function instead. For one thing, this means you don't have to decide > on the intervals of the bins in the histogram. For another, it's > actually often the cdf that is more related to the final answer > (though I don't know your application, of course). > Here's a quote. > `So far the discussion of plots of distributions has emphasized > frequency (or probability) vs. size plots, whereas for many > applications cumulative plots are more important. Cumulative curves > are produced by plotting the percentage of particles (or weight, > volume, or surface) having particle diameters greater than (or less > than) a given particle size against the particle size. … Such curves > have the advantage over histograms for plotting data that the class > interval is eliminated, and they can be used to represent data which > are obtained in classified form having unequal class intervals' > (Cadle, R. D. 1965. Particle Size. New York: Reinhold Publishing > Corporation, pp. 38-39) > Once you've got your empirical cdf, the problem reduces to one of > nonlinear curve fitting, for whichever theoretical distribution you > like. For a tutorial on nonlinear curve fitting, see > scipy.optimize.leastsq at > http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html. You > could of course use this approach for the pdf too, but I fancy the cdf > result will be more robust. A similar approach for pareto works easily because the function is linear in log-log scale, but the properties of these graphical estimators was criticized in the readings as not being as good as other estimators. But they also show that in the graphs, empirical cdf or survival function are more stable/smooth than histograms, especially in the tails. MLE uses the pdf or logpdf without binning the data. > On the other hand, if you want something like your `mean and variance' > approach to fitting normal distributions, you could still compare your > mean and variance with the known values for the Gamma distribution > (available e.g. on its Wikipedia page) and back-out the two parameters > of the distribution from them. I'm not too sure how well this will > work, but it's pretty easy. called method of moments in the literature, as in Jonathan's estimator > Another idea occurs to me and is about as easy as this is to compute > the two parameters of the Gamma distribution by collocation with the > empirical cdf; i.e. 
> pick two quantiles, e.g. 0.25 and 0.75, or
> whatever, and get two equations for the two unknown parameters by
> insisting on the Gamma cdf agreeing with the empirical for these
> quantiles. This might be more robust than the mean & variance
> approach, but I haven't tried either.

It works pretty well, see and his other articles cited at the bottom. (Using a generalized method of moment approach, this can be generalized to using more than two quantiles, which works well in my

One advantage of generic MLE is that I know how to get (asymptotic) standard errors and confidence intervals, while for many of the other estimators bootstrap or checking the distribution specific formulas is

> Good luck!
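As a concrete illustration of the two estimators being discussed (this snippet is not from the thread, and the synthetic sample below stands in for the data file loaded in the original post), both the maximum-likelihood and the method-of-moments fits of a gamma distribution are only a few lines with SciPy/NumPy:

import numpy as np
from scipy import stats

# Stand-in sample; in the thread this would be the list read from the text file
data = stats.gamma.rvs(a=2.0, scale=3.0, size=1000, random_state=0)

# Maximum-likelihood fit, no binning; floc=0 keeps the usual two-parameter gamma
shape_mle, loc, scale_mle = stats.gamma.fit(data, floc=0)

# Method-of-moments fit: mean = shape*scale, variance = shape*scale**2
m, v = data.mean(), data.var()
shape_mm, scale_mm = m**2 / v, v / m

print(shape_mle, scale_mle, shape_mm, scale_mm)   # both should be near (2.0, 3.0)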
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-August/052171.html","timestamp":"2014-04-17T18:25:01Z","content_type":null,"content_length":"8507","record_id":"<urn:uuid:b4e200bd-8645-4eae-9ae0-af8e620b2038>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: Interval variables as independent variables

From: Maarten buis <maartenbuis@yahoo.co.uk>
To: statalist@hsphsun2.harvard.edu
Subject: RE: st: Interval variables as independent variables
Date: Mon, 10 Nov 2008 14:54:40 +0000 (GMT)

--- "Jessica A.Jakubowski" asked:
>>>> I have data that includes an income variable measured at the
>>>> interval level, e.g.:
>>>> 1= less than $10,000
>>>> 2= $10,000-$15,000
>>>> I would like to know if Stata has a way to deal with interval
>>>> variables on the right-hand side of the regression equation, that
>>>> is, as an independent variable.

--- Maarten buis answered:
>>> I can see two alternative strategies (there may well be more):
>>> 2) Alternatively you can scale the income variable such that it
>>> optimally predicts the outcome variable

--- Austin Nichols answered:
>> Usually it is better to make indicators, e.g.
>> tab income, gen(d)
>> reg y d*

--- "Feiveson, Alan H. (JSC-SK311)" answered:
> But suppose one is trying to make a prediction model from actual
> income, not an income range? Then wouldn't some adjustment have to be
> made for the predictor variable being measured with error?

If so, Austin's answer is a special case of my point 2): you can think of that model as simultaneously estimating a scale for these categories and an effect of this scaled income. The scale defines the relative distances between the categories, such that the linear effect of this scaled income optimally predicts the outcome.

Let's say we have three categories: poor, middle, and rich. In order to identify the scale we need to fix the origin and the unit of the scale. Let's say we fix the origin at poor and the unit at the distance between poor and rich. In that case our scale would measure the position of middle relative to poor and rich, and will most likely be a number between 0 and 1. Let's call this number "a". It is this a together with the effect of scaled income that we want to estimate.

If we create dummies for poor, middle, and rich, then we can say that the scaled income variable would be:

scaled_inc = 0 poor + a middle + 1 rich

and the effect of that variable on some dependent variable y is:

y = b + c*scaled_inc = b + c*(0 poor + a middle + 1 rich) = b + c a middle + c rich

If we entered income just as a set of dummies (with poor as reference category) we would have gotten:

y = b0 + b1 middle + b2 rich

So we can directly derive both the scaling and the effect of scaled income from this regression model with income dummies:

c = b2
a = b1/b2

Below is an application using 4 categories:

*-------------------- begin example ----------------------
sysuse auto, clear
recode rep78 1=2
tab rep78, gen(d)
reg mpg d2 d3 d4 foreign weight
// the effect of scaled repair status is the effect of d4
// the scale values of the categories are:
// d1 = 0
// d2: nlcom _b[d2]/_b[d4]
// d3: nlcom _b[d3]/_b[d4]
// d4 = 1
*---------------------- end example ---------------------

You can see how this can be extended to more than 4 categories. An interesting extension here would occur if you have an interaction effect, e.g. with time. In that case you could enforce the constraint that the scaling of income remains constant over time, but that the effect changes, and this would be a testable constraint. This idea is implemented in -propcnsreg- and is discussed in Buis, Maarten L. (2008) "Scaling levels of education" http://home.fsw.vu.nl/m.buis/ .
--
Maarten

Maarten L. Buis
Department of Social Research Methodology
Vrije Universiteit Amsterdam
Boelelaan 1081
1081 HV Amsterdam
The Netherlands

visiting address:
Buitenveldertselaan 3 (Metropolitan), room N515

+31 20 5986715
{"url":"http://www.stata.com/statalist/archive/2008-11/msg00355.html","timestamp":"2014-04-19T22:26:31Z","content_type":null,"content_length":"9129","record_id":"<urn:uuid:c8f8a60f-8109-421d-862d-e9870c0cc27e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Access Method Strategies

The operators associated with an operator class are identified by "strategy numbers", which serve to identify the semantics of each operator within the context of its operator class. For example, B-trees impose a strict ordering on keys, lesser to greater, and so operators like "less than" and "greater than or equal to" are interesting with respect to a B-tree. Because PostgreSQL allows the user to define operators, PostgreSQL cannot look at the name of an operator (e.g., < or >=) and tell what kind of comparison it is. Instead, the index access method defines a set of "strategies", which can be thought of as generalized operators. Each operator class shows which actual operator corresponds to each strategy for a particular data type and interpretation of the index semantics.

B-tree indexes define 5 strategies, as shown in Table 14-1.

Table 14-1. B-tree Strategies

  Operation               Strategy Number
  less than               1
  less than or equal      2
  equal                   3
  greater than or equal   4
  greater than            5

Hash indexes express only bitwise similarity, and so they define only 1 strategy, as shown in Table 14-2.

Table 14-2. Hash Strategies

  Operation   Strategy Number
  equal       1

R-tree indexes express rectangle-containment relationships. They define 8 strategies, as shown in Table 14-3.

Table 14-3. R-tree Strategies

  Operation                 Strategy Number
  left of                   1
  left of or overlapping    2
  overlapping               3
  right of or overlapping   4
  right of                  5
  same                      6
  contains                  7
  contained by              8

GiST indexes are even more flexible: they do not have a fixed set of strategies at all. Instead, the "consistency" support routine of a particular GiST operator class interprets the strategy numbers however it likes.

By the way, the amorderstrategy column in pg_am tells whether the access method supports ordered scan. Zero means it doesn't; if it does, amorderstrategy is the strategy number that corresponds to the ordering operator. For example, B-tree has amorderstrategy = 1, which is its "less than" strategy number.

In short, an operator class must specify a set of operators that express each of these semantic ideas for the operator class's data type.
{"url":"http://www.postgresql.org/docs/7.3/static/xindex-strategies.html","timestamp":"2014-04-19T17:15:45Z","content_type":null,"content_length":"8997","record_id":"<urn:uuid:51173328-99e5-44ef-a3ba-41b597c2dbfe>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Internal Mobility and Agent-Passing Calculi. Theoretical Computer Science, 167(1,2):235--274

- LICS'98, 1998
Cited by 108 (13 self)
We present the fusion calculus as a significant step towards a canonical calculus of concurrency. It simplifies and extends the π-calculus. The fusion calculus contains the polyadic π-calculus as a proper subcalculus and thus inherits all its expressive power. The gain is that fusion contains actions akin to updating a shared state, and a scoping construct for bounding their effects. Therefore it is easier to represent computational models such as concurrent constraints formalisms. It is also easy to represent the so called strong reduction strategies in the lambda-calculus, involving reduction under abstraction. In the π-calculus these tasks require elaborate encodings. The dramatic main point of this paper is that we achieve these improvements by simplifying the π-calculus rather than adding features to it. The fusion calculus has only one binding operator where the π-calculus has two (input and restriction). It has a complete symmetry between input and output actions where the π-calculus has not. There is only one sensible variety of bisimulation congruence where the π-calculus has at least three (early, late and open). Proofs about the fusion calculus, for example in complete axiomatizations and full abstraction, therefore are shorter and clearer. Our results on the fusion calculus in this paper are the following. We give a structured operational semantics in the traditional style. The novelty lies in a new kind of action, fusion actions for emulating updates of a shared state. We prove that the calculus contains the π-calculus as a subcalculus. We define and motivate the bisimulation equivalence and prove a simple characterization of its induced congruence, which is given two versions of a complete axiomatization for finite terms. The expressive power of the calculus is demonstrated by giving a straight-forward encoding of the strong lazy lambda-calculus, which admits reduction under lambda abstraction.

, 1999
Cited by 97 (5 self)
We study two encodings of the asynchronous π-calculus with input-guarded choice into its choice-free fragment. One encoding is divergence-free, but refines the atomic commitment of choice into gradual commitment. The other preserves atomicity, but introduces divergence. The divergent encoding is fully abstract with respect to weak bisimulation, but the more natural divergence-free encoding is not. Instead, we show that it is fully abstract with respect to coupled simulation, a slightly coarser---but still coinductively defined---equivalence that does not enforce bisimilarity of internal branching decisions. The correctness proofs for the two choice encodings introduce a novel proof technique exploiting the properties of explicit decodings from translations to source terms.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1624186","timestamp":"2014-04-23T12:54:49Z","content_type":null,"content_length":"16563","record_id":"<urn:uuid:d031edf8-539f-4760-95e4-7d9664234548>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
Don't Use Calculators Date: 12/18/2000 at 18:09:30 From: donnamorris Subject: Why should you not use calculators to do math? Why should children not use calculators to do math? Please state several reasons. Thank you. Ms. Morris Date: 12/18/2000 at 18:49:16 From: Doctor Ian Subject: Re: Why should you not use calculators to do math? Hi Ms. Morris, It might be going a little too far to say that children "should not use calculators" to do math. There are times when a calculator is just the ticket - for example, when you want an approximate value for the cube root of 59. What children shouldn't do is _depend_ on calculators to do simple calculations, like multiplying integers from 1 to 10, or adding and subtracting numbers, or dividing numbers with relatively few digits. (I wouldn't hesitate to whip out a calculator to compute something like 2944521.324562 divided by 192.87465, although I might just divide 3 million by 200 and call it close enough for government work.) Some reasons children shouldn't depend on calculators include: 1. There won't always be a calculator around when you need one; or if there is one, the battery may be dead. 2. It's about a zillion times quicker to use your brain to figure out that 7 x 6 = 42 than it is to key it into a calculator. This doesn't make much difference if you just want to do one calculation, but when you're simplifying algebraic expressions you typically do dozens, even hundreds of simple calculations, which makes relying on a calculator sort of like trying to run a marathon and having to stop every twenty feet to re-tie your shoes. 3. Calculators only use decimal approximations to real numbers, which gives an unrealistic idea of how the real number system works, and obliterates any opportunity to develop an appreciation for the crucial difference between an exact answer and an approximate one. 4. Children who depend on calculators without developing an independent number sense are unable to tell when they've arrived at ridiculous answers because they've inadvertently typed in the wrong number, or have missed a decimal point, or made some similar error. So they can be computing the speed of a train, get an answer like '394312353.31395 miles per hour', and have no choice but to assume that it must be right, because it's 'what the calculator said'. 5. Learning to get along without a calculator is an excellent object lesson in growing up. The essence of becoming an adult is learning to put up with a little pain up front in order to avoid a lot of pain later on. Children who avoid the pain of learning their basic arithmetic facts by depending on calculators are setting themselves up for enormous amounts of pain and frustration later in life. I hope this helps. Write back if you'd like to talk about this some more, or if you have any other questions. - Doctor Ian, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/57026.html","timestamp":"2014-04-17T10:24:08Z","content_type":null,"content_length":"8028","record_id":"<urn:uuid:8da85a87-3d87-4894-bcb3-0073fb6d9b98>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
1 Jul 01:39 2011
Re: sequential removeBatchEffect()?
Gordon K Smyth <  2011-06-30 23:39:08 GMT

Hi Jenny,

I don't think I'd recommend the sequential approach, which might give non-optimal results if the two batch factors are highly correlated with one another. In fact, I'd be tempted to remove only the pop batch effect, since the BeadChip effect is relatively small. Removing both batch effects requires that you have enough data to estimate all your experimental treatments reliably as well as the two batch effects.

If you want to do this, here is a function that will remove two batch effects at once:

removeBatchEffect <- function(x,batch,batch2=NULL,design=matrix(1,ncol(x),1))
{
  x <- as.matrix(x)
  batch <- as.factor(batch)
  contrasts(batch) <- contr.sum(levels(batch))
  if(is.null(batch2)) {
    X <- model.matrix(~batch)[,-1,drop=FALSE]
  } else {
    batch2 <- as.factor(batch2)
    contrasts(batch2) <- contr.sum(levels(batch2))
    X <- model.matrix(~batch+batch2)[,-1,drop=FALSE]
  }
  X <- qr.resid(qr(design),X)
  qrX <- qr(X)

(Continue reading)
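For readers outside R, the same idea can be sketched in a few lines of NumPy (this is an illustration of the approach, not the limma implementation; the function name and coding choices below are assumptions): code the batch factors as indicator columns, keep only the part of that batch design that is orthogonal to the experimental design, and subtract the fitted batch component from the data.

import numpy as np

def remove_batch_effects(x, batch, batch2=None, design=None):
    """x: genes x samples matrix; batch, batch2: per-sample labels.
    Returns x with the batch component (orthogonal to design) regressed out."""
    x = np.asarray(x, dtype=float)
    n = x.shape[1]
    design = np.ones((n, 1)) if design is None else np.asarray(design, dtype=float)

    def dummies(labels):
        levels, idx = np.unique(labels, return_inverse=True)
        return np.eye(len(levels))[idx][:, 1:]      # drop one level per factor

    B = dummies(batch)
    if batch2 is not None:
        B = np.hstack([B, dummies(batch2)])

    # Keep only the part of the batch design not explained by the experimental design
    B_res = B - design @ np.linalg.lstsq(design, B, rcond=None)[0]
    beta = np.linalg.lstsq(B_res, x.T, rcond=None)[0]    # batch coefficients per gene
    return x - (B_res @ beta).T

Residualizing the batch design against the experimental design is what keeps the treatment effects from being removed along with the batch effects, which mirrors the role of the design argument in the R function above.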
{"url":"http://comments.gmane.org/gmane.science.biology.informatics.conductor/35686","timestamp":"2014-04-20T06:01:07Z","content_type":null,"content_length":"6830","record_id":"<urn:uuid:dbea976e-10b1-420e-bbde-1ea5ab28bf84>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Suffix Notation Operators

March 27th 2011, 12:29 PM #1
Suffix Notation Operators
A very quick question here: going over my lecture notes, I've missed out an example on suffix notation. I've got

1) $(\nabla \phi)_i = \frac{\partial \phi}{\partial x_i}$

7) $\left[ (\upsilon \cdot \nabla) u \right]_i =$

I included the first one as an example of the new notation they were giving us examples of, and 7) is the one I've got missing. I would be grateful if someone could help me out. Thanks in advance.

March 28th 2011, 02:31 AM #2
Well, you could use Einstein summation notation for the dot product there. I would use a summed variable for the dot product that is not i, or it'll get confusing. Can you continue?
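For reference (this is not spelled out in the thread), the completed identity that the hint points toward, using a summation index $j$ distinct from the free index $i$, is $\left[ (\upsilon \cdot \nabla) u \right]_i = \upsilon_j \frac{\partial u_i}{\partial x_j}$, with an implied sum over the repeated index $j$.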
{"url":"http://mathhelpforum.com/advanced-applied-math/176004-suffix-notation-operators.html","timestamp":"2014-04-23T20:56:58Z","content_type":null,"content_length":"33903","record_id":"<urn:uuid:2d68dad2-a189-416f-9af7-3635f1dce39f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Runtime speed integer versus double

Dear fellow programmers,

Is using integers faster than using floating points? Or, more precisely: what do you estimate as the chance that writing a custom floating point class is worth the effort?

I need to examine some C++ code for an ARM7 or ARM9 processor. Instead of using floating points, the coder had chosen to use integers only. To be able to fake floating points, a custom class was written. The coder states that he avoids using floating points as these are much slower. "Have you measured this?", I asked. "No, but it is known to be so", he replied.

Google taught me that most people state integers are faster than doubles, because their operations (addition, multiplication) take fewer assembler instructions. But if you need those floating points, what do you estimate as the chance that writing a custom floating point class is worth the effort?

Thanks,
Bilderbikkel
{"url":"http://www.cplusplus.com/forum/general/103139/","timestamp":"2014-04-19T01:56:21Z","content_type":null,"content_length":"8908","record_id":"<urn:uuid:d1fd9382-d52c-4f49-b003-4fccf5a8de5f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
Haciendas De Tena, PR Algebra 2 Tutor
Find a Haciendas De Tena, PR Algebra 2 Tutor

...I tutor out of my home office as well as at the public library in my community. As a certified online WyzAnt tutor, I also offer online tutoring for students whose circumstances may benefit from this option. If you are looking for a patient mentor who can give you homework support, help you pre...
15 Subjects: including algebra 2, calculus, geometry, statistics

...I think my two main strengths as a tutor are my ability to impart this understanding using visual aids and analogies, and my ability to break down complex problems to simple and easy ones. I am great at organizing problems in a way that makes them become easy, and I can teach you how to do it to...
14 Subjects: including algebra 2, chemistry, calculus, geometry

...I have assisted hundreds of students in calculus and hope to work with you in the future! Let my love of chemistry infect your son, daughter, or perhaps yourself! I have been tutoring and taught chemistry for years and find immense enjoyment assisting others in the subject.
10 Subjects: including algebra 2, chemistry, calculus, geometry

...Thus I have an extensive background in these two disciplines and am familiar with all concerns relating to animals including human beings. I have taught general math to young adults attempting to complete their GED and in other matriculation scenarios including Discrete Math and adult learners. ...
16 Subjects: including algebra 2, chemistry, geometry, biology

...I love to create fun math activities for students to play. For the past three years, I have dedicated my life to tutoring elementary children. Any age, any level, children are always a joy to
33 Subjects: including algebra 2, English, reading, special needs
{"url":"http://www.purplemath.com/Haciendas_De_Tena_PR_algebra_2_tutors.php","timestamp":"2014-04-20T16:21:55Z","content_type":null,"content_length":"24666","record_id":"<urn:uuid:160cff93-1cbe-4e12-b634-a511a02fa85a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
SFU/UBC Joint Graduate Student Seminar

Joslin Goh
Title: Prediction and Calibration Using Outputs From Multiple Simulators
Abstract: Deterministic simulators are widely used to describe physical processes in lieu of physical observations. In some cases, more than one computer simulator can be used to explore the physical system. Through the combination of field observations and simulated outputs, predictive models are developed for the real physical system. The resulting model can be used to perform sensitivity analysis for the system, solve inverse problems and make predictions. The proposed approach is Bayesian and will be illustrated through applications in predictive science at the Centre for Radiative Shock Hydrodynamics at the University of Michigan.

Seong-Hwan Jun
Title: Entangled Monte Carlo
Abstract: We propose a novel method for scalable parallelization of SMC algorithms, Entangled Monte Carlo simulation (EMC). EMC avoids the transmission of particles between nodes, and instead reconstructs them from the particle genealogy. In particular, we show that we can reduce the communication to the particle weights for each machine while efficiently maintaining implicit global coherence of the parallel simulation. We explain methods to efficiently maintain a genealogy of particles from which any particle can be reconstructed. We demonstrate using examples from Bayesian phylogenetics that the computational gain from parallelization using EMC significantly outweighs the cost of particle reconstruction. The timing experiments show that reconstruction of particles is indeed much more efficient as compared to transmission of particles.

Zheng Sun
Title: EDF Tests for Ordered Categorical Data
Abstract: In this talk, we consider a general class of EDF (Empirical Distribution Function) tests for ordered categorical data (ordered contingency tables), that is when the cells have a natural ordering, for example, letter grades on exams. Asymptotic distributions are found under the null hypothesis H_0: each row follows the same distribution. Asymptotic distributions under some contiguous alternatives are also found and asymptotic power of these tests can be calculated. A theorem is proved connecting the cases when parameters are known with those when parameters must be estimated. Components of these test statistics are examined and the first 4 components can be interpreted as tests that are aimed at specific alternatives: location, scale, skewness and kurtosis. We compare powers of the EDF tests with many competing tests including tests derived from the Neyman Pearson Lemma. EDF tests compare favourably. An example data set is analyzed.

Dr. Ruben Zamar
Title: Robustness and Other Things
Abstract: Data quality is typically affected by the presence of outliers and other forms of data contamination. It may also be affected by missing data, data duplication, etc. From a broad perspective I am interested in the study of the detrimental effect of poor data quality on statistical inference, and in developing appropriate alternative methods to address these problems. The purpose of this talk is to give students a broad picture of my research interests and some current research projects. "Other things" in the title refers to other related topics I am interested in, such as cluster analysis, model selection, bootstrap and data mining.

Dr. Joan Hu
Title: Statistical Analysis for Forest Fire Control
Abstract: This talk discusses statistical issues arising from forest fire control.
We start with brief background information to motivate the statistical problems. Models and inference procedures are then proposed. A set of Canadian forest fire data is used throughout the talk for illustration. This is an on-going project jointly with W. John Braun.

Jabed Hossain Tomal
Title: Ensembling Descriptor Sets using Phalanxes of Variables to Rank Activity of Compounds in QSAR Studies
Abstract: In QSAR studies, molecular descriptors are used to model biological activity of compounds. The statistical model aims to rank rare actives early in a list of compounds. The classifier "random forest" has been found highly accurate in QSAR studies. To enhance its performance in terms of predictive ranking, we propose an ensemble method by grouping variables together. The variables in a group (we call phalanx) are good to put together, whereas the variables in different groups (phalanxes) are good to ensemble. Finally, our method aggregates the phalanxes. There exist several molecular descriptor sets in QSAR studies, and a particular set might do well in ranking activity of compounds for some assays, and fail to do well for other assays. We have considered four assays and five descriptor sets for each. We apply the ensemble of phalanxes to each descriptor set and further ensemble across the five descriptor sets we generated. The performance of our ensemble is compared with random forest. Specifically, random forest was applied to each of the five descriptor sets and to the pool of descriptor sets. We found our method superior to any of the random forests using two rigorous evaluation procedures.

Shirin Golchi
Title: Monotone Interpolation: Sampling from a Constrained Gaussian Posterior
Abstract: Gaussian process (GP) models are popular tools for non-parametric modelling and function estimation. They are commonly used in the area of computer experiments where a finite number of function evaluations are available from a simulator and the underlying function is to be estimated using a statistical model while interpolating the given points. However, in the case that extra information such as monotonicity of the underlying function is available, it is not straightforward to incorporate the constraints in a GP model. I will talk about the constrained posterior distribution together with a recipe to sample from it.

Vincenzo Coia
Title: A New Sieve Model for Extreme Values
Abstract: Although rare, extreme events leave a lasting impact on our lives and the world in general. It is therefore important to determine the potential magnitude and frequency of such events, especially when these extremes are dangerous. We focus on the case when these extreme values are heavy tailed. Extreme Value Theory provides a theoretical basis for extrapolating and making inference into these heavy tails; however, there is room for improvement in the extrapolation methods. One modification to the heavy tail is to add an upper truncation; we propose a modification which "progressively truncates" the tail with permeable filters like a sieve. The techniques are then applied to the largest Atlantic hurricanes and the largest black sea bass in Buzzard's Bay. We find that, in most cases, the sieve model provides the best fit, followed by the truncated model.
{"url":"http://www.sfu.ca/~abelivea/Workshop/Home.html","timestamp":"2014-04-16T21:53:54Z","content_type":null,"content_length":"27330","record_id":"<urn:uuid:12d9cad3-f795-47dd-be07-aadf55d896bd>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Stoneham, MA Math Tutor Find a Stoneham, MA Math Tutor ...Learning how questions are scored, and paying close attention to the often tricky wording within the questions, can make a difference in test outcome. I also offer strategies to overcome anxiety and boost confidence regarding testing. I recommend starting to prepare at least 3-6 months prior to testing so there is less last-minute stress. 15 Subjects: including algebra 1, algebra 2, Microsoft Excel, general computer I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.I am a chemistry major at Boston College. 13 Subjects: including trigonometry, algebra 1, algebra 2, biology ...I would like to tutor students at any age and in subjects ranging from pure mathematics to practical applications in chemistry, physics and biology. In order to ensure that your tutoring is efficient and effective, please send me an email describing your specific needs in advance. I also tutor ... 12 Subjects: including SAT math, geometry, prealgebra, algebra 2 ...As a student athlete throughout college, I learned the value of time management and how to best utilize my strong work ethic, and succeeded in both arenas. I have always enjoyed helping others with schoolwork through demonstrating and explaining concepts to whomever needed it. I am passionate a... 15 Subjects: including algebra 2, calculus, chemistry, physics ...While I was there, I helped students of all ages in overcoming the challenges that they had in both subjects. During my undergraduate years at Baylor, I had a work-study job in which I was a grader for the Math Department. While I did not work with students directly, this position allowed me to see the thought processes of college students in solving math problems. 10 Subjects: including probability, algebra 1, algebra 2, calculus Related Stoneham, MA Tutors Stoneham, MA Accounting Tutors Stoneham, MA ACT Tutors Stoneham, MA Algebra Tutors Stoneham, MA Algebra 2 Tutors Stoneham, MA Calculus Tutors Stoneham, MA Geometry Tutors Stoneham, MA Math Tutors Stoneham, MA Prealgebra Tutors Stoneham, MA Precalculus Tutors Stoneham, MA SAT Tutors Stoneham, MA SAT Math Tutors Stoneham, MA Science Tutors Stoneham, MA Statistics Tutors Stoneham, MA Trigonometry Tutors Nearby Cities With Math Tutor Arlington, MA Math Tutors Belmont, MA Math Tutors Burlington, MA Math Tutors Everett, MA Math Tutors Lynnfield Math Tutors Malden, MA Math Tutors Medford, MA Math Tutors Melrose, MA Math Tutors Reading, MA Math Tutors Saugus Math Tutors Wakefield, MA Math Tutors West Medford Math Tutors Wilmington, MA Math Tutors Winchester, MA Math Tutors Woburn Math Tutors
{"url":"http://www.purplemath.com/Stoneham_MA_Math_tutors.php","timestamp":"2014-04-19T05:24:31Z","content_type":null,"content_length":"24000","record_id":"<urn:uuid:0c30c6c7-4178-4c3a-93ce-616b9d16f98e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Ask questions and get free answers from expert tutors Latest answer by Haizhen Y. Flushing, NY how to measure line segment? Latest answer by Stuart R. West Hills, CA how to get the measurement of line segment?
{"url":"http://www.wyzant.com/resources/answers/measuring_line_segments","timestamp":"2014-04-19T08:24:19Z","content_type":null,"content_length":"27511","record_id":"<urn:uuid:d2458987-5ce5-4e95-a681-47e10f0f3ccb>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Concepts of Programming - General Programming I'm hearing a lot of fancy words the more I grow interested in programming. I don't really understand a lot of it. I wish programming tutorials would give more information about what they're talking about. Let's use C++ for example. I know that setting a function to void means that it doesn't return a value. Returning 0 simply means that the program ran correctly. No programming tutorial has told me more than this. They have only ever elaborated on those facts. When I ask myself "In what situations should I set a function to void?" or "If I return 0 in another function, what would happen? Why would it cause an error?" I don't know how to answer them. These might seem like insignificant details to you, because I could go on without them. But I want to know exactly when I should be using void, and when I should be returning a variable. Object Oriented Programming is said to take data and code and put them into an object. They say this differs from traditional programming methods but I have no idea what traditional methods were and how different they were. I don't know the difference between my "code" and the "data". Can anyone please relate to me and see why I'm struggling? It's almost like nobody bothers to sit down and take the time to explain why these things are so. Every tutorial I watch, book I read does not elaborate enough and it drives me up the wall. But enough complaining. Can anyone give me some basic run through of the most basic principles of coding. I understand what integers are, floats, strings, booleans - for example. I know that it's more effective to use a single instead of a double when you're working with a smaller number, but why? I just need some explanations on common misconceptions and reasoning behind the crap I'm typing. I see it coming together, I see it working, but I don't know why it's working and I struggle to replicate it later because I don't know the reasons why I put certain things in places the first time. If typing something up here is too much to ask for, that's probably right, lol. I'll appreciate any information or tutorials, videos etc. that anyone can link me to in terms of coding and their core foundations and reasoning behind its basic principles. Just the silly things that nobody bothers to elaborate on as I explained above. Edited by Exoaria, 22 June 2013 - 07:16 AM.
{"url":"http://www.gamedev.net/topic/644633-basic-concepts-of-programming/","timestamp":"2014-04-18T17:04:59Z","content_type":null,"content_length":"198628","record_id":"<urn:uuid:a76edef9-9651-4245-b4b0-fa53896ec9b0>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Automatic annotation of protein motif function with Gene Ontology terms

Conserved protein sequence motifs are short stretches of amino acid sequence patterns that potentially encode the function of proteins. Several sequence pattern searching algorithms and programs exist for identifying candidate protein motifs at the whole genome level. However, a much needed and important task is to determine the functions of the newly identified protein motifs. The Gene Ontology (GO) project is an endeavor to annotate the function of genes or protein sequences with terms from a dynamic, controlled vocabulary, and these annotations serve well as a knowledge base.

This paper presents methods to mine the GO knowledge base and use the association between the GO terms assigned to a sequence and the motifs matched by the same sequence as evidence for predicting the functions of novel protein motifs automatically. The task of assigning GO terms to protein motifs is viewed as both a binary classification and an information retrieval problem, where PROSITE motifs are used as samples for model training and functional prediction. The mutual information of a motif and a GO term association is found to be a very useful feature. We take advantage of the known motifs to train a logistic regression classifier, which allows us to combine mutual information with other frequency-based features and obtain a probability of correct association. The trained logistic regression model has intuitively meaningful and logically plausible parameter values, and performs very well empirically according to our evaluation criteria.

In this research, different methods for automatic annotation of protein motifs have been investigated. Empirical results demonstrated that the methods have great potential for detecting and augmenting information about the functions of newly discovered candidate protein motifs.

Gene Ontology; data mining; supervised learning; protein motif; feature extraction; logistic regression

With the completion of many genome sequencing projects and advances in the methods of automatic discovery of sequence patterns (see Brazma [1] and Brejova et al [2] for reviews), it is now possible to search or discover protein sequence motifs at the genome level. If one regards protein sequences as "sentences" of the biological language with amino acids as the alphabet, then protein motifs can be considered as words or phrases of that language, and determining the function of a motif is equivalent to determining the sense of a word. Identifying biological sequence motifs has been a fundamental task of bioinformatics, which has led to the development of several motif (pattern) databases, such as PROSITE, BLOCKS, SMART and Pfam [3-6]. These databases are usually constructed by studying the set of protein sequences that are known to have certain functions and extracting the conserved sequence motifs that are believed to be responsible for their functions. However, the number of motifs that can be extracted in this way is quite limited, and it has been a major challenge to discover new motifs. With the advent of algorithms and programs that can automatically discover sequence motifs from any given set of sequences [1,2,7-9], it is possible to mine a large number of sequences to find novel motifs without necessarily knowing their functions and to compile a dictionary of biological language accordingly. An essential task involved in the compilation of such a dictionary is to determine the function (the meaning) of newly identified protein motifs.
Here, we report the development of general methods that can be used to predict the function of protein motifs by mining the knowledge in the Gene Ontology. The Gene Ontology™ (GO) project [10] is a concerted effort by the bioinformatics community to develop a controlled vocabulary (GO terms) and to annotate biological sequences with the vocabulary. A biological sequence is described in three different aspects, namely, biological process, cellular component, and molecular function. The standardized annotation with a controlled vocabulary is the main advantage of Gene Ontology, which facilitates both communication among scientists and information management. Both the number of annotated sequences and the number of GO terms associated with individual sequences in the Gene Ontology database are increasing very rapidly. Moreover, natural language processing techniques are also being used to automatically annotate gene products with GO terms [11,12]. Thus, it can be foreseen that the annotations of protein sequences in the Gene Ontology database will become more and more detailed, and have great potential to be used as an enriched knowledge base of proteins.

The basic approach for determining the function of a motif is to study all the sequences that contain the motif (pattern). Intuitively, if all the functional aspects of the sequences matching a motif are known, we should be able to learn which function is most likely encoded by the motif, based on the assumption that every protein function is encoded by an underlying motif. This means that we would need a knowledge base of protein sequences, in which the functions of a sequence are annotated in as much detail as possible. In addition, we would also need prediction methods that can work on a given set of protein sequences and their functional descriptions to reliably attribute one of the functions to the motif that matches these sequences. To determine the function of any novel motif, we would first search the protein knowledge base to retrieve all the functional descriptions of the proteins containing the motif, and then use such prediction methods to decide which function is encoded by the motif.

In this research, we use the Gene Ontology database as our protein knowledge base and explore statistical methods that can learn to automatically assign biological functions (in the form of GO terms) to a protein motif. Our approach is based on the observation that the Gene Ontology database contains protein sequences and the GO terms associated with the sequences. In addition, the database also contains information about known protein motifs, e.g., the PROSITE patterns that match the sequences. Thus, the protein sequences in the database provide a sample of potential associations of GO terms with motifs, among which some are correct (i.e., the GO term definition matches the functional description of the motif) and some are not. This provides us with an opportunity to perform supervised learning to identify discriminative features and use these features to predict whether a new association is correct or not. The current Gene Ontology database is implemented with a relational database system, which allows one to perform queries like "retrieve all GO terms associated with the sequences that match a given motif" and vice versa. However, the database usually returns more than one GO term that may or may not describe the function of the motif in the query.
Thus, we need methods to disambiguate which GO term describes the function of the motif (i.e., assign a GO term to a motif) and to determine how confident we are in the assignment. We use statistical approaches to learn from known examples and cast the disambiguation task as a classification problem. Furthermore, the probability output by the classifier can be used to represent its confidence in the assignment. Recently, Schug et al [13] published their results on automatically associating GO terms with protein domains from two motif databases, ProDom and CDD [14,15]. Their approach is to use protein domains to BLAST [16] search against the GO database and assign the molecular function GO term from the sequence matching the domain with the most significant p-value. They found that, in the database they worked with, most sequences only had one functional GO term. Therefore, they could assign the GO term of a sequence to the motif that matched with the highest score with fairly good accuracy. However, due to the restrictive assumption that each sequence has only one GO term, their approach cannot address the potential problem that a sequence matching a motif has multiple associated GO terms, which is now a common case, and how to resolve such ambiguity.

The data set
We use the May 2002 release of the Gene Ontology sequence database (available online [17]), which contains 37,331 protein sequences. For each sequence, a set of GO terms assigned to the sequence is identified, and a set of PROSITE patterns that match the same sequence is also retrieved. If both sets are nonempty, all the possible pattern-term combinations formed by the two sets are produced. Table 1 shows an example association of GO terms with PROSITE motifs. The protein MGI|MGI:97380 from the database is assigned seven GO terms and the sequence also matches two PROSITE patterns. Thus, as the cross product of the two sets, 14 distinct associations are produced. Note that the same pattern-term association may be observed multiple times within the database. A total of 4,135 GO terms, 1,282 PROSITE motifs, and 2,249 distinct PROSITE-GO associations have been obtained from this database.

Table 1. GO terms and PROSITE patterns for the protein MGI|MGI:97380

Using the information stored in the Gene Ontology and PROSITE, we manually judged a set of 1,602 cases of distinct PROSITE-GO associations to determine whether the association is correct or not. The PROSITE-GO association set has been judged in two different ways. One way is to label an association as correct if and only if the definition of the GO term and the PROSITE motif match perfectly according to the annotator. Gene Ontology has the structure of a directed acyclic graph (DAG) to reflect the relations among the terms. Most terms (nodes in the graph) have parent, sibling and child terms to reflect the relation of "belonging to" or "subfamily". The second way of judging a GO-PROSITE association is to label an association as correct if the GO term and the PROSITE motif are either an exact match or the definitions of the GO term and PROSITE motif are within one level of difference in the tree, i.e., the definition of the GO term and the PROSITE motif have either a parent-child relation or a sibling relation according to the GO structure. Thus we have two sets of labeled PROSITE-GO associations, the perfect match set and the relaxed match set (with neighbors). Both sets are further randomly divided into training (1128 distinct associations) and test (474 distinct associations) sets.
Since the test sample size is fairly large, the variance of the prediction accuracy can be expected to be small. Thus we have not considered any alternative split of training and test sets.

Measuring term-motif associations
Intuitively, we may think of the GO terms assigned to a protein as one description of the function of a protein in one language (human understandable) and the motifs contained in the protein sequence as another description of the same function in a different language (biological). We would like to discover the "translation rules" between these two languages. Looking at a large number of annotated sequences, we hope to find which terms tend to co-occur with a given motif pattern. Imagine that, if the sequences that match a motif are all assigned a term T, and none of the sequences that do not match the motif is assigned the term T, then it is very likely that the motif pattern is encoding the function described by term T. Of course, this is only an ideal situation; in reality, we may see that most, but not all, of the proteins matching a motif pattern are assigned the same term, and also some proteins that do not match the motif may have the same term. Thus, we want to have a quantitative measure of such correlation between GO terms and motif patterns.

A commonly used association measure is mutual information (M.I.), which measures the correlation between two discrete random variables X and Y [18]. It basically compares the observed joint distribution p(X = x, Y = y) with the expected joint distribution under the hypothesis that X and Y are independent, which is given by p(X = x)p(Y = y). A larger mutual information indicates a stronger association between X and Y, and I(X;Y) = 0 if and only if X and Y are independent. For our purpose, we regard the assignment of a term T to a sequence and the matching of a sequence with a motif M as two binary random variables. The involved probabilities can then be empirically estimated based on the number of sequences matching motif M (NM), the number of sequences assigned term T (NT), the number of sequences both matching M and assigned T (NT-M), and the total number of sequences in the database. Table 2 shows the top five terms that have the highest mutual information with PROSITE motif PS00109, which is the specific active-site signature of protein tyrosine kinases, along with the related counts.

Table 2. Five GO terms associated with PROSITE pattern PS00109 (tyrosine kinase signature)

We set out to test whether we can use mutual information as a criterion to assign a GO term to a PROSITE motif. One approach is to use a mutual information cutoff value c to define a simple decision rule: assign term T to motif M if and only if I(T;M) ≥ c. For a given cutoff c, the precision of term assignment is defined as the ratio of the number of correct assignments to that of the total assignments according to the cutoff c. In Figure 1, we plot the precision at different mutual information cutoff values. It is easy to see that, in general, with a higher (i.e., stricter) cutoff, the precision is higher; indeed, the Pearson correlation coefficient between the precision and the cutoff is 0.837. This suggests that mutual information is indeed a good indicator of the correlation between a GO term and a motif.

Figure 1. Correlation of mutual information cutoff and term assignment precision. Different M.I. cutoff values are used to assign GO terms to motifs. The precision of assignment is plotted vs. the M.I. cutoff value.
The Pearson correlation coefficient between the precision and the cutoff is 0.837.

However, a drawback of such an approach is that, given a motif, sometimes many observed motif-term associations can have mutual information above the cutoff value, making it difficult to decide which pair is correct. In other cases, the mutual information of the observed motif-term pairs may all be below the cutoff value, but we would still like to predict which terms are most likely to be appropriate for the motif. To address this problem, we can use a different cutoff strategy and adopt a decision rule that assigns a GO term to a motif based on the ranking of mutual information, which is a common technique used in information retrieval and text categorization [19]. More specifically, for each PROSITE motif M in the annotated data set, all observed motif-term associations containing M are retrieved and ranked according to mutual information, and then the term that has the highest mutual information is assigned to M. Alternatively, if we use this approach to facilitate human annotation, we can relax the rule to include GO terms that have lower ranks, thus allowing multiple potential GO terms to be assigned to a motif, assuming that a human annotator would be able to further decide which is correct. In this method, the key to making a decision is to select a cutoff rank that covers as many correct associations as possible (high sensitivity) while also retrieving as few incorrect associations as possible (high specificity). The optimal cutoff can be determined by the desired utility function. Figure 2 shows the Receiver Operating Characteristic (ROC) curve [20] of assigning GO terms to PROSITE motifs in our data set according to the rank of motif-term associations. The two curves are for the two different labeled association sets (i.e., perfect match and relaxed match), respectively. The areas under the two curves are 0.782 and 0.735 respectively, which can be considered fairly good. We also plot the precision, also referred to as positive predictive value, in panel B. The precision is calculated as the percentage of predicted assignments that are truly correct. As shown in panel B, if we assign the GO terms at the top rank for all PROSITE motifs, 50–70% of the cases will be predicted correctly. As we loosen the threshold to include lower ranked terms, we would assign more terms to a motif, and as expected, precision would decline. But even at rank 5, we still have a precision of about 50%. Also, as shown in Table 2, with respect to the PROSITE pattern of tyrosine kinase (PS00109), most of the top five associated GO terms are related to kinase activity, and the term with the highest rank is the most specific.

Figure 2. Assigning GO terms to motifs according to rank of M.I. A. ROC curves of assigning GO terms to motifs according to rank of mutual information. The filled circle is for the perfect match data set, and the area under the curve is 0.782. The empty triangle is for the relaxed match data set, and the area under the curve is 0.735. The numbers next to data points indicate cut-off ranks of decision rules. The diagonal line corresponds to a random model. B. Precision of rules based on different mutual information cutoff ranks. Filled bars are results on the perfect match data set. Empty bars are results on the relaxed match data set.
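To make the counting and ranking procedure concrete, here is a minimal sketch in Python (our own illustration, not code from the study; the function names and example counts are hypothetical). It computes the empirical mutual information I(T;M) for a term-motif pair from the four counts described above and ranks the candidate terms observed with a motif.

```python
import math

def mutual_information(n_total, n_t, n_m, n_tm):
    """Empirical M.I. between two binary events: 'sequence is assigned
    term T' and 'sequence matches motif M', estimated from counts."""
    # Counts for the four cells of the 2x2 contingency table.
    cells = {(1, 1): n_tm,
             (1, 0): n_t - n_tm,
             (0, 1): n_m - n_tm,
             (0, 0): n_total - n_t - n_m + n_tm}
    mi = 0.0
    for (t, m), n_xy in cells.items():
        if n_xy == 0:
            continue  # treat 0 * log(0) as 0
        p_xy = n_xy / n_total
        p_t = (n_t if t else n_total - n_t) / n_total
        p_m = (n_m if m else n_total - n_m) / n_total
        mi += p_xy * math.log(p_xy / (p_t * p_m))
    return mi

def rank_terms_for_motif(candidates, n_total):
    """candidates: list of (go_term, n_t, n_m, n_tm) observed with one motif.
    Returns the terms sorted by decreasing mutual information."""
    scored = [(term, mutual_information(n_total, n_t, n_m, n_tm))
              for term, n_t, n_m, n_tm in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical counts for two candidate terms of one motif:
candidates = [("term_specific", 40, 50, 38),
              ("term_general", 900, 50, 45)]
print(rank_terms_for_motif(candidates, n_total=37331))
```

As the section notes, a rare, highly specific term tends to come out on top of such a ranking even when it is supported by very few sequences, which is one motivation for the regression model discussed next.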
Predicting motif functions using logistic regression
While the mutual information measure appears to give reasonable results, there are three motivations for exploring more sophisticated methods. First, the mutual information value is only meaningful when we compare two candidate terms for a given motif pattern; it is hard to interpret the absolute value. While a user can empirically tune the cutoff based on some utility preferences, it would be highly desirable to attach some kind of confidence value or probability of correctness to all the potential candidate motif-term associations. Second, there may be other features that can also help predict the function (term) for a motif. We hope that the additional features may help a classifier to further separate correct motif-term assignments from wrong ones. Third, there exist many motifs with known functions (e.g., those in the PROSITE database), and it is desirable to take advantage of such information to help predict the functions of unknown motifs. This means that we need methods that can learn from such information. In this section, we show that the use of logistic regression can help achieve all three goals. Specifically, we use logistic regression to combine the mutual information with other features and produce a probability of correct assignment. The motifs with known functions serve as training examples that are needed for estimating the parameters of the regression function.

Feature extraction and parameter estimation
We now discuss the features to be used in logistic regression, in addition to the mutual information discussed in the previous section. The goal is to identify a set of features that is helpful for determining whether the association of a GO term with a motif is correct or not, without requiring specific information regarding the functions of the GO term and the motif. For a distinct motif-term pair, we collect the following frequency-based features:
(1) The number of sequences in which the GO term (T) and PROSITE motif (M) co-occur (NT-M).
(2) The number of sequences in which T occurs (NT).
(3) The number of sequences in which M occurs (NM).
(4) The number of distinct GO terms (G) seen associated with M (NG|M).
(5) The number of distinct PROSITE patterns (P) seen associated with T (NP|T).

In addition, we also consider, as a feature, the similarity of the sequences that support a motif-term pair. Intuitively, if a motif is conserved among a set of diverse sequences, it is more likely that the motif is used as a building block in proteins with different functions. Thus, the average pair-wise sequence similarity of the sequence set can potentially be used as a heuristic feature in the logistic regression classifier. Given a set of sequences, we use a BLAST search engine to perform pair-wise sequence comparisons. We devised a metric AvgS to measure the averaged pair-wise sequence similarity per 100 amino acids (see Methods) and use it as an input feature for the classifier. To cast the prediction problem as a binary classification problem, we augment our data set of motif-term pairs with a class label variable Y, so that Y = 1 means correct assignment and 0 means incorrect. We represent a motif-term pair by a vector of features X = (X[1], ..., X[k]), where k is the number of features. The seven features/variables used in our experiments are NT-M, NT, NM, NG|M, NP|T, AvgS, and M.I.
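As an illustration of how the frequency-based features can be derived, the sketch below (ours; the data structure and names are hypothetical, not from the paper) computes the five counts for one motif-term pair from per-sequence annotations. The AvgS and M.I. values would be appended to the same vector.

```python
def frequency_features(motif, term, annotations):
    """annotations: dict mapping sequence id -> (set of PROSITE motifs,
    set of GO terms) for that sequence.  Returns [N_T-M, N_T, N_M,
    N_G|M, N_P|T] for the given motif-term pair."""
    seqs_with_m = {s for s, (motifs, _) in annotations.items() if motif in motifs}
    seqs_with_t = {s for s, (_, terms) in annotations.items() if term in terms}

    n_tm = len(seqs_with_m & seqs_with_t)   # co-occurrence count
    n_t = len(seqs_with_t)
    n_m = len(seqs_with_m)
    # Distinct GO terms seen on sequences matching the motif (N_G|M).
    n_g_given_m = len({t for s in seqs_with_m for t in annotations[s][1]})
    # Distinct PROSITE patterns seen on sequences assigned the term (N_P|T).
    n_p_given_t = len({m for s in seqs_with_t for m in annotations[s][0]})
    return [n_tm, n_t, n_m, n_g_given_m, n_p_given_t]

# Tiny hypothetical example:
annotations = {
    "seq1": ({"PS_A", "PS_B"}, {"GO_x", "GO_y"}),
    "seq2": ({"PS_A"}, {"GO_x"}),
    "seq3": ({"PS_B"}, {"GO_y", "GO_z"}),
}
print(frequency_features("PS_A", "GO_x", annotations))  # -> [2, 2, 2, 2, 2]
```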
Suppose we have observed n motif-term pairs; then we have n samples (y[i], x[i]), i = 1, 2, ..., n, where y[i] is the correctness label and x[i] is the feature vector for the corresponding motif-term pair. Our goal is to train a classifier which, when given a motif-term pair and feature vector X, would output a label Y with value 1 or 0. Alternatively, we can also consider building a classifier which outputs a probability that Y = 1 instead of a deterministic label. Thus, our task is now precisely a typical supervised learning problem; many supervised learning techniques can potentially be applied. Here, we choose to use logistic regression as our classification model because it has a sound statistical foundation, gives us a probability of correct assignment, and can combine our features naturally without any further transformation.

In order to build a model only with the truly discriminative features, it is a common practice to perform feature selection for logistic regression. We use a combined forward and backward feature selection algorithm. Starting from the intercept, we sequentially add features into the model and test if the log-likelihood increases significantly; we keep the current feature if it does. After the forward selection, we sequentially drop features from the model to see whether dropping a feature would significantly reduce the log-likelihood of the model; if it does not, we exclude the feature from the model, otherwise we keep it and continue. When testing the significance, we use the likelihood ratio statistic G, given by G = 2[l(D|β[f]) - l(D|β[-f])], where l(D|β[f]) and l(D|β[-f]) are the log-likelihoods of the model with feature f and the model without feature f, respectively. Since we add or drop one feature at a time, G follows a χ^2 distribution with one degree of freedom [21]. We use a p-value of 0.1 as the significance threshold. Figure 3 illustrates the procedure of feature selection. We found that the average pair-wise similarity of the supporting sequence set does not contribute to the model significantly and so excluded it; all other variables contribute to the model significantly. The results of parameter estimation are shown in Table 3.

Figure 3. The algorithm for feature selection.

Table 3. Results of logistic regression parameter estimation

Logistic regression classification
After fitting the model using the training set, we tested the model on the test set, i.e., we used the model to compute an output p(Y[i] = 1|X[i]) for each test case. Table 4 shows an example of the computed conditional probability of correct assignment for the GO terms associated with the protein motif PS00383, which is the "tyrosine specific protein phosphatases signature and profiles". Table 4 lists the top 5 GO terms observed to be associated with the motif, ranked according to the conditional probability returned by the logistic regression.

Table 4. Top 5 GO terms associated with the motif PS00383, ranked according to the conditional probability of correctness of association. Columns 2-7 list the feature vector for each motif-GO association; the conditional probability p(Y = 1|X) is calculated with the trained model, and the true classes are listed in the two rightmost columns of the table. The definitions of the GO terms are listed at the bottom of the table.
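A compact way to reproduce this kind of fit is sketched below. It is only an illustration under our own assumptions (it uses statsmodels rather than whatever software the authors used, and the data matrix is randomly generated): it fits the logistic model by maximum likelihood, performs one likelihood-ratio comparison of the kind used in the feature selection, and returns p(Y = 1|X) for new motif-term pairs.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

# Hypothetical training data: each row is a motif-term pair, columns are the
# selected features (e.g., N_T-M, N_T, N_M, N_G|M, N_P|T, M.I.), already
# normalized to zero mean and unit variance; y = 1 marks a correct association.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = (rng.random(200) < 0.3).astype(int)

full = sm.Logit(y, sm.add_constant(X)).fit(disp=0)

# Likelihood-ratio test for dropping the last feature:
reduced = sm.Logit(y, sm.add_constant(X[:, :-1])).fit(disp=0)
G = 2 * (full.llf - reduced.llf)      # G = 2[l(full) - l(reduced)]
p_value = chi2.sf(G, df=1)            # compared against chi-square with 1 d.f.
print(f"G = {G:.3f}, p = {p_value:.3f}")

# Probability of a correct assignment for new motif-term pairs:
X_new = rng.standard_normal((3, 6))
print(full.predict(sm.add_constant(X_new, has_constant='add')))
```

Scikit-learn's LogisticRegression would give similar predicted probabilities, but statsmodels is convenient here because it exposes the log-likelihood needed for the likelihood-ratio statistic directly.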
As the results from the logistic regression are conditional probabilities that an association of a GO term with a given motif is correct, we need to decide the cutoff threshold for making decisions. We calculate the sensitivity and specificity for different thresholds from 0.1 to 0.9, in steps of 0.1, and plot the ROC curves as shown in Figure 4. The areas under the logistic regression ROC curves are 0.875 and 0.871 for the perfect match and relaxed match test sets, respectively. The precision of the rules is plotted in panel B, where we see that, as the rule becomes more stringent (using a higher threshold), predictions generally become more accurate. We noticed that the precision on the perfect match test set is more variable. This is probably due to the fact that this data set has fewer cases with Y = 1; thus, a small change in the number of cases introduces a large change in percentage. For example, when the threshold is set at 0.9, only three cases are covered by the rule and two of them are correct, so the percent correct drops to 66%.

Figure 4. Comparison of results with probability and M.I. A. ROC curves for classifying motif-term associations at different probability thresholds. Filled circles are the results on the perfect match test set, with an area under the curve of 0.8715. Empty triangles are on the relaxed match test set, with an area under the curve of 0.871. Data points correspond to thresholds of p(Y = 1|X) from 0.9 to 0.1 (from left to right) with a step of 0.1. B. Precision (positive predictive value) at different probability cutoffs, where solid bars are the results on the perfect match test set and open bars are for the relaxed match test set. C. ROC curve for decision rules based on different M.I. cutoff thresholds, with an area under the curve of 0.816. D. Precision at different M.I. cutoffs.

To see whether the additional features are useful, we also performed ROC analysis using different mutual information cutoff thresholds on the perfect match test set. The result is shown in Figure 4, panels C and D. We see that using mutual information alone performs almost as well as logistic regression with additional features. However, the area under the curve (0.816) is smaller than that of logistic regression (0.875), indicating that logistic regression does take advantage of other features and has more discriminative power than mutual information alone. The coefficients β[1], β[2] and β[3] for the three features NT-M, NT and NM, which are also involved in the calculation of mutual information, have a very interesting interpretation: they indicate that the roles of these three variables in the logistic regression model are actually to compromise the effect of mutual information! Indeed, according to the formula of the mutual information, a strong correlation corresponds to a high NT-M, low NT, and low NM, but the coefficients shown in Table 3 clearly suggest the opposite. We believe that this actually corrects one drawback of mutual information: over-emphasizing the correlation but ignoring the support or the strength of evidence. For example, if a term is rare, say it occurs only once in the data set, then it would have a very high mutual information value (due to an extremely low NT) with respect to any pattern matched by the sequence to which the term is assigned. But, intuitively, one occurrence is very weak evidence, and at least should be regarded as weaker than when we have a term occurring 10 times in total and co-occurring 9 times with the same motif.
The key issue here is that mutual information only reflects the correlation between variables but does not take into account the strength of evidence, and therefore it tends to over-favor the situation where there is a perfect correlation but very little evidence. However, the number of sequences in which the co-occurrence happens, which is called the "support" for the association, is also very important. The coefficients for the other two parameters, NG|M and NP|T, are also meaningful. Their negative signs indicate that the more terms a motif co-occurs with, or the more motifs a term co-occurs with, the less likely a particular association is correct. This also makes sense intuitively, since all those co-occurring terms can be regarded as "competing" for a candidate description of the motif's function, so the more terms a motif is associated with, the stronger the competition, and thus the smaller the chance that any particular term is a correct description of the function. Thus, the logistic regression model not only performs well in terms of prediction accuracy but also gives meaningful and logically plausible coefficient values.

In this paper, we explore the use of the Gene Ontology knowledge base to predict the functions of protein motifs. We find that the mutual information can be used as an important feature to capture the association between a motif and a GO term. Evaluation indicates that, even when used alone, the mutual information could be useful for ranking terms for any given motif. We further use logistic regression to combine mutual information with several other statistical features and to learn a probabilistic classifier from a set of motifs with known functions. Our evaluation shows that, with the addition of new features and with the extra information provided by the motifs with known functions, logistic regression can perform better than using the mutual information alone. This is encouraging, as it shows that we can potentially learn from the motifs with known functions to better predict the functions of unknown motifs. This means that our prediction algorithm can be expected to further improve as we accumulate more and more known motifs. Although we have so far only tested our methods on the known motifs, which is necessary for the purpose of evaluation, the method is most useful for predicting the functions of new and unknown motifs. For future work, we can build a motif function prediction system and apply our algorithm to many candidate new motifs, e.g., those discovered using TEIRESIAS, SPLASH or other programs. This would further enable us to perform data mining on the Gene Ontology database in several ways. For example, we can hypothesize the functions of a large number of novel motifs probabilistically; then we will be able to answer queries such as "find the five patterns that are most likely associated with the GO term tyrosine kinase". This is potentially very useful because it is not uncommon that substantial knowledge about the functions and sub-cellular location of a given protein is available even though a structural explanation for the functions remains obscure. On the other hand, we believe that our methods will facilitate identifying potentially biologically meaningful patterns among the millions of patterns returned by pattern searching programs. A sequence pattern that is associated with a certain GO term with high M.I. or probability is more likely to be a meaningful pattern than one with low scores.
Furthermore, our methods can also be used in automatic annotation of novel protein sequences, as suggested in Schug et al and Rigoutsos et al [9,13,22]. Our methods provide different approaches to associate sequence patterns with functional descriptions. After associating functional descriptions (in the form of GO terms) with motifs, we can determine which motifs a novel protein sequence matches and correspondingly transfer the functional descriptions associated with the motifs to the sequence. One key advantage of our methods is that the probability of correctness for a GO-motif association can be considered as a confidence or uncertainty. This enables one to optimize the automatic annotation according to Bayesian decision theory and minimize the risk of incorrect annotation. Having stated the potential uses of our approaches, we also realize that there are some limitations to our methods. For example, in order to predict the function of a newly identified sequence pattern correctly, we would require the functional annotations of the sequences in the GO database to be complete and accurate, which may not always be the case. In this paper, we mainly used the motifs with known functions to evaluate the capability of the methods developed in this research. Our results show that the methods work well with known sequence patterns. Currently, the annotation of motif function with GO terms is carried out manually at the European Bioinformatics Institute (the GOA project). Such an approach is warranted because human annotation is more accurate than automatic annotation. However, as the amount of information regarding protein functions accumulates and a large number of new potential motifs are discovered, it will be very labor intensive to annotate the potential associations of protein functions and protein patterns. By then, the methods studied in this research will potentially prove useful for discovering the underlying protein motifs that are responsible for the newly annotated function. For example, the methods can be used as prescreening to narrow down to the most likely associations of protein function and motifs, thus facilitating human annotation.

In summary, we have developed methods that disambiguate the associations between Gene Ontology terms and protein motifs. These methods can be used to mine the knowledge contained in the Gene Ontology database to predict the function of novel motifs, discover the basis of a molecular function at the primary sequence level, and automatically annotate the function of novel proteins.

Mutual information
Mutual information is defined as

I(X;Y) = Σ_x Σ_y p(X = x, Y = y) log[ p(X = x, Y = y) / (p(X = x) p(Y = y)) ],

in which the probabilities p(X = x, Y = y), p(X = x) and p(Y = y) can be empirically estimated from the data by counting occurrence/co-occurrence followed by normalization.

Sensitivity and specificity
The sensitivity and specificity of the rules are calculated as

Sensitivity = TP / (TP + FN) and Specificity = TN / (TN + FP),

where TP (True Positive) is the number of associations labeled as correct among the retrieved motif-term pairs meeting the ranking cutoff criteria, FN (False Negative) is the number of associations labeled as correct but not retrieved, TN (True Negative) is the number of associations labeled as incorrect and not retrieved, and FP (False Positive) is the number of associations labeled as incorrect but retrieved.
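These quantities are simple to compute once labels and predicted probabilities are in hand; the sketch below (ours, with hypothetical labels and scores) traces out the sensitivity/specificity pairs that form an ROC curve such as those in Figures 2 and 4.

```python
def confusion_counts(labels, probs, threshold):
    """labels: 1 if the association was judged correct, else 0.
    probs: p(Y = 1|X) from the classifier; a pair is retrieved when its
    probability is at or above the threshold."""
    tp = sum(1 for l, p in zip(labels, probs) if l == 1 and p >= threshold)
    fn = sum(1 for l, p in zip(labels, probs) if l == 1 and p < threshold)
    tn = sum(1 for l, p in zip(labels, probs) if l == 0 and p < threshold)
    fp = sum(1 for l, p in zip(labels, probs) if l == 0 and p >= threshold)
    return tp, fn, tn, fp

def roc_points(labels, probs, thresholds):
    points = []
    for t in thresholds:
        tp, fn, tn, fp = confusion_counts(labels, probs, t)
        sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
        specificity = tn / (tn + fp) if (tn + fp) else 0.0
        points.append((t, sensitivity, specificity))
    return points

# Hypothetical test labels and predicted probabilities:
labels = [1, 0, 1, 1, 0, 0, 1, 0]
probs = [0.92, 0.40, 0.75, 0.55, 0.30, 0.65, 0.85, 0.10]
for t, sens, spec in roc_points(labels, probs, [0.9, 0.7, 0.5, 0.3, 0.1]):
    print(f"threshold {t:.1f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```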
Averaged sequence similarity
Calculation of the average pair-wise sequence similarity per 100 amino acids (AvgS) of a sequence set is as follows:
where S[ij] is the raw BLAST pair-wise similarity score between sequence i and sequence j; L[i] and L[j] are the lengths of sequences i and j, respectively; n is the number of sequences in the set; and δ(i, j) is a delta function which equals 1 if i = j and 0 otherwise.

Logistic regression
The logistic regression model is a conditional model that assumes the following linear relationship between p(Y = 1|X) and X[1], ..., X[k]:

log[ p(Y = 1|X) / (1 - p(Y = 1|X)) ] = β[0] + β[1]X[1] + ... + β[k]X[k],

where β = (β[0], β[1], ..., β[k]) is the parameter vector. We can fit the logistic regression model (i.e., estimate the parameters) using the Maximum Likelihood method, essentially setting the parameters to values at which the likelihood of the observed data is maximized (Hosmer and Lemeshow 1989, Hastie et al 2001). In our experiments, we use the iteratively reweighted least squares (IRLS) algorithm [23] to fit the logistic regression model. All features are normalized to zero mean and unit variance before training.

This research is partially supported by National Library of Medicine (NLM) training grants to Lu, X. (No. 3 T15 LM07059-15S1) and Gopalakrishnan, V. (No. 5 T15 LM07059-15) and an NLM grant to Buchanan, B.G. (No. LM06759). We would like to thank Drs. Roger Day, Milos Hauskrecht and Gregory Cooper for insightful discussions.
{"url":"http://www.biomedcentral.com/1471-2105/5/122","timestamp":"2014-04-16T11:22:59Z","content_type":null,"content_length":"120362","record_id":"<urn:uuid:60c0f1ab-f06e-45ea-8371-be24f7ad9bb5>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
Inner Product Rule May 7th 2009, 07:12 AM #1 Inner Product Rule I would appreciate your help on the following problem: Given the vectors $\vec{u}=(2,-3,4,5)$ and $\vec{v}=(3,4,-7,8)$ 1) Viewing the Inner Product Rule as a type of two vector transform, what can you say about $\vec{v}$? 2) Would this pseudo inner product rule correspond to a one-to-one transform?
{"url":"http://mathhelpforum.com/advanced-algebra/88002-inner-product-rule.html","timestamp":"2014-04-16T20:26:21Z","content_type":null,"content_length":"29825","record_id":"<urn:uuid:25bd516e-54f3-4e84-956c-bbf60ffee141>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Revised paper: Behavioral Equivalence in the Polymorphic Pi-Calculus

We are pleased to announce a revised and expanded version of our 1997 study of the polymorphic pi-calculus, available through: For those familiar with earlier versions of this paper, the main improvements are as follows:
* a major new section on a labeled-bisimulation presentation of the polymorphic pi-calculus (the original paper developed only barbed bisimulation), and
* a new section extending our observations on "information leakage" in calculi with both polymorphism and aliasing. We show how the small examples in the original version of our paper can be extended to allow a form of ad-hoc overloading in ML.
Benjamin Pierce
Davide Sangiorgi

BEHAVIORAL EQUIVALENCE IN THE POLYMORPHIC PI-CALCULUS
Benjamin Pierce and Davide Sangiorgi
We investigate parametric polymorphism in message-based concurrent programming, focusing on behavioral equivalences in a typed process calculus analogous to the polymorphic lambda-calculus of Girard and Reynolds. Polymorphism constrains the power of observers by preventing them from directly manipulating data values whose types are abstract, leading to notions of equivalence much coarser than the standard untyped ones. We study the nature of these constraints through simple examples of concurrent abstract data types and develop basic theoretical machinery for establishing bisimilarity of polymorphic processes. We also observe some surprising interactions between polymorphism and aliasing, drawing examples from both the polymorphic pi-calculus and
{"url":"http://www.seas.upenn.edu/~sweirich/types/archive/1999-2003/msg00120.html","timestamp":"2014-04-17T18:28:23Z","content_type":null,"content_length":"4186","record_id":"<urn:uuid:4065978b-9d86-4979-9f23-3d13bf8d98f9>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Q&A with the author of The Young Child and Mathematics (2nd ed.) Juanita Copley responded to a selection of questions and comments during an online event from April 25–29, 2011. Read the questions and her responses below! My book The Young Child and Mathematics focuses on children ages 3 through 8 and their mathematical learning. The placement of the phrase young child before the word mathematics in the book's title is not accidental. It comes first because I believe that the child should be the focus of early education. Like the first edition, this is a practical book for teachers that communicates the main ideas of the National Council of Teachers of Mathematics' Standards within the context of effective, developmentally appropriate practice. This second edition also reflects recent developments in math education in a wealth of vignettes from classrooms, activity ideas, and strategies for teaching young children about math processes and content. Let's explore together how to readily and enjoyably make mathematics an integral part of early childhood classrooms each and every day. I look forward to considering your questions and learning from your own classroom vignettes and examples. — Juanita Copley

Submitted by: Mary on Apr 29, 2011 I know a lot about babies, language and early literacy but not much about babies and math. What is there to know about babies and math?

Submitted by: Susan Friedman on Apr 29, 2011 Many thanks to Juanita Copley for her thoughtful and interesting responses and thanks to all who posted questions. I've learned a lot from this Q&A. Thanks to all who participated.

Submitted by: Tina K on Apr 28, 2011 Hi Juanita, I am an education coordinator for a Head Start program. We recently purchased a High/Scope math curriculum that includes teacher-guided activities, and I'm already feeling some resistance from staff about how the new curriculum will be implemented. One of the teachers has said that she prefers using a child-directed approach to math (e.g., have baskets with various activities for counting, patterning, etc. that the children select on their own, complete on their own or in pairs, and the teachers would simply check each child's work afterward) and that these "teacher-directed" math activities in the new curriculum are not child-friendly and don't allow the children choices. We're going to go over this at our classroom team meeting this Friday. Do you have any advice on how to approach this situation? I feel that, philosophically, staff and I are looking at teacher-guided (not necessarily teacher-directed) math activities from two completely different angles. Thanks!

Submitted by: Juanita V Copley on Apr 29, 2011 I understand your dilemma. I grew up in the age of all child-directed activities and I admit that I have adapted my philosophy over the years to accommodate both child-directed AND teacher-guided activities. As a teacher, I intentionally provide experiences that will engage children AND teach to the standards and/or guidelines that have been shown to be foundational to their education. I find that few early childhood teachers ignore literacy standards and that they intentionally teach letter sounds and a variety of literacy outcomes. I have also observed that often these same teachers believe that mathematics is only involved if children choose the activities.
From research studies (see the recent NRC volume), we know that children who experience mathematics early are more successful later; in fact, a recent set of longitudinal studies revealed that early math was a significantly better predictor of success in school than any other indicator. I have found that young children enjoy mathematics and that my role as a teacher is to intentionally guide all children (those who choose math activities and those who do not) to learn the mathematics that is foundational to all students.

Submitted by: Ellen on Apr 28, 2011 There's a lot in the news about STEM. What are ways teachers can relate math to science, engineering, and technology that would make sense for preschool? For early elementary? What's the importance of approaching these disciplines in a coordinated way?

Submitted by: Juanita V Copley on Apr 29, 2011 I was an elementary science specialist in a school district for 6 years and I found that it was impossible to teach science without mathematics. In my professional life, I have often presented with Karen Worth, the author of many early childhood books. When we have discussed our presentations, we have found that we present the same topics (e.g., problem solving, block building, measurement, data analysis, and patterning) and that the integration of these topics simply makes sense and provides authentic learning in real contexts. I was privileged to work on the Curious George series as a mathematics consultant a few years ago. One-third of that series was math related, one-third was science related, and one-third was technology related. When I watch them now, I often have difficulty remembering which ones were math related (the ones I reviewed) and which ones were science or technology related! Do I believe these topics should be integrated in a coordinated way in preschool or early elementary? My answer is simply… yes… sometimes… when it makes sense! Forced integration does not make sense. Let me give a rather silly example of what I mean. Many years ago a school district asked me to consult with them on their integrated program. I visited with the kindergarten team. A local farm had helped them create a rabbit community in an inside courtyard. I was amazed at what children learned from their project. The science and literacy activities as well as the caregiving and responsibility attributes toward the rabbits were outstanding! Unfortunately, the mathematics objectives were poorly addressed. Instead, I found that they had made rabbit flashcards with numerals and matching dots to cover the mathematics objectives. Realistically, there certainly are many things that could be counted or measured with rabbits. I only wish the important mathematics had been considered in this approach.

Submitted by: Daniel on Apr 28, 2011 I'm curious about the relationship between math engagement when children are young and their being "good" at math later on. As older students are tracked into more difficult math classes in middle school, there are more boys in the advanced math classes than there are girls. The same goes for boys and girls in technology related courses - there are fewer girls in these technology classes than boys. What are ways early childhood educators can make sure to engage girls in math that might have an impact in the later years?

Submitted by: Juanita V Copley on Apr 29, 2011 I wish I could give a good answer to this question. In the early grades, I do not see a difference between girls and boys in math engagement.
Both boys and girls seem to be equally engaged in the math I teach. With that said, I am curious about the differences that are seen as students get older. In my research studies, I have investigated what Carol Dweck would call "learning-oriented" students and "performance-oriented" students in problem-solving situations. I have found that more girls (especially those who are gifted) appear to be performance-oriented than boys. Performance-oriented students often overestimate their failures and underestimate their successes and are in danger of becoming "learned helpless." That idea may provide some of the answer to your question. Perhaps you could investigate your question and send me what you find out… I would love to learn from your work!

Submitted by: Deborah Murphy on Apr 26, 2011 Dear Ms Copley, I just wanted to say how much I am enjoying using your newly revised edition. I have used this book for several years with my ECE community college students. Your DVD that accompanies this new edition is fantastic. I think it really models how teachers need to be thinking when they are working with children doing these math games and activities. I feel my students this term are having a higher level of understanding about being math teachers which is quite intimidating for many of them.... Thanks so much ... and my new favorite word ( & for my students as well...) is subitizing!! We are really enjoying your new edition!

Submitted by: Juanita V Copley on Apr 27, 2011 Thanks for your kind comments... I am happy that you have found the DVD helpful. As you might assume, I know that what you see on the DVD is far from perfect... In fact, I have never taught the perfect lesson! I have found, however, that real tapes in real classrooms are more helpful than more professional, staged events. I especially like the videoclips because they present prompts for discussion and reflection. My goal (and it sounds like yours as well) is to get teachers to think about what they are doing and to especially observe how children are learning.

Submitted by: Deborah Murphy on Apr 27, 2011 Hi again, I actually thought it was very realistic to show something that you felt could be improved upon and was not presented as perfect or "staged" .... We talked in class about how important it is to be a teacher who is trying to be reflective and improving, no matter how long we have been teaching! Thanks again for some great learning opportunities for us all.

Submitted by: Cate Heroman on Apr 23, 2011 Hi Nita! Our Creative Curriculum for Preschool volume on Mathematics that you co-authored continues to help our teachers build a solid foundation in early childhood mathematics. Thank you! I, too, am puzzled by the omission of patterning in the Common Core Standards for kindergarten children and was wondering if you had any insights. It's such an important concept for building algebraic thinking. Cate Heroman

Submitted by: Anonymous on Apr 22, 2011 I have worked in a PLC with Prek teachers the past 6 years. We created a math map to help pace ourselves and purposefully teach standards at appropriate times as we look at Fall, Winter, and Spring. Can you share best practices for assessments with preschool children?

Submitted by: Juanita V Copley on Apr 29, 2011 I have actually waited to address this question because it is such a BIG one! The National Research Council 2008 volume Early Childhood Assessment: Why, What, and How provides some excellent review of assessment principles.
However, I don't think your question involves the big principles of appropriate assessment. Instead, I will address the practices that I believe are essential. First, I assess children's learning within the daily activities. I prefer to call them "assessment snapshots." These assessments occur most frequently when I am teaching small groups of students, when I am playing a game or watching them play a game, or when I am observing center work. The objectives I am assessing are intentional and match the instruction that has just been given. I find that these types of assessments are informative to my instruction, and immediately at that point I can adapt my instruction to fit the needs of the students. I intentionally assess some students and some objectives every day. Second, I assess children's learning after time has passed. I have always been amazed by children's development and its effect on how and when children learn a particular concept or skill. Yes, we do know the learning paths for some mathematics objectives; however, their understanding often occurs long after I have introduced a concept or a particular skill. My favorite time of the year is January. I am often surprised at what children have processed by that time and how much they remember. Third, I try to assess children using different "windows of learning." In other words, I need to see children in different settings, at different times of the day, in different subject areas, and with different teachers and/or assessors. Previous experiences with the child can prejudice an assessor, so I like to see my children and their work periodically with different eyes. Finally (and there are so many more points), I try to remember that the easiest things to assess are most often the most unimportant and the hardest things to assess are often the most important. I wish you the best as you work on this topic. We all could use ideas on best practices in assessment and it would be great to share them with other early childhood teachers.

Submitted by: Kathleen on Apr 20, 2011 Are you acquainted with Math in Focus, which is based on Singapore Math, and what do you think of it?

Submitted by: Juanita V Copley on Apr 27, 2011 As a member of an author team for a major publisher, I will not comment on another program. However, I do want to talk about the bar diagrams that are part of Singapore Math. Many programs use these types of diagrams to help children solve word problems. In fact, the author team I am on has developed a more simplified version that has been very successful. We know from research that training students in using diagrams to solve problems results in more improved problem-solving performance than training in any other strategy. Helping children visualize a problem by looking at the quantitative relationships in a problem should be an important part of any program.

Submitted by: Kathy on Apr 20, 2011 I am a university lecturer in Australia teaching pre-school teachers how to include math in their programs. Any suggestions for those who start the unit with an "I was never any good at math in school and just hate it now" attitude? This is also part of the topic I am researching for my PhD as it is so frustrating to see young children miss out on many opportunities to develop math skills because their educators aren't

Submitted by: Juanita V Copley on Apr 26, 2011 Oh, I understand this issue! When I first started teaching at the university level, it was my job to teach the math methods to all preservice teachers.
Initially, I was so discouraged when the majority of my students told me how much they dreaded teaching mathematics! After a few years, I loved the challenge of introducing them to the way mathematics should be taught, and I delighted in the idea that some of the class experiences could open their eyes to the excitement of teaching mathematics. Over the past twenty years, I have been able to talk to hundreds of practicing early childhood teachers. It has been my privilege to introduce them to the importance of mathematics and to ideas for teaching it. The recent National Research Council report found that, typically, early childhood classrooms "are emotionally positive and intellectually passive" and that mathematics was definitely not one of their favorite subjects.

In my work with teachers at both the preservice and inservice levels, I have found that you can't make someone "love" math, nor can you make them teach mathematics well. Instead, teachers need some positive experiences with mathematics, both personally and with their students, AND they need some information before they truly feel comfortable teaching mathematics. I use many methods to help them gain those experiences. At the preservice level, my students taught small groups of children using easy-to-implement games and activities that were designed by me or another faculty member. At the end of the day, we debriefed all of the lessons, and their observations were particularly exciting. In many cases they learned some mathematics, and in all cases they gained experience teaching mathematics.

For both inservice and preservice teachers, I have found they need information in the following areas: 1) mathematics content (in many cases they have had poor instruction in mathematics, and often teachers don't really understand the mathematics behind the procedures they have learned); 2) child development, or learning paths for young children in mathematics; 3) connection of the content to their standards or guidelines; 4) ideas for use in centers, small group instruction, projects, circle time, read-alouds, and routines; and 5) instructional strategies for mathematics classes (e.g., questioning strategies, use of manipulatives, in-class assessment opportunities, management of mathematics classes). I teach week-long seminars involving all of these areas, and they seem to be effective. In addition, I use real examples of teaching episodes with debriefing opportunities (you can see some of these on the DVD enclosed in the book); these are useful in my coaching seminars. This past year I have adopted a prekindergarten school with six classes. On my days at the school, I teach different lessons in all six classes, all around one mathematics topic. Then we meet for an hour at the end of the day to debrief the lessons. This has been so exciting! The teachers have learned a lot, I have had a chance to experiment with new ideas and activities, and the children are enjoying math!

Submitted by: Sherry on Apr 17, 2011
It looks like the Common Core wants kindergarteners to understand that teen numbers are made of a ten and extra ones... any especially good ways to teach that? And also... what does it mean when they say young children should be able to "fluently" add and subtract up to 5? Thanks!

Submitted by: Juanita V Copley on Apr 26, 2011
I can tell you have really analyzed the Common Core Standards! Recently, I have spent a lot of time analyzing the Standards, especially at the kindergarten level.
The National Research Council's 2009 study, Mathematics Learning in Early Childhood: Paths Toward Excellence and Equity, reports research that strongly supports an emphasis on our base-ten system as well as an understanding of addition and subtraction. The decomposition and composition of number is one of the most important aspects of mathematics for young children. For example, children need to understand that the number 14 can be decomposed into a ten and 4 extra ones; conversely, to make the number 14, you could use a ten and 4 extra ones. This idea is the beginning of an understanding of place value in the base-ten system, and understanding it is foundational to adding and subtracting multi-digit numbers as well as to other operations in our system.

How do I teach this? In my opinion, the ten frame is the primary graphic model for kindergarten students. I do lots of activities that involve estimating whether a bag of objects contains MORE than 10 or FEWER than 10. After children have made their estimate by circling either MORE than 10 or FEWER than 10, they count out the objects in the bag by trying to fill a ten frame. If the ten frame cannot be filled, there were FEWER than 10 objects in the bag. If the ten frame can be filled and there are extras, there were MORE than 10 objects in the bag. We also play lots of games where we try to fill a ten frame exactly OR try to make a teen number on the ten frame. Children roll a die that has 1, 2, or 3 pips on each face. If the goal is to make 15, the person who places exactly 15 counters is the first winner of the game. There are many simple games like this, and I have no doubt that you will find many of them that have been published or that you can create. The important thing about these games is that children see the teen number as a ten and some extra ones, not simply as two digits. The research indicates that many young children see a teen number such as 15 as a ONE and a FIVE, not as one TEN and a FIVE. In my experience, this standard is very important and can and should be taught intentionally and appropriately to young children.

The idea of fluency, as stated by the National Research Council in their 2009 report, is that "fluency means accurate and (fairly) rapid and (relatively) effortless, with a basis of understanding that can support flexible performance when needed." There are fluency standards at every grade level, and you are correct when you say that kindergarteners are to be able to fluently add and subtract up to 5. This standard is best addressed by teaching young children to compose and decompose the numbers 4 and 5. In my prekindergarten classes this past year, I have emphasized subitizing the quantities 1 to 4 as well as stressing conceptual subitizing of the number 5. (Subitizing is explained in more detail in the book.) That means that we do many activities with the numbers 4 and 5. We play the hoop game, where we toss five counters into a hoop and then record how many land inside and how many land outside the hoop. We toss 5 counters that are red on one side and yellow on the other and record how many of each color are face up after the toss. We make Unifix trains of 5 using two different colors and record how many of each color are in the train. We display five fingers using our two hands and count how many are on each hand. We play "How many are missing?" games with five tokens and identify how many are hidden.
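For readers who like to see the arithmetic behind these games spelled out, here is a small, purely illustrative Python sketch. It is not part of the original discussion, and the function names are invented for the example; it simply shows a teen number split into a full ten frame plus extra ones, and lists the "parts of five" that the counter games above keep revisiting.

```python
# Illustrative sketch only: a teen number as "one ten and some extra ones,"
# and the decompositions ("parts") of five used in the counter games.

def ten_frame_split(n):
    """Split a teen number into a full ten frame and the extra ones."""
    if not 11 <= n <= 19:
        raise ValueError("expected a teen number (11-19)")
    tens, ones = divmod(n, 10)   # 14 -> (1, 4): one full ten frame, 4 extras
    return tens, ones

def parts_of(total):
    """All ways to split `total` counters into 'inside the hoop' and 'outside'."""
    return [(inside, total - inside) for inside in range(total + 1)]

if __name__ == "__main__":
    tens, ones = ten_frame_split(14)
    print(f"14 is {tens} ten and {ones} extra ones")   # 14 is 1 ten and 4 extra ones

    for inside, outside in parts_of(5):
        print(f"{inside} inside + {outside} outside = 5")
    # 0+5, 1+4, 2+3, 3+2, 4+1, 5+0: the "parts of five" named below
```

The hoop game, the red/yellow counter toss, and the Unifix trains each produce one row of this parts-of-five list; the same split underlies the ten-frame games with teen numbers.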
The purpose of all these activities (and there are many more) is to provide many experiences for children with the parts of five. By the end of the year in my six prekindergarten classes, most children can identify the parts of five (0 and 5, 1 and 4, 2 and 3, 3 and 2, 4 and 1, and 5 and 0). With that foundation, children can easily use symbols and identify the operations of addition and subtraction. Fluency for numbers up to five WITH UNDERSTANDING is quite easy when it is introduced in this way. I suggest one caution here. Often, I have had parents tell me, "My child knows all their facts," and the children have demonstrated this by giving the answers to flash cards containing addition and subtraction facts. Please note that fluency for numbers up to five is not simply a memorization task. Rather, it is understanding the quantity and the parts of the number five so that children can demonstrate fluency with the numbers and use them flexibly.

Submitted by: Chanel Wilson on Apr 07, 2011
I am an early childhood educator who is very familiar with the instructional strategies you developed for Teaching Strategies' Creative Curriculum. I use them every day in my Pre-K 3 classroom. I am also a concerned parent of an eight-year-old third grader. Although 3rd grade is considered an early childhood grade level, I find the approach his school is taking to meet state standards is not engaging and is developmentally inappropriate. In this high-stakes world, they are teaching to the test. What steps can I take to ensure that he develops a real-world understanding and love of math?

Submitted by: Juanita V Copley on Apr 26, 2011
Oh, do I understand your concern! In most states, high-stakes testing is a major concern that typically begins in third grade. I, too, am concerned that third graders are learning things at strictly a procedural-fluency level and never think about the conceptual understanding that is so important. If our mathematics instruction is only at the procedural level, we are ignoring the reasoning and higher-level thinking that young children can do, as well as their disposition toward mathematics. In addition, a focus on only the tested items narrows the curriculum, and as children progress through the later grades, they have a very limited education in mathematics.

The National Council of Teachers of Mathematics (NCTM) and NAEYC worked together to write a position statement about mathematics at the PK-3 level. In this statement, they defined what would be appropriate mathematics for young children. Perhaps some of the phrases they used would help you take some steps to inform teachers and others about the importance of appropriate and intentional mathematics teaching. The position statement says that quality mathematics for young children should "…build on children's experience and knowledge… base mathematics curriculum and teaching practices on knowledge of children's… development… provide for children's deep and sustained interaction… provide ample time, materials, teacher support for… play."

I do not think the era of high-stakes testing will disappear in the near future; in fact, it may become even more pronounced in this time of little financial support for schools. As a parent and grandparent, I do everything I can to help teachers understand that IF you focus on the bigger ideas of mathematics rather than just the tested objectives, children will develop a conceptual understanding of mathematics.
If you are in a state that has adopted the Common Core Standards, I suggest that you review the Common Core Practice Standards. These are essential to developing mathematical understanding and must be addressed in this time of high-stakes testing. The position statement and the Common Core Practice Standards were both developed from research and can be found as PDF files at NAEYC.org (position statement) and on the Common Core Standards site (practice standards). I would also set up some centers and activities for your child to use. I know you can help him develop the foundations that are necessary for success in mathematics. Thanks again for your concern and all of your efforts in this regard!

Submitted by: Gina on Apr 01, 2011
What are the most effective strategies for the K-3 age group in terms of mathematical understanding? What is the best basis for teaching math?

Submitted by: Juanita V Copley on Apr 25, 2011
I especially like your question because it deals with mathematical understanding. I too want my students to UNDERSTAND mathematics, not just know specific procedures or have particular skills. I vividly remember that my seventh-grade teacher told me, "To divide fractions, take the second fraction, invert it and multiply, and don't ask the reason why!" I wanted to know WHY, and my teacher ignored my questions. I want the children I teach to understand why they do what they do. I really like the definition of understanding that is described in an NCTM publication: "We understand something if we see how it is related or connected to other things we know." To help children connect what they know with new learning, I most often use a strategy identified as "problem-based interactive learning" (John Van de Walle). Defined simply, it means that I generally start with a problem; then children interact with materials, other children, and/or the teacher and suggest some possible solutions. Then I connect their answers directly to the specific objective or standard, using as many manipulatives, visual graphics, and pictures as possible. We continue to work on similar problems, and I differentiate instruction as needed for small groups of children or individuals. In many cases, center activities are used for this differentiated instruction.

Submitted by: Reginald Harrison Williams on Apr 01, 2011
One issue that my teaching colleagues and I have observed is that many of us work within a discovery and experimental philosophy toward teaching math, yet get frustrated by the pressure to "cover all of the standards" for our grade levels. What is your take on how to integrate math with other content areas and within academic standards while simultaneously creating experiences where young children can still discover and experiment as they gain mathematical skill and knowledge proficiency?

Submitted by: Juanita V Copley on Apr 25, 2011
What a good question… it is one that I continue to ask myself as I teach in early childhood classrooms! First, let me say that I have appreciated (and use) the NCTM focal points and, lately, the Common Core Standards in my teaching. I am happy that the standards are more focused and give me more clarity about what is important to teach at a skill and proficiency level. I view the Common Core Standards as consistent with the learning paths that are based on current research. They help me as a teacher KNOW where I am going and WHAT I should be helping children discover and experience.
I also note that the standards document does not define HOW I should teach them, and that a discovery and experimental philosophy could (and should) be used to create the experiences children need in order to gain mathematical understanding as well as skills and proficiency.

Second, I want to acknowledge that I have the most fun when I am intentionally doing a discovery and experimental activity with children… IF I intentionally create experiences that will allow children to understand mathematics, and IF I help them summarize, picture, or verbalize what they have learned, it takes more time, but oh… the children really remember it! Discovery learning fits my way of teaching young children and my philosophy of teaching. In fact, as a beginning teacher, I most frequently taught those types of lessons and expected children to just transfer what they learned from the discovery approach into what they needed to know at the skill and proficiency level (by the way… that didn't work as well as I wanted it to!). In my mind, there needs to be a balanced approach. Some standards are best taught through a discovery approach; others are best taught with more direction toward the skill to be learned.

I do include mathematics throughout the day in my classrooms, and I love and often use a project-type approach. For example, in the kindergarten classes I have taught this year, we introduced measurement and geometry concepts using an advanced curriculum about a frog who is communicating with other frogs in space. Throughout the unit, we integrated science and writing, and the mathematics learning has been outstanding. In another example, the young children in a prekindergarten class grew a vegetable garden, measured the plants' growth, sold the vegetables at the local farmer's market, and then had to decide how to share their earnings fairly. I continue to be amazed at what young children can learn when I facilitate their learning in the most appropriate way.

Submitted by: Daniel on Mar 31, 2011
I recently read that understanding patterns is not addressed in the kindergarten Common Core standards. Isn't patterning an important step in algebraic understanding for children in kindergarten?

Submitted by: Juanita V Copley on Apr 25, 2011
You are right! Patterning is an important step in algebraic understanding. I have often heard people say that patterning is not in the standards in kindergarten, and that is correct if you are only looking at the Common Core Content Standards. However, if you analyze the Common Core Practice Standards, patterning is listed there as a practice across all mathematics. As you know, there are patterns in nature, there are literacy patterns, and there are patterns in language (e.g., the days of the week). Most of us (please hear the word "us") have spent a great deal of time on color patterns (e.g., blue, red, red, blue, red, red) or action patterns (e.g., snap, clap, snap, clap). What we need to be emphasizing are the patterns specific to number and geometry, the two most important content areas for young children. The seventh practice standard states: "7. Look for and make use of structure. Mathematically proficient students look closely to DISCERN A PATTERN OR STRUCTURE. Young students, for example, might notice that three and seven more is the same amount as seven and three more…" In my opinion, the Common Core Standards do address the importance of patterns. They just need to be focused on patterns in number and geometry. As an example, the counting patterns are important.
While our verbal number patterns are not good patterns, especially in the teens (eleven, twelve, thirteen, fourteen, fifteen, sixteen, seventeen, eighteen, nineteen, twenty would be more regular as ten-one, ten-two, ten-three, ten-four, and so on), our written counting numbers follow a definite pattern: the ones digits repeat (1, 2, 3, 4, 5, 6…), and the decades increase in a similar way, from the teens (1_) to the twenties (2_) to the thirties (3_), and so on. Similarly, the +1 pattern is critical to understanding addition (1 more than 3 is 4, 1 more than 17 is 18, etc.). I believe that the 100s chart, which displays this pattern using the written symbols, should be in early childhood classrooms, with the numerals pointed out as children count. Yes, patterns are important to algebraic reasoning, and the Common Core Practice Standards address and emphasize their importance.

Submitted by: Linda on Mar 31, 2011
I understand that children learn math concepts in sequence. What are some commonly overlooked steps that teachers should be aware of? Thank you.

Submitted by: Juanita V Copley on Apr 25, 2011
I am glad that you used the word "concepts"! One of the steps that I believe we often overlook is conceptual development. Instead, we tend to teach the procedures without developing the meaning or connecting the information to what the child already knows. Let me illustrate this with two different topics that I see missing in the early grades.

Measurement is often a topic that is relegated to the end of the year, ignored entirely, or taught by showing children how to measure the length of something using paper clips, plastic links, or some other unit. Then, in third grade, children in most states are tested on measurement without having had the experiences that are necessary to really understand what they are doing when they measure. Children need a chance to discover the concepts of conservation, iteration, the use of same-size units, and the inverse relationship between unit size and number of units before they are taught the specific measuring procedures. I have been teaching classes in prekindergarten and kindergarten this year. Recently I used licorice sticks to measure how far cars traveled when they were released on a ramp. We selected the car that went the farthest distance measured in licorice sticks; it traveled 8 licorice sticks. Then, I brought in MY car, bragging about how far MY car would go! At the end of the day, MY car traveled down the ramp and only went a short distance. When the children cheered, I reminded them that we needed to measure it first. This time when I measured, I used very short licorice sticks, and it went 10 licorice sticks. While a few children didn't see a problem, most of the students told me I did it wrong. They were able to tell me what I had done incorrectly (used a different-size unit), and they also told me what I needed to do to make it right! They were beginning to understand the importance of using same-size units to compare and measure. I didn't need to teach it… they remembered it every time we measured lengths.

Counting is a familiar procedure that we all address. However, we often overlook some very important concepts and procedures. For example, we have children count items in a row, in a particular pattern like a ten frame or a dice/domino arrangement, or in a random order. We often forget that counting OUT a specific number of items is important as well.
When I assess four-year-olds, as I have done much of this year, I find that many children can count items that are displayed but have difficulty counting OUT that same number of items and putting them into my hand. Counting has many other important ideas that should be addressed as well: 1) you can count items in any order and you will get the same number; 2) the last number said answers the "how many" question; 3) when you are counting, it is important to keep track of what has been counted and what has not been counted; and 4) the +1 pattern is really a counting sequence. In other words, one more than a quantity is the next counting number.
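As a purely illustrative aside (not from the original thread; the function names are invented for the example), here is a tiny Python sketch of the two written patterns discussed in this answer and the one on patterning: the repeating ones digits and decade columns of a 100s chart, and the +1 pattern in which one more than a quantity is simply the next counting number.

```python
# Illustrative sketch only: the written counting patterns discussed above.

def hundreds_chart(rows=10, cols=10):
    """Return the 100s chart as rows of numbers (1-10, 11-20, ...)."""
    return [[r * cols + c + 1 for c in range(cols)] for r in range(rows)]

def one_more(n):
    """The +1 pattern: one more than n is the next counting number."""
    return n + 1

if __name__ == "__main__":
    for row in hundreds_chart():
        print(" ".join(f"{n:3d}" for n in row))
    # Across each printed row the ones digits repeat 1, 2, 3, ..., 0;
    # down each column the numbers step up by one decade (4, 14, 24, 34, ...).

    print(one_more(3), one_more(17))   # 4 18, as in the examples above
```

Reading the printed chart across a row shows the repeating ones digits, and reading down a column shows the decade pattern; both are the structures the answer suggests pointing out as children count.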
{"url":"http://www.naeyc.org/event/young-child-and-mathematics","timestamp":"2014-04-19T09:36:54Z","content_type":null,"content_length":"79166","record_id":"<urn:uuid:404be511-b2e3-4a49-bd91-810956b18ba9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
More Precisely

Math for philosophers. I like when people I like write books I like (what's not to like?). Case in point: More Precisely: The Math You Need to Do Philosophy by Eric Steinhart, my good buddy and colleague at the Department of Philosophy of William Paterson University. [Link to publisher's website.] (Have you Brain Hammer-heads seen the ad on Leiter's blog?) I am geekily excited to own a single reference work where I can look up philosopher-friendly explanations of, for example, Bayes's theorem, transfinite cardinalities, counterpart-theoretic modal semantics, and finite state automata. Damn! That's cool. Dig this table of contents: [link].

I look forward to seeing what the general uptake of this book is going to be. I wonder, for instance, about the viability of an undergraduate philosophy course designed around such a text. Imagine a philosophy curriculum that, say, de-emphasized the cranking out of proofs in the sentential and predicate calculi and created room for a broad survey of the math needed to keep up with advances in contemporary analytic philosophy. Imagine increasing numbers in the profession who assuage their math envy with fewer fake formalisms ("S knows that P…") and more real math.

I treated myself to a copy and I've really enjoyed it so far. It's a fantastic resource.

Hi Clayton. Glad to hear that you're diggin' it.
{"url":"http://www.petemandik.com/blog/2009/03/11/more-precisely/","timestamp":"2014-04-16T10:16:46Z","content_type":null,"content_length":"9705","record_id":"<urn:uuid:03e6a2da-40ee-4829-8d4f-b6de93e2e5e1>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00159-ip-10-147-4-33.ec2.internal.warc.gz"}