content
stringlengths
86
994k
meta
stringlengths
288
619
Vernon, CA ACT Tutor Find a Vernon, CA ACT Tutor ...Beyond memorizing facts, I help children to see how their lessons apply to real life. I work with parents so life-long learning habits are established. I have tutored elementary students from the following Los Angeles area schools: John Thomas Dye School, Brentwood School, Crossroads, St. 24 Subjects: including ACT Math, chemistry, writing, geometry ...While I can essentially teach anybody anything, writing's definitely my wheelhouse, as most of my work history has been in fiction editing and coverage. These days, when I'm not tutoring students, I can be found on The Disney Channel playing one! I get along with just about everybody, and I expect nothing but the best from and for my students. 26 Subjects: including ACT Math, reading, English, writing I tutored fellow students for four years during high school in subjects such as Algebra I, Algebra II, Pre-calculus, AP Calculus AB and BC, Biology, Anatomy and Physiology, Physics, Chemistry, World History, US History, US Government, and Economics. As part of my volunteer work with various clubs, ... 24 Subjects: including ACT Math, chemistry, reading, anatomy ...I make sure the student thoroughly understands the grammar concepts and verb tenses. In Spanish 2, it can be very rigorous as in this level; they work on many types of verb formats and tenses. This can be overwhelming for the Spanish 2 student, so practice, practice, practice is the key. 20 Subjects: including ACT Math, reading, Spanish, geometry ...I am also currently serving as a mentor/adviser for pre-health students at Duke University, so questions about the application process are welcome too! I have studied math and science all my life and really enjoy helping others. I am calm, friendly, easy to work with and have been tutoring for many years. 27 Subjects: including ACT Math, chemistry, Spanish, physics
{"url":"http://www.purplemath.com/Vernon_CA_ACT_tutors.php","timestamp":"2014-04-20T09:08:13Z","content_type":null,"content_length":"23810","record_id":"<urn:uuid:ec526964-6133-4587-bc69-96c0cf9a757a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
GCSE Science/Chemistry Calculations From Wikibooks, open books for an open world You need to know how to do quite a few chemistry calculations e.g. how to calculate a mole. Moles are really quite simple to figure out. All you need is periodic table which you will be given in your exam so don't worry! The Mole is simply the name given to a certain number: 602 300 000 000 000 000 000 000 or 6.023 x 10 to the power of 23. When you get precisely that number of atoms of carbon-12 it weighs exactly 12g. So if you get that number of any element or compound and it will weigh exactly the same number of grams as the relative atomic mass, Ar OR Mr Some Notes on Avogadro's Number, 6.023 x 1023 Chemists use Avogadro's number every day. It is a very valuable number for a chemist to know how to use, and use properly. Where did Avogadro's number come from? Did Avogadro himself do all the calculations? Was it just arbitrarily made up? How can it be measured? Some possible answers follow. Amadeo Avogadro (1776-1856) was the author of Avogadro's Hypothesis in 1811, which, together with Gay-Lussac's Law of Combining Volumes, was used by Stanislao Cannizzaro to elegantly remove all doubt about the establishment of the atomic weight scale at the Karlsruhe Conference of 1860. The name "Avogadro's Number" is surely just an honorary name attached to the calculated value of the number of atoms, molecules, etc. in a gram molecular weight of any chemical substance. Of course if we used some other mass unit for the mole such as "pound mole", the "number" would be different. The first person to have calculated the number of molecules in any mass of substance seems to have been Josef Loschmidt, (1821-1895), an Austrian high school teacher, who in 1865, using the new Kinetic Molecular Theory (KMT) calculated the number of molecules in one cubic centimeter of gaseous substance under oridnary conditions of temperature of pressure, to be somewhere around 2.6 x 1019 molecules. This has always been known as the "Loschmidt Number." I have been searched for some time to determine the first time the term "Avogadro Number" was used. My best estimate is that it was first used in a 1909 paper by Jean Baptiste Jean Perrin (1870-1942) entitled "Brownian Movement and Molecular Reality." This paper was translated into English from the French in "Annales De Chimie et de Physique" by Fredric Soddy and is available. Perrin, was the 1926 Nobel Laureate in Physics for his work on the discontinuous structure of matter, and especially for his discovery of sedimentation equilibrium. Perrin should be very well known to Dr. Northrup, of our chemistry department here at TTU who does calculation in molecular dynamics using methods developed by Perrin. In his paper Perrin says "The invariable number N is a universal constant, which may be appropriately designated "Avogadro's Constant." In a simple minded way, here is how it might have come about: First chemists discovered the combining ratios of the elements by mass. Those are the atomic weights. e.g. Na and Cl combine on a 1:1 ratio BY ATOM; BY MASS they combine on a 23 gram to 35.5 gram ratio. This has to mean that an atom of Na weighs less than an atom of Cl. The Atomic Weight scale was worked out by observations like these. Next chemists decided to take the Atomic Weight of an element and define that many grams of it to be ONE MOLE. That meant that 1 gram of H is a mole of H; 12 grams of C is a mole of C 23 grams of Na is a mole of Na, etc. Finally they measured how many atoms are in ONE MOLE. It comes out to be the number 6.023x1023 atoms Therefore 1 mole of substance = 6.023x1023 particles. This is called Avogadro's Number. Some Links related to this essay: A Biographical interview with Amadeo Avogadro Avogadro' Hypothesis
{"url":"http://en.wikibooks.org/wiki/GCSE_Science/Chemistry_Calculations","timestamp":"2014-04-17T18:25:51Z","content_type":null,"content_length":"27075","record_id":"<urn:uuid:2f79d069-7d86-4fa6-8e45-49dc616d8ded>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Dynamic Thermal Performance and Energy Benefits of Using Massive Walls in Residential Buildings Dynamic Hot Box Test and Development of Finite Difference Computer Model of the Wall Complex Wall Assemblies - Equivalent Walls Generated for Use in One-Dimensional Whole-Building Energy Modeling Dynamic Whole Building Energy Simulations of House Containing Wood-Framed and Massive Walls Effective R-values and Dynamic Benefits for Massive Systems (DBMS) Masonry or concrete walls having a mass greater than or equal to 30 lb./ft ^2 (146 kg/m^2) and solid wood walls having a mass greater than or equal to 20 lb./ft ^2(98 kg/m^2) are defined by the Model Energy Code [1] as massive walls. They have heat capacities equal to or exceeding 6 Btu/ft^2 ^0F [266 J/(m^2k)]. The same classification is used in this work. The evaluation of the dynamic thermal performance of massive wall systems is a combination of experimental and theoretical analysis. It is based on dynamic three-dimensional finite difference simulations, whole building energy computer modeling, and dynamic guarded hot box tests. Dynamic hot box tests serve to calibrate computer models. However, they are not needed for all wall assemblies. For simple one-dimensional walls, theoretical analysis can be performed without compromising the accuracy when the computer model was calibrated earlier using similar material The massive wall is typically tested in a guarded hot box under steady-state and dynamic conditions. These tests enable calibration of the computer models and estimation of the steady-state R-value as well as wall dynamic characteristics. Dynamic hot box tests performed on massive walls consist of two steady-state test periods connected by a rapid temperature change on the climate side. The finite difference computer code Heating 7.2. [2] is applied to model the wall under dynamically changing boundary conditions (recorded during the hot box test). The Heating 7.2 computer model was validated in the past using steady-state hot box test results [3]. For each individual wall, a finite difference computer model is developed. The accuracy of the computer simulation is determined in several ways. The first check is to compare test and simulated R-values. The simulated steady-state R-value has to match the experimental R-value within 5% to be consistent with the accuracy of hot box measurements [3]. Also, computer heat flow predictions are compared with the hot box measured heat flow through the 2.4 m x 2.4 m (8 ft by 8 ft) specimen exposed to dynamic boundary conditions. The computer program uses boundary conditions recorded during the test (temperatures and heat transfer coefficients). Values of heat flux on the surface of the wall generated by the computer program are compared against the values measured during the dynamic hot box test. Response factors, heat capacity, and R-value are computed using the finite-difference computer code. They enable calculation of the wall thermal structure factors and development of the simplified one-dimensional “thermally equivalent wall” configuration [4,5,6]. Thermal structure factors reflect the thermal mass heat storage characteristics of wall systems. A thermally equivalent wall has a simple multiple-layer structure and the same thermal properties as the nominal wall. Its dynamic thermal behavior is identical to the complex wall tested in the hot box. Development of a thermally equivalent wall enables usage of whole-building energy simulation programs with hourly time steps (DOE-2 or BLAST). These whole building simulation programs require simple one-dimensional descriptions of the building envelope components. The use of the equivalent wall concept provides a direct link from the dynamic hot box test to accurate modeling of buildings containing walls which have three-dimensional heat flow within them, such as the Insulating Concrete Form (ICF) Wall Systems [7,1]. The DOE-2.1E computer code is utilized to simulate a single-family residence in representative U.S. climates. The space heating and cooling loads from the residence with massive walls are compared to loads for an identical building simulated with lightweight wood-frame exterior walls. Twelve lightweight wood-frame walls with R-values from 0.4 to 6.9 Km^2 / W (2.3 to 39.0 hft^2 F/Btu) are simulated in six U.S. climates. The heating and cooling loads generated from these building simulations are used to estimate the R-value equivalents which would be needed in conventional wood-frame construction to produce the same loads as for the house with massive walls in each of the six climates. The resulting values account for not only the steady state R-value but also the inherent thermal mass benefit. This procedure is almost identical to that used to create the thermal mass benefits tables in the Model Energy Code [1]. The thermal mass benefit is a function of the climate. R-value Equivalent for Massive Systems is obtained by comparison of the thermal performance of the massive wall and light-weight wood-frame walls, and they should be understood only as the R-value needed by a house with wood-frame walls to obtain the same space heating and cooling loads as an identical house containing massive walls. There is not a physical meaning of the term “R-value Equivalent for Massive Systems.” A dynamic hot box test takes about 200 hours. It serves mainly to calibrate the computer model of the tested wall. This time consuming test-based calibration of the computer model is required only for complex massive wall configurations. In the case of simple one-dimensional walls only the theoretical analysis can be performed without compromising the accuracy. A dynamic hot box test performed on the massive walls consists of two steady-state test periods connected by a rapid temperature change on the climate side. In addition to calibration of the computer model, the test enables estimation of the steady-state R-value and wall dynamic characteristics. Dynamic three-dimensional computer modeling is used to analyze the response of the complex massive walls to a triangular surface temperature pulse. This analysis enables estimation of the steady-state R-value of the wall, thermal capacity, response factors, and wall thermal structure factors [4,5,6]. The wall thermal structure factors are used later to create the one-dimensional equivalent wall, necessary for whole-building energy simulations. A calibrated heat conduction, finite-difference computer code Heating 7.2, is used for this analysis [2]. The accuracy of Heating 7.2 is validated by examining its ability to predict the dynamic process measured during the dynamic hot box test for the massive wall [7]. The computer program uses recorded test boundary conditions (temperatures and heat transfer coefficients) at one hour time Values of heat flux on the surface of the wall generated by the program are compared with the values measured during the dynamic test. The computer program has to reproduce the same wall thermal response as was recorded during the hot box test. Later, this calibrated computer model is used to generate the equivalent wall which enables one-dimensional whole building energy analysis. The dynamic thermal performance analysis for massive walls that is described herein is based on whole building energy modeling results. A “real” three-dimensional description of complex walls cannot be used directly by whole-building simulations. Such walls must be simplified to a one-dimensional form to enable the dynamic whole building thermal analysis using DOE-2, BLAST or similar computer programs. The usage of the equivalent wall enables more accurate modeling of buildings containing complicated three and two-dimensional internal structures. Very often, such complicated walls are composed of several different materials with drastically different thermal properties. In prior works by Kossecka and Kosny [4,5,6] the equivalent wall concept was introduced. The thermal structure factors constitute, together with wall R-value, and overall thermal capacity C, the basic thermal wall characteristics which can be determined experimentally. They represent the fractions of heat stored in the volume of the separated wall element, which are transferred across each of its surfaces. A calibrated three-dimensional computer model of the complex wall serves for calculation of response factors. For a triangular pulse which is simulated on one wall side, the dynamic finite different computer code calculates a series of response factors. Equivalent wall has the same steady-state and dynamic thermal performance as a real complex wall. As is shown in works of Kossecka and Kosny [4,5] even for the complex thermal bridge configuration, response factors for both walls (nominal complex wall and equivalent wall) as well as steady-state R-values and thermal structure factors have the same values. Comparative analysis of the space heating and cooling loads from two identical residences, one with massive walls and one containing lightweight, wood-frame exterior walls, was introduced for development of massive wall thermal requirements in the Model Energy Code [1]. This procedure was adopted by the authors. The DOE-2.1E computer code was utilized to simulate a single-family residence in six representative U.S. climates. Twelve lightweight wood-frame walls with R-values from 0.4 to 6.9 Km^2 / W (2.3 to 39.0 hft^2 F/Btu) were simulated. The heating and cooling loads generated from these building simulations were used to estimate the R-value equivalents for massive walls. A list of the cities and then climate data is presented in Table 1. Table 1. Six U.S. climates (TMY) used for DOE 2.1E computer modeling │ Cities: │ HDD 18.3 C(65 deg F) │ CDD 18.3 C(65 deg F) │ │ Atlanta │ 1705 (3070) │ 870 (1566) │ │ Denver │ 3379 (6083) │ 315 (567) │ │ Miami │ 103 (185) │ 2247 (4045) │ │ Minneapolis │ 4478 (8060 ) │ 429 (773) │ │ Phoenix │ 768 (1382) │ 2026 (3647) │ │ Washington D.C. │ 2682 (4828) │ 602 (1083) │ To normalize the calculations, a standard North American residential building is used. The standard building selected for this purpose is a single-story ranch style house that has been the subject of previous energy efficiency modeling studies [9]. All U.S. residential building thermal standards, including ASHRAE 90.2 and Model Energy Code, are based on the whole building energy modeling performed with the use of this house. A schematic of the house is shown in Figure 1. The house has approximately 143 m^2 (1540 ft^2) floor area, 123 m^2 (1328 ft^2) of exterior wall elevation area, 8 windows, and 2 doors (one door is a glass slider; its impact is included with the windows). The elevation wall area includes 106 m^2 (1146 ft^2) of opaque wall area, 14.3 m^2 (154 ft^2) of window area and 2.6 m^2 (28 ft^2) of door area. For the base-case calculation of infiltration we used the Sherman-Grimsrud Infiltration Method, an option in the DOE 2.1E whole-building simulation model [12]. An average total leakage area of 0.0005 expressed as a fraction of the floor area [10,11,12] is assumed. This is considered average for a single-zone wood-framed residential structure. This number cannot be converted directly to average air changes per hour because it is used in an equation driven by hourly wind speed and temperature difference between the inside and ambient air data which varies for the six climates analyzed for this study. However, for the six climates this represents an air change per hour range which will not fall below an annual average of 0.35 ACH. DOE-2.1E energy simulations for six U.S. climates are performed for light-weight wood frame walls (50 mm x 200 mm [2x4in] construction) of R-values from 0.4 to 6.9 m2 K / W (2 to 39 hft2F/Btu ). Steady-state R-values were computed for wood-framed walls using the Heating 7.2 - finite difference computer code. The accuracy of Heating 7.2's ability to predict wall system R-values was verified by comparing simulation results with published test results for twenty-eight masonry, wood-frame, and metal-frame walls tested at other laboratories. The average differences between laboratory test and Heating 7.2 simulation results for these walls were +/- 4.7 percent [13]. Considering that the precision of the guarded hot box method is reported to be approximately 8 percent, the ability of Heating 7.2 to reproduce the experimental data is within the accuracy of the test method [14]. Because of the high accuracy of these simulations, all steady-state clear wall R-values used in this procedure are directly linked to existing thermal standards where wall thermal requirements are based on clear wall R-values. The total space heating and cooling loads for twelve lightweight wood-frame walls were calculated using DOE-2.1E simulations. Regression analysis was performed to analyze the relation between steady-state clear wall R-values (of wood-stud walls) and the total building loads for six U.S. climates. For all six climates, there was a strong correlation (r^2 was about 0.99). Regression equation parameters are presented in Table 2. where: E - total building load [Mbtu/year], R- wall R-value. Table 2. Parameters in equation (1) expressing the relation between steady-state clear wall R-values (of wood-stud walls) and total building loads for six U.S. climates.* │ Cities: │ a (Y-intercept) │ b (slope) │ │ Atlanta │ 1.8e8 │ -4.69 │ │ Denver │ 1.65e9 │ -4.76 │ │ Miami │ 2.97e18 │ -11.0 │ │ Minneapolis │ 3.25e12 │ -5.95 │ │ Phoenix │ 3.56e9 │ -5.27 │ │ Washington │ 3.01e9 │ -5.01 │ * R-value range 0.4-6.9 m2 K / W (2-39 hft2F/Btu ). The heating and cooling loads generated for 12 lightweight wood-frame walls can be used to estimate the R-value equivalents for massive walls. Equation (1) yields the wall R-value which would be needed in conventional wood-frame construction to produce the same load as the house with massive walls in each of six climates. There is no physical meaning for the term R-value equivalent for massive walls. This value accounts not only for the steady-state R-value but also the inherent thermal mass benefit. This procedure is similar to that used to create the thermal mass benefits tables in the Model Energy Code [1]. Thermal mass benefits are a function of the material configuration and the climate. A dimensionless measure of the wall thermal dynamic performance is proposed in this paper - Dynamic Benefit for Massive Systems (DBMS) defined by equation (2); where: : DBMS - Dynamic Benefit for Massive Systems, mR[eqv] - R-value equivalent for massive wall, and R- steady-state R-value. Equation (2) documents thermal benefits of using massive wall assemblies in residential buildings regardless of the level of the wall steady-state R-value. © Oak Ridge National Labs and Polish Academy of Sciences Updated August 9, 2001 by Diane McKnight
{"url":"http://web.ornl.gov/sci/roofs+walls/research/detailed_papers/dyn_perf/thermal.html","timestamp":"2014-04-17T10:27:27Z","content_type":null,"content_length":"19487","record_id":"<urn:uuid:1be8831e-d095-45a9-baa3-ce77ff0a0d79>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
The TeX Catalogue OnLine, Entry for esint, Ctan Edition The esint package permits access to alternate integral symbols when you're using the Computer Modern fonts. In the original set, several integral symbols are missing, such as \oiint. Many of these symbols are available in other font sets (pxfonts, txfonts, etc.), but there is no good solution if you want to use Computer Modern. The package provides Metafont source and LaTeX macro support. See also esint-type1. The author is Eddie Saudrais. License: pd Version: 1.1 Catalogued: 2006-10-27
{"url":"http://ftp.rediris.es/mirror/tex-archive/help/Catalogue/entries/esint.html","timestamp":"2014-04-18T16:12:01Z","content_type":null,"content_length":"4423","record_id":"<urn:uuid:e2203435-adbc-428f-88c9-45ab563ad33e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Zelda Universe Forums - The world's largest Legend of Zelda fan community - View Single Post - Has It Ever Been Proven that One is Greater than Zero? ) [ ] 11-04-2010, 10:36 AM Guardian Dragon Join Date: Jun 2005 Location: Milwaukee WI Re: Has It Ever Been Proven that One is Greater than Zero? If you subtract a from b, and the result c is a positive number, then b is greater than a. This fact can be proven, and since 1 - 0 = 1, and 1 > 0, oops, circular reasoning! I believe you can prove it using a few different axioms and subsets, but the proof isn't as simple as most would think.
{"url":"http://www.zeldauniverse.net/forums/3728003-post31.html","timestamp":"2014-04-18T08:06:32Z","content_type":null,"content_length":"11239","record_id":"<urn:uuid:cdb68a27-7a1e-47a4-800a-f6dc713f7b04>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH 213 Fall 2010 CAS MA 213: Basic Statistics & Probability Fall 2010 Instructor: Dr. Surajit Ray Department of Mathematics and Statistics MCS 222, 111 Cummington Street, Boston, MA 02215 Phone: (617) 353-5209, Fax: (617) 353-8100 Tue,Thu 12:30pm-2:00pm (Class Room: CAS 224) Office Hours: Tuesday 2:00pm-3:00pm, Monday 12pm-1pm BLACKBOARD Login ( BU Students) Course Description: This course serves as an introduction to basic concepts and tools in probability and statistics. We begin with how to describe data. Then we study the elements of probability theory. Finally, we combine data description and probability theory into an approach to statistical inference. Students should emerge from this course with the ability to incorporate a variety of skills in analyzing and reasoning from data. Week 1 1 Lecture Introduction, overview of statistics. Week 2 2 Lectures Statistical data- types and methods of description. Week 3 2 Lectures Summarizing data- mean, standard deviation, five-point summary. Week 4 2 Lectures Introduction to Probability. Week 5 2 Lecture Solving problems involving probability Week 6 1 Lecture Discrete random variables Midterm 1 (Oct 7) Week 7 1 Lecture Continuous random variables. The Uniform distribution. Week 8 2 Lectures Continuous random variables. The Normal distribution. Z-tables. Week 9 2 Lectures Sampling Distribution and the Central Limit Theorem. Week 10 1 Lecture Confidence Interval estimation of the population mean and proportion. Midterm 2 (Nov 4). Week 11 2 Lectures Confidence Interval estimation of the population mean and proportion Week 12 2 Lectures Hypothesis Testing. Large-sample test about a population mean. Week 13 1 Lectures Hypothesis testing for the population mean. (Thursday (Nov 25) HOLIDAY ) Week 14 2 Lectures Hypothesis testing for the population proportion. Week 15 2 Lectures Two sample inference (Time permitting) Text Book: Statistics, 11/E James T. McClave Terry Sincich William Mendenhall ISBN-10: 0132069512 View larger cover ISBN-13: 9780132069519 Publisher: Prentice Hall Buy from myPearsonStore
{"url":"http://math.bu.edu/people/sray/teaching/math213/","timestamp":"2014-04-18T21:00:19Z","content_type":null,"content_length":"9700","record_id":"<urn:uuid:acb999de-d6e8-4a26-8ef1-3471b4030e53>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
How To Perform Meaningful Estimates of Genetic Effects Although the genotype-phenotype map plays a central role both in Quantitative and Evolutionary Genetics, the formalization of a completely general and satisfactory model of genetic effects, particularly accounting for epistasis, remains a theoretical challenge. Here, we use a two-locus genetic system in simulated populations with epistasis to show the convenience of using a recently developed model, NOIA, to perform estimates of genetic effects and the decomposition of the genetic variance that are orthogonal even under deviations from the Hardy-Weinberg proportions. We develop the theory for how to use this model in interval mapping of quantitative trait loci using Halley-Knott regressions, and we analyze a real data set to illustrate the advantage of using this approach in practice. In this example, we show that departures from the Hardy-Weinberg proportions that are expected by sampling alone substantially alter the orthogonal estimates of genetic effects when other statistical models, like F[2] or G2A, are used instead of NOIA. Finally, for the first time from real data, we provide estimates of functional genetic effects as sets of effects of natural allele substitutions in a particular genotype, which enriches the debate on the interpretation of genetic effects as implemented both in functional and in statistical models. We also discuss further implementations leading to a completely general genotype-phenotype map. Author Summary The rediscovery of Mendel's laws of inheritance of genetic factors gave rise to the research field of Genetics at the very beginning of the last century. The idea of traits being determined by the effects of inherited genes is thus the conceptual core of Genetics. After more than one century, however, we still lack a completely general mathematical description of how genes can control traits. Such descriptions are called genotype-phenotype maps, or models of genetic effects, and they become particularly cumbersome in the presence of interaction among genes, also referred to as epistasis. The models of genetic effects are necessary for unraveling the genetic architecture of traits—finding the genes underlying them and obtaining estimates of their individual effects and interactions—and for meaningfully using that information to investigate their evolution and to improve response to selection in traits of economical importance. Here, we illustrate the convenience of using a recently developed model of genetic effects with arbitrary epistasis, NOIA, to inspect the genetic architecture of traits. We implement NOIA for practical use with a regression method and exemplify that theory with a real dataset. Further, we discuss the state of the art of genetic modeling and the future perspectives of this subject. Citation: Álvarez-Castro JM, Le Rouzic A, Carlborg Ö (2008) How To Perform Meaningful Estimates of Genetic Effects. PLoS Genet 4(5): e1000062. doi:10.1371/journal.pgen.1000062 Editor: David B. Allison, University of Alabama at Birmingham, United States of America Received: May 23, 2007; Accepted: April 1, 2008; Published: May 2, 2008 Copyright: © 2008 Álvarez-Castro et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: ÖC acknowledges funding from the Knut and Alice Wallenberg Foundation. ALR was funded by a grant to ÖC from the Swedish Foundation for Strategic Research. Competing interests: The authors have declared that no competing interests exist. The authors have declared that no competing interests exist. There is an increasing interest in Quantitative Genetics and Evolutionary Biology to identify genetic effects, and more particularly gene interactions, on a genome-wide scale and to understand its role in the genetic architecture of complex traits [1],[2]. Genome scans for quantitative trait loci (QTL) have proven to be a successful strategy for identifying genetic effects and interactions. Two of the main issues in the development of QTL mapping methods are which models of genetic effects to use and how to test for effects in regions between marker locations. The second issue is important not only for considering the genome as a virtually continuous space where to map the QTL, but also to efficiently analyze incomplete data sets, which are the norm in practice [3]. Lander and Botstein [4] developed the classic interval mapping (IM) method, in which they showed how to perform a QTL mapping strategy implemented with the most likely genotypes for the genome regions in between marker locations, given the genotypes at the flanking markers. This method has been extended in several ways [5]–[8]. Albeit the computation of those likelihoods is complex and time demanding, Haley and Knott [9], (see also [10]) provided a convenient approximation of them by means of a simple regression method. Regarding now the first issue mentioned above—the models of genetic effects—the definition of the genetic effects in Haley and Knott's [9] regression (hereafter HKR) comes from a model that has been extensively used in Quantitative Genetics, the F[∞] model [11],[12]. However, other models of genetic effects have recently been shown to be more appropriate in QTL mapping. The genetic effects depend not only on the genotypic values but also on the genotype frequencies of the analyzed population (e.g. [13]–[16]). By taking into account these frequencies, it is possible to build orthogonal models that are convenient for several reasons [13]–[19]. First, orthogonal estimates do not change in reduced models, which considerably facilitates model selection for finding the genetic architecture of traits. Second, the estimates of genetic effects obtained by orthogonal models are meaningful in the population under study—they provide the effects of allele substitutions in that population. Third, they directly lead to a proper, orthogonal decomposition of the genetic variance from which to compute important measures, like the heritability of that trait in that population. The statistical properties of HKR could therefore be improved by implementing it with a genetic model that is orthogonal for any possible genotype frequencies in the population under study. The statistical formulation of the recently developed NOIA (Natural and Orthogonal InterActions) model of genetic effects is orthogonal in situations where previous models are not—for departures from the Hardy-Weinberg proportions (HWP) at any number of loci—and it is therefore more appropriate choice for estimating genetic effects from data in genetic mapping [16]. Furthermore, a novel feature of NOIA is its implementation to transform the genetic effects estimated in the population under study, in two ways. First, they can be transformed into how they would look like in a population with different genotype frequencies at each locus, like an ideal F[2] population or into an outbred population of interest. Second, using the functional formulation of NOIA, it is possible also to express the genetic effects as effects of allele substitutions from reference individual genotypes—instead of from population means like in the statistical formulation. In other words, starting from the orthogonal genetic effects of a population or sample under study, which are the ideal ones for performing model selection and have a particular meaning, NOIA enables us to obtain the values of the genetic effects that are associated to other desired meanings and are useful, therefore, to inspect different aspects of the evolution of a population, or selective breeding for increasing or decreasing a trait values. Our motivation for this communication is to show how to use models of genetic effects to obtain estimates of genetic effects from data that have the desired meaning of any particular scientific purpose. To this end we first inspect how much of a difference it makes to use the classical models for ideal populations, such as ideal F[2] populations, to compute genetic effects in a non-ideal situation, under departures from the HWP. We address this issue by generating simulated populations that depart from the HWP in several degrees and analyzing them with NOIA and other models. We quantify the deviances from orthogonal estimates due to using models that assume ideal conditions in the populations under study, thus showing the practical convenience of using the NOIA model for performing real estimates of genetic effects in QTL experiments. Second, we develop an implementation of NOIA with HKR, allowing it for immediate practical use and illustrate its performance using an example with real data. By this example we provide estimates of genetic effects with different meanings and, for the first time, functional estimates of genetic effects—using an individual genotype as reference—from a real data set. We discuss on how this feature opens new possibilities of using real data to analyze important topics in Evolutionary Genetics. Genetic Models under Departures from Hardy-Weinberg Figure 1 shows the results of estimating, with three different models (NOIA, G2A and F[2]), the genetic effects of a two-locus and two-allele genetic system (Table 1) in nine simulated populations under linkage equilibrium (LE) with various degrees of departure from the HWP (see Methods). The eight genetic effects plus the population mean in the only model that is orthogonal in all simulated populations—the statistical formulation of the NOIA model—respond to the increasing departures from HWP in three groups. The first and most influenced group contains the three genetic effects involving the additive effect of the locus affected by departures from HWP, α[A], αα, and αδ. These genetic effects increase substantially with increasing departures from HWP and are doubled when the homozygote A[2]A[2] is almost completely absent. The second group contains the reference point—the mean of the population, μ—and the single locus effects of locus B (the one at HWP), α[B] and δ[B]. The estimates in this group decreased with increasing departures from HWP. The third group contains the remaining three genetic effects, δ[A], δα and δδ, whose estimates are not affected by departures from HWP at locus B. The genetic effects measured by the G2A model show the same qualitative behavior described above for NOIA (i.e. also responds in three distinct groups), but are quantitatively different. The reason for this is that G2A can adapt the measurements to the changes in the allele frequencies of the population, but not to the precise departures of the genotype frequencies from the HWP. The genetic estimates obtained using the F[2] model always give the same values independently of the genetic constitution of the population. The F[2] thus fails to capture the effects of departures from HWP at all. Thus, unless when the studied population is an ideal F[2] (and the deviances from HWP are zero, see Figure 1), the estimate of the population mean from G2A and F[2] is biased and the genetic estimates do not reflect the average effects of allele substitutions in the population under study. Those deviations become more severe as the departure from HWP increases (Figure 1). Figure 1. Effects of departures from the HWP on genetic effects. The genetic effects were obtained using the F[2], G2A and NOIA models in a two locus genetic system that was simulated in nine F[2] populations with departures from HWP ranging from zero to 97% (see text for details). Table 1. Genotype-phenotype map of the two-locus system used in the simulated populations to evaluate the effect of departures from HWP on genetic effects estimated using the F[2], G2A and NOIA Figure 2 shows the variance component estimates obtained in the nine simulated populations, which were obtained by computing the variance over the individuals of the sample population of the correspondent genetic effects (additive effect at locus A, additive effect at locus B, etc). For orthogonal models, the sum of the three components of variance gives the total genetic variance—which in this case equals the phenotypic variance, since there is no environmental variance in the simulated populations. Here, this is only observed for the variances computed using NOIA. The other two models are not orthogonal in the populations under study (except in the ideal F[2] population, where the three models coincide), and thus there exist covariances between the genetic effects that would need to be accounted for to obtain the true genetic variance of the population [20]. The decomposition of the genetic variance made by the G2A and F[2] models is, thus, non-orthogonal. The G2A leads to a greater departure form an orthogonal decomposition of variance than the F[2] model by the particular kind of departures from HWP simulated here. Both the G2A and F[2] models underestimate the additive variance and therefore also the heritability of the trait in the simulated populations. Figure 2. Effects of departures from the HWP on the variance components. The variance decomposition was performed for the same cases as in Figure 1. V[P] is the phenotypic variance, which (in absence of environmental variance) is equal to V[G], the genetic variance. V[A] is the additive variance, V[D] is the dominance variance and V[I] is the epistatic (interaction) variance. An Example Using Experimental Data For illustrating the advantage of using NOIA for analyzing experimental data, we reanalyze a two-locus (A and B) genetic system with epistasis affecting growth rate in an F[2] cross between Red junglefowl and White leghorn layer chickens [21]. The two loci are on different chromosomes, thus avoiding linkage disequilibrium (LD). Locus A departs significantly from the HWP when considered alone, but not when correcting for multiple testing (see Methods). Table 2 shows the genetic effects and the components of variance for this two-locus system using several models of genetic effects—NOIA, G2A, F[2] and F[∞]. As explained in the previous subsection, NOIA is orthogonal under departures from the HWP, whereas the other models are not. The F[∞] model deviates severely from the estimates obtained by NOIA. Deviations are expected since the F[∞] model is non-orthogonal even in an ideal F[2] population with no deviations from the expected frequencies due to sampling errors. The F[2] and G2A models, on the other hand, would be orthogonal under ideal circumstances and the observed deviations from orthogonality of those models when analyzing these experimental data are due to sampling (as explained above). Table 2 shows that the estimates obtained using F[2] and G2A differ substantially from these of NOIA (up to 18/42% for the G2A and 53/138% for the F[2] model, for the genetic effects/variance component estimates). This example with real data, thus, shows that it makes a substantial improvement to use NOIA to compute genetic effects and variance decomposition in QTL mapping experiments over the classical models of genetic effects designed to fit ideal experimental situations. Table 2. Estimates of statistical genetic effects (to the left of each cell) and components of the genetic variance (to the right) for an epistatic QTL for growth rate pair in a Red junglefowl×White leghorn layer intercross [21] using four different models. Transformation To Get Functional Genetic Effects From the statistical estimates in Table 2, we have computed functional estimates of genetic effects using an analogous expression to (S6), shown in Text S1, derived by Álvarez-Castro and Carlborg [16]. The variances of the statistical estimates can also be transformed to give the variances of the functional estimates using (6), as derived in the Methods section. Choosing “A[1]A[1]B[1]B[1]” as reference genotype, the estimates of functional genetic effects, and the standard deviations associated to these estimates, are shown in Table 3. Whereas statistical genetic effects describe the average effects of allele substitutions in a population, functional genetic effects describe the genotype-phenotype map as a series of allele substitutions performed in the genotype of a particular—reference—individual genotype [16],[22], in this case the genotype of the Red junglefowl, “A[1]A[1]B[1]B[1]”. Table 3. Estimates of functional genetic effects from the reference of genotype A[1]A[1]B[1]B[1], G[1111]±σ[G][1111] = 265.18±8.35 grams, and their standard deviations for an epistatic QTL pair for growth rate in a Red junglefowl×White leghorn intercross [21]. To illustrate the usefulness of these functional genetic effects for understanding how epistatic effects can contribute to phenotype change, we consider the role of this QTL pair in increasing the growth rate in the Red junglefowl. For simplicity, we assume hereafter that A and B are the only two loci affecting growth rate. From the marginal genetic effects in Table 3, it can be deduced that the White leghorn layer allele at locus A slightly increases the phenotype whereas the White leghorn allele at locus B actually decreases it, when considered in homozygotes. However, the dominance effects are positive and have a higher absolute value than the additive effects. Therefore, if one White leghorn layer allele appeared by mutation in a Red junglefowl population at any of the two loci, A or B, it would be maintained at a certain frequency because of balancing selection—superiority of the heterozygote—but it would neither disappear nor reach fixation. This suggests that one mutation could be present at some frequency in the population when the second one appeared. For analyzing what would happen if eventually the two mutations were present at the same time in the population, we have to consider also the interaction effects. The double homozygote for White leghorn layer allele increases the phenotype with roughly forty grams (four times aa, in Table 3 as it can be deduced from G = S⋅E, with the reference of R = G[1111]), relative to the expected value without epistasis, which is a decrease in roughly 20 grams from the Red junglefowl. In total, this makes the phenotype of the White leghorn layer 20 grams higher than the Red junglefowl. However, for inspecting if this results support the White leghorn layer alleles being likely to reach fixation we also need to consider the phenotypes of the heterozygotes. Interactions involving dominance in locus B are all negative, thus favoring the fixation of the White leghorn layer allele, B[2]. The role of allele A[2] is not as obvious, since da is positive. The genotypic value of “A[1]A[2]B[2]B[2] ” is roughly 30 grams higher than the Red junglefowl (computed again from Table 3 and G = S⋅E) and ten grams higher than the pure White leghorn layer. The expected, therefore, would be that the two alleles segregate at locus A. The standard deviations of the estimates are however rather large and thus do not rule out the possibility of fixation of the White leghorn layer allele at locus A. The Meaning of the Statistical Estimates The statistical formulation of NOIA is orthogonal under random deviations from ideal experimental populations and outbreeding pedigrees [16]. Therefore, NOIA can provide meaningful estimates of genetic effects—as allele substitutions made in the population or sample under study—and a proper decomposition of the genetic variance under those circumstances. In this article, we illustrate the practical implications of these achievements for estimation of genetic effects and QTL analysis in two ways. First, we simulated a two-locus genetic system under departure from the HWP affecting one of the loci underlying the trait under study. This scenario can have a biological origin or be due to sampling alone and it is commonly occurring in experimental data both from natural and experimental populations, such as for the QTL pair we have studied (see below). We therefore deemed it relevant to test the performance of NOIA in practice—by assessing how departures from HWP cause other models to deviate from the orthogonal values. Our results show that departures from HWP substantially affect both the genetic effects and the decomposition of variance. The cause for this is that epistasis makes the genetic effects dependent on the genetic background, which is different under different degrees of departures from HWP. NOIA can capture the proper, orthogonal genetic effects, and thus also their orthogonal variances, in the simulated populations whereas the deviances from these values due to using the other—nonorthogonal—models increases with the departures from Second, we used experimental data on epistatic QTL from a previously published study [21] to explore how much of a difference it makes to use NOIA instead of previous statistical models, when departures from HWP are not larger than expected by sampling. Even though the population we studied was rather large (approximately 800 individuals), the random deviations from the HWP in this set of available individuals cause considerable differences in the estimates of genetic effects performed with models that would be orthogonal in totally ideal situations, as compared to the estimates obtained using NOIA. These differences become even more noteworthy for the components of variance estimated using the different models. These values influence consequential quantities, like the heritability of one trait, which may be needed for instance for performing artificial selection at the available sample of individuals. Orthogonal models are also important for finding the genetic architecture of traits—albeit this has not been our focus in this communication. In principle, when testing the effect of a particular locus or set of loci in a QTL analysis, the choice of the model of genetic effects to use does not matter. However, it does matter when it comes to compare which of several putative sets of loci is the most likely genetic architecture underlying the trait, i.e., when performing model selection in QTL analysis. This is so because orthogonal models have the convenient property that the estimates and their variances remain the same when considering reduced models, which facilitates model selection strategies [19]. Translating Estimates To Fit Other Meanings After model selection and the estimation of genetic effects have been properly carried out using an orthogonal model, the obtained estimates provide the effects of allele substitutions in the sample of individuals used in the study, and the decomposition of variance is also the appropriate one in that particular sample of individuals. The NOIA model provides convenient tools for transforming those estimates into the ones with any other desired meaning, like the orthogonal estimates and the decomposition of variance in a different population [16]. This is useful to compare results from QTL studies performed in different populations, and to use the results obtained with one orthogonal model in one population to study the evolution of the same trait in a different population. One example of the previous is removing the characteristics of the data that are not supposed to be properties of a target population from the estimates. The departures from HWP of the experimental data we dealt with in this article are in fact supposed to be only due to sampling, instead of being caused by real Hardy-Weinberg disequilibrium in the F[2] population. If we were interested in the genetic effects or in the decomposition of variance of the ideal F[2] as a target population—in which the departures from HWP are absent—we could use the transformation tool of NOIA to obtain (from the original estimates with the reference of the mean of the sample population) the ones with the reference of the mean of an ideal F[2] population. Further, as illustrated in the example with real data, it is possible to transform statistical estimates of genetic effects into functional ones, using a particular reference genotype. Another situation in which these transformations are valuable is, for instance, in a three-locus genetic system with pairwise epistasis. In this case, NOIA would easily permit to consider only the significant genetic effects and to re-compute the genotypic values only from the significant genetic effects (assuming the non-significant third-order interactions to be zero). Functional Estimates of Genetic Effects Statistical models of genetic effects are necessary for QTL analysis and for performing orthogonal decompositions of the genetic variance in populations. Functional models of genetic effects, on the other hand, are convenient—especially in the presence of epistasis—for studying evolutionary properties of the populations such us adaptation in the presence of drift and speciation (see e.g. [23], [24]). NOIA is the first model framework that successfully unifies functional and statistical modeling of genetic effects [16]. This enables researchers to feed models of functional genetic effects, so far mainly used in simulation studies (see e.g. [2],[24]), with real data obtained using statistical models in QTL mapping experiments. Here, we have actually transformed statistical genetic effects, obtained from real data of an F[2] experimental population, into functional genetic effects as allele substitutions performed from a reference individual. Concerning these functional estimates of genetic effects, we have shown in the previous section how they can improve the understanding of the genetic system by inspecting a two-locus model obtained from real data. Notice that when changing the reference of the model, the genetic effects can change their magnitudes and even their signs (see Tables 2 and 3). Therefore, for reaching the kind of conclusions we obtain above for the evolution of a population from an ancestral genotype “A[1]A[1]B[1]B[1]”, the genetic effects have to be described with a model that uses that particular genotype as reference point. Those are the only ones that are meaningful for analyzing the problem under consideration. The HKR with NOIA The computation of genetic effects using NOIA in the example with real data required the use of the theory developed in this article, the implementation of the model to handle missing data (1). When performing IM for searching for the positions and estimates of genetic effects in QTL mapping experiments, missing data occurs at two levels. First, the genotype of the QTL located in a marker interval is not known and needs to be estimated from the observed flanking marker genotypes. Second, in most experimental datasets there are missing genotypes for many genetic markers that can be imputed from genotypes at closely linked informative markers. Thus, the implementation of HKR with NOIA enables us to perform IM with a regression method and using a model of genetic effects that is orthogonal regardless of how far the available data is from the HWP. The HKR has been assessed as a good approximation of IM when dense marker maps are available and missing data are few and random [25],[26], but some disadvantages of this method have also been reported. The residual variance of the HKR has been found to be biased, as first pointed out by Xu [27]. Kao [26] further characterized that bias and found it to be noticeable under LD or strong epistasis. Nevertheless, even in those cases, the estimated genetic effects themselves are not biased [26]. Feenstra et al. [25] have developed a new method, the estimating equation method, which reduces the reported bias of the HKR and is therefore more suitable in the cases when it has proven to be strongly biased. However, the traditional HKR is still popular and convenient mainly due to its dramatic advantage in computational time [25], and this is why in this study we have chosen this method for implementing NOIA for IM. Toward a Completely General Model of Genetic Effects Models of genetic effects need to be further generalized. Two important cases that need to be accounted for are multiple-alleles and LD, which have been addressed in several recent publications dealing with statistical models of genetic effects. Yang [18] has developed a model to test the importance of LD in QTL data, by designing a component of variance due to LD. This statistical model, like the statistical formulation of NOIA, actually accounts for departures from HWP, although it is restricted to the two-locus case. Wang and Zeng [20] have developed a statistical model with multiple alleles in which they also test the importance of LD, in this case by computing all the covariances between the components of variance, due to LD. It is, however, restricted to HWP. Mao et al. [28] have developed a model to account for LD when computing genetic effects in a two-locus model specially designed for single nucleotide polymorphisms. The desired situation, which we are currently aiming toward is to consider all the different departures from ideal situations gathered under the umbrella of a general formal framework of genetic effects. Genetic Models under Departures from Hardy-Weinberg We use a simulated numerical example to show how departures from the HWP affect the estimates of genetic effects in several models of genetic effects. We simulate a trait controlled by two biallelic loci, A and B, generating several populations with the second locus affected by departures from the HWP in several degrees. The genotype-phenotype map corresponds to the phenotype mean of the population and all the genetic effects being equal to one in an ideal F[2] population (Table 1). We first constructed data for an ideal F[2] population of 800 individuals in strict HWP and LE. From this population we subsequently removed 24 A[2]A[2] individuals and added eight A[1]A[1] and 16 A[1]A[2] individuals in a balanced way, without affecting the population size, the frequencies at locus B, the proportion of A[1]A[1] versus A[1]A[2] individuals or LE. Only deviations from the HWP against the A[2]A[2] homozygote were introduced in the data. We repeated this procedure eight times in total and saved each population data, until only eight A[2]A[2] individuals remained. We measured the departures from HWP in these populations by computing the percentage of reduction of A[2]A[2] individuals relative to A[1]A[1], which of course was zero in the ideal F[2] population we started from. We analyzed the simulated data by computing the genetic effects of the system using three models: NOIA, G2A and F[2]. The F[2] model, described in Text S1, is constructed for F[2] populations, although it is only orthogonal in ideal F[2] populations with the genotypic frequencies being exactly ¼, ½, ¼. The NOIA model is as described in Text S1. The G2A model [19] accounts for any gene frequencies of—and it is orthogonal at—populations under exact HWP. Álvarez-Castro and Carlborg [16] obtained it as a particular case of NOIA by constraining (S5), in Text S1, to HWP: where p is the frequency of allele A[1]. The genetic effects were computed for each individual genotype using the genetic-effects design matrices and the estimates of genetic effects from each of the three models, which produced different outcomes. The additive, dominance and interaction variances were obtained as the correspondent sums of the variances of each genetic effect (for instance, the sum of the variances of the additive effects of each of the loci gives the additive variance). Implementing the Haley-Knott Regression with NOIA We recall the required theory behind the HKR and NOIA in Text S1. Here we extend the NOIA model to IM with HKR. We do this by implementing the genetic-effects design matrix of the statistical formulation of NOIA, S[S] (S5), in the HKR method, as we do with the F[2] model in Text S1. The original genotype frequencies p[11], p[12] and p[22] in the NOIA statistical formulation (S5) are the exact genotype frequencies at the considered loci. In the HKR, the genotype frequencies are not known, but can be estimated as: where N is the number of individuals in the population under study. We implement this model in the general expression of the HKR (S4), in Text S1, and obtain: Let G^* be the column-vector of observed phenotypes, G^*[k], k = 1,…,N, ε the corresponding vector of errors, and Z, which is an N×3-matrix whose rows are the vectors ω[k] (S4). With this notation, the general expression of regression (S4) is:(1) This has a straightforward extension to several loci with LE. The S[S] matrix and the E vector can be extended as in Álvarez-Castro and Carlborg [16]. The Z matrix can be extended as the row-wise Kronecker product of the matrices of the single loci, also as in Álvarez-Castro and Carlborg [16], albeit in that article the matrix accounted for only complete marker information, instead of for IM with HKR, or for missing data probabilities. For instance, for a two-locus (A and B) case, the Z[AB] matrix is an N×9-matrix that is built as: Experimental Data Carlborg et al. [21] identified 10 genome-wide significant QTL for growth rate in chicken from eight to 46 days of age in an F[2] intercross of roughly 800 individuals between one Red junglefowl male and three White leghorn females. A simultaneous two-dimensional genome scan was performed to identify pairs of interacting loci regardless of whether their marginal effects were significant or not. We have studied in more detail one of the detected pairs involving QTL on chromosome 2 (486 cM) and 3 (117 cM), hereafter loci A and B respectively. This pair was selected for a number of reasons. First, these loci interact epistatically, in spite of showing no significant marginal effects in the studied population. Second, since they are located in different chromosomes, there is no physical linkage between them. Third, the genotype frequencies at locus A depart significantly from the HWP (p<0.05) when considered independently, but the departure is not significant after applying multiple testing correction accounting for the rest of the detected QTL. Thus, locus A is an example of the departure of the HWP that is expected in QTL experiments just due to sampling. The level of departure from the HWP for the evaluated pair roughly equals the 30% deviation in Figures 1 and 2. We have computed the genetic effects of the epistatic pair involving loci A and B, using several models of genetic effects. First we used the F[∞] model, which was the one also used by Carlborg et al. [21] as it was the model originally implemented in HKR [9],[29]. Second, the F[2] model, which was designed for F[2] populations. Third, the G2A model, which can account for departures of the gene frequencies from ½, and finally the statistical formulation of NOIA, which can adapt to the genotype frequencies of the sample used for the estimation of QTL effects. In these analysis we have made use of the theory developed in this article: the implementation of HKR with NOIA. These developments enable us to deal both with missing data and with the estimation of genetic effects of positions inside the marker intervals. Transforming Errors of the Estimates in NOIA Álvarez-Castro and Carlborg [16] have shown how to transform genetic effects obtained using an orthogonal-statistical model in one population, into statistical genetic effects at any other population or into functional genetic effects from any reference individual. In each of these two cases, the transformation is done as in expression (S6), in Text S1, using the S matrix—the genetic-effect design matrix—of the orthogonal system, G = S[1]⋅E[1], and the inverse of the S matrix in the new system, G = S[2]⋅E[2]:(2) be the transformation matrix. From (2) and (3), the estimates in E[1] can be expressed as functions of the estimates in E[2] as:(4) where the letters and their superindexes indicate the vector, or matrix, they are scalars of and the subindexes indicate the position of the scalars inside the vectors or matrices. From (2), the variances of the estimates E[2], can be computed from the ones in E[1] as:(5) Now for obtaining the vector of variances of the estimates E[2], V[2], from the vector of variances of the estimates E[1], V[1], we just rewrite (3) in algebraic notation as:(6) where the open circle stands for the Hadamard product—giving the matrix whose scalars are the product of the scalars at the same position in the original matrices. Supporting Information Background information on the HKR and NOIA. Concepts and equations related to the original formulation of the HKR and to the NOIA statistical formulation that will help the reader to deeper understand some details of the methods used in the article. (0.09 MB DOC) The authors thank Lars Rönnegård and Carl Nettelbald for fruitful discussion. Örjan Carlborg acknowledges founding from Knut and Alice Wallenberg Foundation. Author Contributions Analyzed the data: JÁ AL. Contributed reagents/materials/analysis tools: JÁ AL ÖC. Wrote the paper: JÁ. Conceived part of the content of the article and developed part of the theory: JÁ ÖC. A question on Equation (6) Posted by jzma Updated email address Posted by PLoS_Genetics Publisher's Note: Correction to Figure 2 Posted by PLoS_Genetics
{"url":"http://www.plosgenetics.org/article/info:doi/10.1371/journal.pgen.1000062?imageURI=info:doi/10.1371/journal.pgen.1000062.g002","timestamp":"2014-04-17T18:14:12Z","content_type":null,"content_length":"145066","record_id":"<urn:uuid:23a525d4-7be4-496e-a39b-4be221c6c1b1>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Lagrange Theorem etc. August 31st 2008, 09:18 PM #1 Jun 2008 Lagrange Theorem etc. Define a sequence $\{a_{i} \}$ by $a_1 = 4$ and $a_{i+1} = 4^{a_{i}}$ for $i \geq 1$. Which integers between $00$ and $99$ inclusive occur as the last two digits in the decimal expansion of infinitely many $a_{i}$. It would take a long time to do this by brute force. What exactly does this mean (this was a hint that was given)?: If $4$ does not divide $n$, then $4^{a} \mod n$ is determined by $a \mod \phi (n)$. So for example, $4^{a}\mod15$ is determined by $a \mod \phi(15)$. I think you are expected to know that if $gcd(a,m) =1$ then $a^{\phi(m)}\equiv 1 \quad (mod m)$, where phi is the Euler phi function I suspect you are capable of sussing things out from there. September 1st 2008, 12:50 AM #2 Senior Member Dec 2007
{"url":"http://mathhelpforum.com/advanced-algebra/47301-lagrange-theorem-etc.html","timestamp":"2014-04-19T17:48:27Z","content_type":null,"content_length":"33879","record_id":"<urn:uuid:068601d1-c4c3-4334-b04d-feaf3868c004>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Topological Defects in Cosmology - A. Gangui 2.1. Local and global monopoles and domain walls Monopoles are yet another example of stable topological defects. Their formation stems from the fact that the vacuum expectation value of the symmetry breaking Higgs field has random orientations (<^ a> pointing in different directions in group space) on scales greater than the horizon. One expects therefore to have a probability of order unity that a monopole configuration will result after the phase transition (cf. the Kibble mechanism). Thus, about one monopole per Hubble volume should arise and we have for the number density n[monop] ~ 1 / H^-3 ~ T[c]^6 / m[P]^3, where T[c] is the critical temperature and m[P] is Planck mass, when the transition occurs. We also know the entropy density at this temperature, s ~ T[c]^3, and so the monopole to entropy ratio is n[monop] / s T[c] / m[P])^3. In the absence of non-adiabatic processes after monopole creation this constant ratio determines their present abundance. For the typical value T[c] ~ 10^14 GeV we have n[monop] / s ~ 10^ -13. This estimate leads to a present [monop] h^2 ^11, for the superheavy monopoles m[monop] ^16 GeV that are created ^(6). This value contradicts standard cosmology and the presently most attractive way out seems to be to allow for an early period of inflation: the massive entropy production will hence lead to an exponential decrease of the initial n[monop] / s ratio, yielding [monop] consistent with observations. ^(7) In summary, the broad-brush picture one has in mind is that of a mechanism that could solve the monopole problem by `weeping' these unwanted relics out of our sight, to scales much bigger than the one that will eventually become our present horizon today. Note that these arguments do not apply for global monopoles as these (in the absence of gauge fields) possess long-range forces that lead to a decrease of their number in comoving coordinates. The large attractive force between global monopoles and antimonopoles leads to a high annihilation probability and hence monopole over-production does not take place. Simulations performed by Bennett & Rhie [1990] showed that global monopole evolution rapidly settles into a scale invariant regime with only a few monopoles per horizon volume at all times. Given that global monopoles do not represent a danger for cosmology one may proceed in studying their observable consequences. The gravitational fields of global monopoles may lead to matter clustering and CMB anisotropies. Given an average number of monopoles per horizon of ~ 4, Bennett & Rhie [1990] estimate a scale invariant spectrum of fluctuations ([H] ~ 30 G ^2 at horizon crossing ^(8). In a subsequent paper they simulate the large-scale CMB anisotropies and, upon normalization with COBE-DMR, they get roughly G ^2 ~ 6 × 10^-7 in agreement with a GUT energy scale Bennett & Rhie, 1993]. However, as we will see in the CMB sections below, current estimates for the angular power spectrum of global defects do not match the most recent observations, their main problem being the lack of power on the degree angular scale once the spectrum is normalized to COBE on large scales. Let us concentrate now on domain walls, and briefly try to show why they are not welcome in any cosmological context (at least in the simple version we here consider - there is always room for more complicated (and contrived) models). If the symmetry breaking pattern is appropriate at least one domain wall per horizon volume will be formed. The mass per unit surface of these two-dimensional objects is given by ~ ^1/2 ^3, where ^1/2 ^3 H^-2. This implies a mass energy density roughly given by [DW] ~ ^3 t^-1 and we may readily see now how the problem arises: the critical density goes as [crit] ~ t^-2 which implies [DW](t) ~ (m[P])^2 t. Taking a typical GUT value for [DW](t ~ 10^-35sec) ~ 1 already at the time of the phase transition. It is not hard to imagine that today this will be at variance with observations; in fact we get [DW](t ~ 10^18sec) ~ 10^52. This indicates that models where domain walls are produced are tightly constrained, and the general feeling is that it is best to avoid them altogether [see Kolb & Turner, 1990 for further details; see also Dvali et al., 1998, Pogosian & Vachaspati,2000 ^(9) and Alexander et al., 1999 for an alternative solution]. ^6 These are the actual figures for a gauge SU(5) GUT second-order phase transition. Preskill [1979] has shown that in this case monopole antimonopole annihilation is not effective to reduce their abundance. Guth & Weinberg [1983] did the case for a first-order phase transition and drew qualitatively similar conclusions regarding the excess of monopoles. Back. ^7 The inflationary expansion reaches an end in the so-called reheating process, when the enormous vacuum energy driving inflation is transferred to coherent oscillations of the inflaton field. These oscillations will in turn be damped by the creation of light particles (e.g., via preheating) whose final fate is to thermalise and reheat the universe. Back. ^8 The spectrum of density fluctuations on smaller scales has also been computed. They normalize the spectrum at 8 h^-1 Mpc and agreement with observations lead them to assume that galaxies are clustered more strongly than the overall mass density, this implying a `biasing' of a few [see Bennett, Rhie & Weinberg, 1993 for details]. Back. ^9 Animations of monopoles colliding with domain walls can be found in `LEP' page at http://theory.ic.ac.uk/~LEP/figures.html Back.
{"url":"http://ned.ipac.caltech.edu/level5/March02/Gangui/Gangui2_1.html","timestamp":"2014-04-21T02:37:00Z","content_type":null,"content_length":"10225","record_id":"<urn:uuid:a7ef6a22-faa4-4976-9d52-823581fa8adf>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Internet Routing and Internet Service Provision Henry Lin EECS Department University of California, Berkeley Technical Report No. UCB/EECS-2009-105 July 29, 2009 The recent development and expansion of the Internet has created many technical challenges in diverse research areas. In this thesis, we study recent problems arising in the area of Internet routing and Internet service provision. In the area of Internet routing, we analyze properties of the selfish routing model, which is a mathematical model of users selfishly routing traffic in a network without regard to their effect on other users. Additionally, we study the properties of various random graph models which have been used to model the Internet, and utilize the properties of graphs generated by those random models to develop simple compact routing schemes, which can allow network routing without having each node store very much information. In the area of Internet service provision, we study an online bipartite matching problem, in which a set of servers seeks to provide service to arriving clients with as little interruption as possible. The central theme of this thesis is to analyze precise mathematical models of Internet routing and Internet service provision, and in those models, we show certain properties hold or derive algorithms which work with high probability. The first model we study is the selfish routing model. In the selfish routing model, we analyze the efficiency of users selfishly routing traffic and study a counterintuitive phenomenon known as Braess's Paradox, which states that adding a link to a network with selfish routing may actually increase the latency for all users. We produce tight and nearly tight bounds on the maximum increase in latency that can occur due to Braess's Paradox in single-commodity and multicommodity networks, respectively. We also produce the first nearly tight bounds on the maximum latency that can occur when traffic is routed selfishly in multicommodity networks, relative to the maximum latency that occurs when traffic is routed optimally. In the second part of the thesis, we study random graph models which have been used to model the Internet, and look for properties of graphs generated by those models, which can be used to derive simple compact routing schemes. The goal of compact routing is to derive algorithms which minimize the information stored by nodes in the network, while maintaining the ability of all nodes to route packets to each other along relatively short paths. In this research area, we show that graphs generated by several random graph models used to model the Internet (e.g. the preferential attachment model), can be decomposed in a novel manner, which allows compact routing to be achieved easily. Along the way, we also prove that a Polya urns random process has good load balancing properties, which may be of independent interest. In the last part of the thesis, we study an online bipartite matching problem, which models a problem occurring in the area of Internet service provision. In our online bipartite matching problem, we imagine that we have some servers capable of providing some service, and clients arrive one at a time to request service from a subset of servers capable of servicing their request. The goal of the problem is to assign the arriving clients to servers capable of servicing their requests, all while minimizing the number of times that a client needs to be switched from one server to another server. Although prior worst case analysis for this problem has not yielded interesting results, we show tight bounds on the number of times clients need to be switched in a few natural models. As we analyze these problems arising in the Internet from a precise mathematical perspective, we also seek to reflect on the process used to solve mathematical problems. Although the thought process can sometimes be difficult to describe, in one case, we attempt to provide a step-by-step account of how the final result was proved, and attempt to describe a high level algorithm, which summarizes the methodology that used to prove it. Advisor: Christos Papadimitriou and Satish Rao BibTeX citation: Author = {Lin, Henry}, Title = {Internet Routing and Internet Service Provision}, School = {EECS Department, University of California, Berkeley}, Year = {2009}, Month = {Jul}, URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-105.html}, Number = {UCB/EECS-2009-105}, Abstract = {The recent development and expansion of the Internet has created many technical challenges in diverse research areas. In this thesis, we study recent problems arising in the area of Internet routing and Internet service provision. In the area of Internet routing, we analyze properties of the selfish routing model, which is a mathematical model of users selfishly routing traffic in a network without regard to their effect on other users. Additionally, we study the properties of various random graph models which have been used to model the Internet, and utilize the properties of graphs generated by those random models to develop simple compact routing schemes, which can allow network routing without having each node store very much information. In the area of Internet service provision, we study an online bipartite matching problem, in which a set of servers seeks to provide service to arriving clients with as little interruption as possible. The central theme of this thesis is to analyze precise mathematical models of Internet routing and Internet service provision, and in those models, we show certain properties hold or derive algorithms which work with high probability. The first model we study is the selfish routing model. In the selfish routing model, we analyze the efficiency of users selfishly routing traffic and study a counterintuitive phenomenon known as Braess's Paradox, which states that adding a link to a network with selfish routing may actually increase the latency for all users. We produce tight and nearly tight bounds on the maximum increase in latency that can occur due to Braess's Paradox in single-commodity and multicommodity networks, respectively. We also produce the first nearly tight bounds on the maximum latency that can occur when traffic is routed selfishly in multicommodity networks, relative to the maximum latency that occurs when traffic is routed optimally. In the second part of the thesis, we study random graph models which have been used to model the Internet, and look for properties of graphs generated by those models, which can be used to derive simple compact routing schemes. The goal of compact routing is to derive algorithms which minimize the information stored by nodes in the network, while maintaining the ability of all nodes to route packets to each other along relatively short paths. In this research area, we show that graphs generated by several random graph models used to model the Internet (e.g. the preferential attachment model), can be decomposed in a novel manner, which allows compact routing to be achieved easily. Along the way, we also prove that a Polya urns random process has good load balancing properties, which may be of independent interest. In the last part of the thesis, we study an online bipartite matching problem, which models a problem occurring in the area of Internet service provision. In our online bipartite matching problem, we imagine that we have some servers capable of providing some service, and clients arrive one at a time to request service from a subset of servers capable of servicing their request. The goal of the problem is to assign the arriving clients to servers capable of servicing their requests, all while minimizing the number of times that a client needs to be switched from one server to another server. Although prior worst case analysis for this problem has not yielded interesting results, we show tight bounds on the number of times clients need to be switched in a few natural models. As we analyze these problems arising in the Internet from a precise mathematical perspective, we also seek to reflect on the process used to solve mathematical problems. Although the thought process can sometimes be difficult to describe, in one case, we attempt to provide a step-by-step account of how the final result was proved, and attempt to describe a high level algorithm, which summarizes the methodology that used to prove it.} EndNote citation: %0 Thesis %A Lin, Henry %T Internet Routing and Internet Service Provision %I EECS Department, University of California, Berkeley %D 2009 %8 July 29 %@ UCB/EECS-2009-105 %U http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-105.html %F Lin:EECS-2009-105
{"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-105.html","timestamp":"2014-04-17T12:36:30Z","content_type":null,"content_length":"12569","record_id":"<urn:uuid:dd633660-47ef-473b-870d-822d291b8d4a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Cube root asymptotics Results 1 - 10 of 96 , 2000 "... this paper, we consider the asymptotic behaviour of regression estimators that minimize the residual sum of squares plus a penalty proportional to ..." - Econometrica , 2000 "... Threshold models have a wide variety of applications in economics. Direct applications include models of separating and multiple equilibria. Other applications include empirical sample splitting when the sample split is based on a continuously-distributed variable such as firm size. In addition, thr ..." Cited by 83 (3 self) Add to MetaCart Threshold models have a wide variety of applications in economics. Direct applications include models of separating and multiple equilibria. Other applications include empirical sample splitting when the sample split is based on a continuously-distributed variable such as firm size. In addition, threshold models may be used as a parsimonious strategy for nonparametric function estimation. For example, the threshold autoregressive model Ž TAR. is popular in the nonlinear time series literature. Threshold models also emerge as special cases of more complex statistical frameworks, such as mixture models, switching models, Markov switching models, and smooth transition threshold models. It may be important to understand the statistical properties of threshold models as a preliminary step in the development of statistical tools to handle these more complicated structures. Despite the large number of potential applications, the statistical theory of threshold estimation is undeveloped. It is known that threshold estimates are super-consistent, but a distribution theory useful for testing and inference has yet to be provided. This paper develops a statistical theory for threshold estimation in the regression context. We allow for either cross-section or time series observations. Least squares estimation of the regression parameters is considered. An asymptotic distribution theory for the regression estimates Ž the threshold and the regression slopes. is developed. It is found that the distribution of the threshold estimate is nonstandard. A method to construct asymptotic confidence intervals is developed by inverting the likelihood ratio statistic. It is shown that this yields asymptotically conservative confidence regions. Monte Carlo simulations are presented to assess the accuracy of the asymptotic approximations. The empirical relevance of the theory is illustrated through an application to the multiple equilibria growth model of Durlauf and Johnson Ž 1995.. - In Handbook of Econometrics , 2001 "... The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one’s data. It amounts to treating the data as if they were the population for the purpose of evaluating the distribution of interest. Under mild regularity conditions, the bootstrap yields an a ..." Cited by 75 (1 self) Add to MetaCart The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one’s data. It amounts to treating the data as if they were the population for the purpose of evaluating the distribution of interest. Under mild regularity conditions, the bootstrap yields an approximation to the distribution of an estimator or test statistic that is at least as accurate as - J. Comput. Graph. Statist "... A distribution that arises in problems of estimation of monotone functions is that of the location of the maximum of two-sided Brownian motion minus a parabola. Using results from the � rst author’s earlier work, we present algorithms and programs for computation of this distribution and its quantil ..." Cited by 34 (11 self) Add to MetaCart A distribution that arises in problems of estimation of monotone functions is that of the location of the maximum of two-sided Brownian motion minus a parabola. Using results from the � rst author’s earlier work, we present algorithms and programs for computation of this distribution and its quantiles. We also present some comparisons with earlier computations and simulations. , 1995 "... Maximum likelihood estimation for the proportional hazards model with interval censored data is considered. The estimators are computed by profile likelihood methods using Groeneboom's iterative convex minorant algorithm. Under appropriate regularity conditions, the maximum likelihood estimator for ..." Cited by 32 (5 self) Add to MetaCart Maximum likelihood estimation for the proportional hazards model with interval censored data is considered. The estimators are computed by profile likelihood methods using Groeneboom's iterative convex minorant algorithm. Under appropriate regularity conditions, the maximum likelihood estimator for the regression parameter is shown to be asymptotically normal and efficient. Two approaches for estimation of the variance-covariance matrix for the estimated regression parameter are proposed: one uses the inverse of the observed information matrix, another uses the curvature of the profile likelihood function. An example is given to illustrate the proposed methods. - Ann. Statist , 2001 "... \Ve study the problem of testing for equality at a fixed point in the setting of nonparametric estimation of a monotone function. The likelihood ratio test for this hypothesis is derived in the particular case of interval censoring (or current status data) and its limiting distribution is obtained. ..." Cited by 27 (17 self) Add to MetaCart \Ve study the problem of testing for equality at a fixed point in the setting of nonparametric estimation of a monotone function. The likelihood ratio test for this hypothesis is derived in the particular case of interval censoring (or current status data) and its limiting distribution is obtained. The limiting distribution is that of the integral of the difference of the squared slope processes corresponding to a canonical version of the problem involving Brownian motion + t2 and greatest convex minorants thereof. 2ROI AI291968- - BERNOULLI , 1998 "... ..." , 1993 "... JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms ..." Cited by 17 (5 self) Add to MetaCart JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms - Ann. Statist , 1998 "... This paper completes the extension to higher dimensions for both regression and multivariate location models. 1 ..." - Econometrica , 2006 "... This paper provides a first order asymptotic theory for generalized method of moments (GMM) estimators when the number of moment conditions is allowed to increase with the sample size and the moment conditions may be weak. Examples in which these asymptotics are relevant include instrumental variabl ..." Cited by 14 (1 self) Add to MetaCart This paper provides a first order asymptotic theory for generalized method of moments (GMM) estimators when the number of moment conditions is allowed to increase with the sample size and the moment conditions may be weak. Examples in which these asymptotics are relevant include instrumental variable (IV) estimation with many (possibly weak or uninformed) instruments and some panel data models that cover moderate time spans and have correspondingly large numbers of instruments. Under certain regularity conditions, the GMM estimators are shown to converge in probability but not necessarily to the true parameter, and conditions for consistent GMM estimation are given. A general framework for the GMM limit distribution theory is developed based on epiconvergence methods. Some illustrations are provided, including consistent GMM estimation of a panel model with time varying individual effects, consistent limited information maximum likelihood estimation as a continuously updated GMM estimator, and consistent IV structural estimation using large numbers of weak or irrelevant instruments. Some simulations are reported.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2691494","timestamp":"2014-04-20T00:27:23Z","content_type":null,"content_length":"35167","record_id":"<urn:uuid:5c3310d6-7a60-4cb5-ac47-56a6d83190cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
search results Expand all Collapse all Results 1 - 3 of 3 1. CMB 2011 (vol 55 pp. 48) Freyd's Generating Hypothesis for Groups with Periodic Cohomology Let $G$ be a finite group, and let $k$ be a field whose characteristic $p$ divides the order of $G$. Freyd's generating hypothesis for the stable module category of $G$ is the statement that a map between finite-dimensional $kG$-modules in the thick subcategory generated by $k$ factors through a projective if the induced map on Tate cohomology is trivial. We show that if $G$ has periodic cohomology, then the generating hypothesis holds if and only if the Sylow $p$-subgroup of $G$ is $C_2$ or $C_3$. We also give some other conditions that are equivalent to the GH for groups with periodic cohomology. Keywords:Tate cohomology, generating hypothesis, stable module category, ghost map, principal block, thick subcategory, periodic cohomology Categories:20C20, 20J06, 55P42 2. CMB 2008 (vol 51 pp. 81) Homotopy Formulas for Cyclic Groups Acting on Rings The positive cohomology groups of a finite group acting on a ring vanish when the ring has a norm one element. In this note we give explicit homotopies on the level of cochains when the group is cyclic, which allows us to express any cocycle of a cyclic group as the coboundary of an explicit cochain. The formulas in this note are closely related to the effective problems considered in previous joint work with Eli Aljadeff. Keywords:group cohomology, norm map, cyclic group, homotopy Categories:20J06, 20K01, 16W22, 18G35 3. CMB 1997 (vol 40 pp. 341) The stable and unstable types of classifying spaces The main purpose of this paper is to study groups $G_1$, $G_2$ such that $H^\ast(BG_1,{\bf Z}/p)$ is isomorphic to $H^\ast(BG_2,{\bf Z}/p)$ in ${\cal U}$, the category of unstable modules over the Steenrod algebra ${\cal A}$, but not isomorphic as graded algebras over ${\bf Z}/p$. Categories:55R35, 20J06
{"url":"http://cms.math.ca/cmb/msc/20J06?fromjnl=cmb&jnl=CMB","timestamp":"2014-04-17T09:46:48Z","content_type":null,"content_length":"29180","record_id":"<urn:uuid:f0ac8d1f-31c4-4680-b4f6-22b154387f73>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
- - Please install Math Player to see the Math Symbols properly Click on a 'View Solution' below for other questions: DDD In the given figure, O is the center of the circle. What is m∠QPR, if m∠OQR = 26? View Solution DDD A regular hexagon is inscribed in a circle. What is the measure of ∠AOC? View Solution DDD In the given figure, O is the center of the circle. What is m∠QPR, if m∠OQR = 18? View Solution DDD In circle with center O, the arc intercepted by ∠ABC is _______. View Solution DDD Find the value of x . DDD View Solution DDD Identify the appropriate relation among x, y & z from the figure. DDD View Solution DDD The circle with center O is circumscribed about quadrilateral ________. DDD View Solution DDD Find m∠ACB in circle shown. DDD View Solution DDD In circle with center O, x equals ______. DDD View Solution DDD Find the value of x. DDD View Solution DDD Find ∠CAB using the figure shown. DDD View Solution DDD In fig. Find m(Arc DE). View Solution DDD Use the figure to find ∠SRQ . DDD View Solution DDD A, B, C are three points on a circle such that ∠ACB = 90^o. Then line segment AB¯ is the DDD View Solution DDD In fig.Find m Arc BE? DDD View Solution DDD In fig.Find the value of x. View Solution DDD Find the value of x where O is the center of the circle. DDD View Solution DDD Find ∠CBD. DDD View Solution DDD Find ∠ACB. View Solution DDD Find m∠ABC. View Solution DDD Find (x + y). DDD View Solution DDD In the circle with center O, m∠ABD equals View Solution DDD In the circle with center O, (x - y) equals ______. DDD View Solution DDD In fig. Find m(Arc AB). DDD View Solution DDD In the circle with center O, ABCD is a cyclic quadrilateral, with AB¯ parallel to DC¯. (x + y) equals ____. e View Solution e Use the figure to find x, given AB¯ || DC¯. e View Solution e In the circle with center O, PT¯ is a tangent at M. ∠MQR equals _____. e View Solution e In the circle with center O, QR↔ is the tangent at P. ∠MPQ + ∠NPR equals _____. View Solution e In the circle with center O, DK↔ is the tangent at C. Find ∠BCK. View Solution e In the circle with center O, DP↔ is the tangent at C. Find ∠ACD. View Solution e In the circle with center O, DK↔ is the tangent at C. ∠BCK = ? e View Solution e If central angle is 65°, what is the measure of the major arc whose end points are at the intersection of the central angle and the circle. View Solution e A circular wire of diameter 132 cm is cut and placed along the circumference of a circle of radius 1.43 meter. The angle subtended by the wire at the center of the circle is ________ View Solution
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgbmkxkjabf&.html","timestamp":"2014-04-19T05:13:37Z","content_type":null,"content_length":"75853","record_id":"<urn:uuid:42505dd1-06c6-489d-a31d-76697cce68e8>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Foundations of Physics: Edward Witten Versus Joy Christian Versus Stephen Wolfram Authors: David Brown The pioneers of string theory have not convinced such critics as Philip Anderson, Sheldon Glashow, Robert B. Laughlin, and Burton Richter. Does nondeterministic M-theory need to be replaced by deterministic M-theory according to the ideas of J. Christian? Does nondeterministic M-theory need to be replaced by modified M-theory with Wolfram’s mobile automaton operating on a network of information below the Planck scale? If X is to M-theory as Kepler’s laws are to Newton’s theory of gravity, then what is X? There is overwhelming empirical evidence in favor of the Rañada-Milgrom effect because of two things: (1) the work of Milgrom, McGaugh, and Kroupa and (2) an easy scaling argument. Are the ideas of Witten, Christian, and Wolfram essential for understanding Milgrom’s acceleration law in terms of the foundations of physics? Comments: 4 Pages. Download: PDF Submission history [v1] 2012-05-17 09:03:32 Unique-IP document downloads: 390 times Add your own feedback and questions here: You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted as unhelpful. comments powered by
{"url":"http://vixra.org/abs/1205.0070","timestamp":"2014-04-17T15:28:08Z","content_type":null,"content_length":"7708","record_id":"<urn:uuid:d40fb89e-c0ef-45d0-904f-a8f4bbb61ccd>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Inverse secant: Representations through more general functions Representations through more general functions Through hypergeometric functions Involving [2]F[1] Involving [p]F[q] Through hypergeometric functions of two variables Through Meijer G Classical cases for the direct function itself Classical cases involving algebraic functions Classical cases involving algebraic functions in the arguments Classical cases involving unit step theta Classical cases for powers of sec^-1 Generalized cases for the direct function itself Generalized cases involving algebraic functions Generalized cases involving algebraic functions in the arguments Generalized cases involving unit step theta Generalized cases for powers of sec^-1 Through other functions Involving inverse Jacobi functions Involving some hypergeometric-type functions
{"url":"http://functions.wolfram.com/ElementaryFunctions/ArcSec/26/ShowAll.html","timestamp":"2014-04-18T23:32:44Z","content_type":null,"content_length":"54265","record_id":"<urn:uuid:87255713-3785-4150-aa3f-3982c45bc41d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Approximating the minimum spanning tree weight in sublinear time Results 11 - 20 of 29 "... Detecting and counting the number of copies of certain subgraphs (also known as network motifs or graphlets), is motivated by applications in a variety of areas ranging from Biology to the study of the World-Wide-Web. Several polynomial-time algorithms have been suggested for counting or detecting t ..." Cited by 8 (2 self) Add to MetaCart Detecting and counting the number of copies of certain subgraphs (also known as network motifs or graphlets), is motivated by applications in a variety of areas ranging from Biology to the study of the World-Wide-Web. Several polynomial-time algorithms have been suggested for counting or detecting the number of occurrences of certain network motifs. However, a need for more efficient algorithms arises when the input graph is very large, as is indeed the case in many applications of motif counting. In this paper we design sublinear-time algorithms for approximating the number of copies of certain constant-size subgraphs in a graph G. That is, our algorithms do not read the whole graph, but rather query parts of the graph. Specifically, we consider algorithms that may query the degree of any vertex of their choice and may ask for any neighbor of any vertex of their choice. The main focus of this work is on the basic problem of counting the number of length-2 paths and more generally on counting the number of stars of a certain size. Specifically, we design an algorithm that, given an approximation parameter 0 < ɛ < 1 and query access to a graph G, outputs an estimate ˆνs such that with high constant probability, (1−ɛ)νs(G) ≤ ˆνs ≤ (1+ɛ)νs(G), where νs(G) denotes the number of stars of size s + 1 in the graph. The expected query ( complexity and { running time of}) the algorithm are O - In Property Testing , 2010 "... Abstract. The aim of this article is to introduce the reader to the study of testing graph properties, while focusing on the main models and issues involved. No attempt is made to provide a comprehensive survey of this ..." Cited by 7 (0 self) Add to MetaCart Abstract. The aim of this article is to introduce the reader to the study of testing graph properties, while focusing on the main models and issues involved. No attempt is made to provide a comprehensive survey of this - In 32nd International Colloquium on Automata, Languages, and Programming , 2005 "... Abstract. In this paper we present a randomized constant factor approximation algorithm for the problem of computing the optimal cost of the metric Minimum Facility Location problem, in the case of uniform costs and uniform demands, and in which every point can open a facility. By exploiting the fac ..." Cited by 6 (1 self) Add to MetaCart Abstract. In this paper we present a randomized constant factor approximation algorithm for the problem of computing the optimal cost of the metric Minimum Facility Location problem, in the case of uniform costs and uniform demands, and in which every point can open a facility. By exploiting the fact that we are approximating the optimal cost without computing an actual solution, we give the first algorithm for this problem with running time O(n log 2 n), where n is the number of metric space points. Since the size of the representation of an n-point metric space is Θ(n 2), the complexity of our algorithm is sublinear with respect to the input size. We consider also the general version of the metric Minimum Facility Location problem and we show that there is no o(n 2)-time algorithm, even a randomized one, that approximates the optimal solution to within any factor. This result can be generalized to some related problems, and in particular, the cost of minimum-cost matching, the cost of bichromatic matching, or the cost of n/2-median cannot be approximated in o(n 2)-time. 1 "... We consider the problem of computing the weight of a Euclidean minimum spanning tree for a set of n points in Rd. We focus on the setting where the input point set is supported by certain basic (and commonly used) geometric data structures that can provide efficient access to the input in a struct ..." Cited by 5 (2 self) Add to MetaCart We consider the problem of computing the weight of a Euclidean minimum spanning tree for a set of n points in Rd. We focus on the setting where the input point set is supported by certain basic (and commonly used) geometric data structures that can provide efficient access to the input in a structured way. We present an algorithm that estimates with high probability the weight of a Euclidean minimum spanning tree of a set of points to within 1 + &quot; using only eO(pn poly(1=&quot;)) queries for constant d. The algorithm assumes that the input is supported by a minimal bounding cube enclosing it, by orthogonal range queries, and by cone approximate nearest neighbors queries. - Proc. Sixth Int’l Workshop Approximation Algorithms for Combinatorial Optimization Problems , 2003 "... Abstract. We initiate a study of property testing as applied to visual properties of images. Property testing is a rapidly developing area investigating algorithms that, with a small number of local checks, distinguish objects satisfying a given property from objects which need to be modified signif ..." Cited by 4 (1 self) Add to MetaCart Abstract. We initiate a study of property testing as applied to visual properties of images. Property testing is a rapidly developing area investigating algorithms that, with a small number of local checks, distinguish objects satisfying a given property from objects which need to be modified significantly to satisfy the property. We study visual properties of discretized images represented by n × n matrices of binary pixel values. We obtain algorithms with query complexity independent of n for several basic properties: being a half-plane, connectedness and convexity. 1 - IN PROCEEDINGS OF THE 22TH ANNUAL IEEE CONFERENCE ON COMPUTATIONAL COMPLEXITY , 2007 "... There exists a positive constant α < 1 such that for any function T (n) ≤ n α and for any problem L ∈ BPTIME(T (n)), there exists a deterministic algorithm running in poly(T (n)) time which decides L, except for at most a 2 −Ω(T (n) log T (n)) fraction of inputs of length n. The proof uses a nove ..." Cited by 2 (0 self) Add to MetaCart There exists a positive constant α < 1 such that for any function T (n) ≤ n α and for any problem L ∈ BPTIME(T (n)), there exists a deterministic algorithm running in poly(T (n)) time which decides L, except for at most a 2 −Ω(T (n) log T (n)) fraction of inputs of length n. The proof uses a novel derandomization technique based on a new type of randomness extractors, called exposure-resilient extractors. An exposure-resilient extractor is an efficient procedure that, from a random variable with imperfect min-entropy, produces randomness that passes all statistical tests including those that have bounded access to the random variable, with adaptive queries that can depend on the string being tested. More precisely, EXT: {0, 1} n × {0, 1} d → {0, 1} m is a (k, ɛ)-exposure resilient extractor resistant to q queries if, when the minentropy of x is at least k and y is random, EXT(x, y) looks ɛ-random to all statistical tests modeled by oracle circuits of unbounded complexity that can query q bits of x. Besides the extractor that is needed for the above derandomization (whose parameters are tailored for this application), we construct, for any δ < 1, a (k, ɛ)-exposure resilient extractor with query resistance n δ, k = n − n Ω(1) , ɛ = n −Ω(1) , m = n Ω(1) and d = O(log n). , 2010 "... We study the time and query complexity of approximation algorithms that access only a minuscule fraction of the input, focusing on two classical sources of problems: combinatorial graph optimization and manipulation of strings. The tools we develop find applications outside of the area of sublinear ..." Cited by 2 (0 self) Add to MetaCart We study the time and query complexity of approximation algorithms that access only a minuscule fraction of the input, focusing on two classical sources of problems: combinatorial graph optimization and manipulation of strings. The tools we develop find applications outside of the area of sublinear algorithms. For instance, we obtain a more efficient approximation algorithm for edit distance and distributed algorithms for combinatorial problems on graphs that run in a constant number of communication rounds. - ACM Transactions on Algorithms , 2007 "... Given a Euclidean graph G over a set P of n points in the plane, we are interested in verifying whether G is a Euclidean minimum spanning tree (EMST) of P or G differs from it in more than ǫn edges. We assume that G is given in adjacency list representation and the point/vertex set P is given in an ..." Cited by 2 (2 self) Add to MetaCart Given a Euclidean graph G over a set P of n points in the plane, we are interested in verifying whether G is a Euclidean minimum spanning tree (EMST) of P or G differs from it in more than ǫn edges. We assume that G is given in adjacency list representation and the point/vertex set P is given in an array. We present a property testing algorithm that accepts graph G if it is an EMST of P and that rejects with probability at least 2 3 if G differs from every EMST of P in more than ǫn edges. Our algorithm runs in O ( � n/ǫ · log2 (n/ǫ)) time and has a query complexity of O ( � n/ǫ · log(n/ǫ)). "... Abstract. There exists a positive constant α < 1 such that for any function T (n) ≤ n α and for any problem L ∈ BPTIME(T (n)), there exists a deterministic algorithm running in poly(T (n)) time which decides L, except for at most a 2 −Ω(T (n) log T (n)) fraction of inputs of length n. The proof use ..." Cited by 1 (0 self) Add to MetaCart Abstract. There exists a positive constant α < 1 such that for any function T (n) ≤ n α and for any problem L ∈ BPTIME(T (n)), there exists a deterministic algorithm running in poly(T (n)) time which decides L, except for at most a 2 −Ω(T (n) log T (n)) fraction of inputs of length n. The proof uses a novel derandomization technique based on a new type of randomness extractors, called exposure-resilient extractors. An exposure-resilient extractor is an efficient procedure that, from a random variable with imperfect randomness, produces randomness that passes all statistical tests including those that have bounded access to the random variable, with adaptive queries that can depend on the string being tested. More precisely, EXT: {0, 1} n × {0, 1} d → {0, 1} m is a (k, ɛ) -exposure-resilient extractor resistant to q queries if, when the minentropy of the random variable x is at least k and the random variable y is uniformly distributed, EXT(x, y) looks ɛ-random to all statistical tests modeled by oracle circuits of unbounded size that can query q bits of x. Besides the extractor that is needed for the proof of the main result (whose parameters are tailored for this application), we construct, for any δ < 1, a polynomial-time computable (k, ɛ)-exposure-resilient extractor with query resistance n δ, k = n − n Ω(1) , ɛ = n −Ω(1) , m = n Ω(1) and d = O(log n). , 2011 "... We give a nearly optimal sublinear-time algorithm for approximating the size of a minimum vertex cover in a graph G. The algorithm may query the degree deg(v) of any vertex v of its choice, and for each 1 ≤ i ≤ deg(v), it may ask for the ith neighbor of v. Letting VCopt(G) denote the minimum size of ..." Cited by 1 (0 self) Add to MetaCart We give a nearly optimal sublinear-time algorithm for approximating the size of a minimum vertex cover in a graph G. The algorithm may query the degree deg(v) of any vertex v of its choice, and for each 1 ≤ i ≤ deg(v), it may ask for the ith neighbor of v. Letting VCopt(G) denote the minimum size of vertex cover in G, the algorithm outputs, with high constant success probability, an estimate ̂VC (G) such that VCopt(G) ≤ ̂VC(G) ≤ 2VCopt(G) + ǫn, where ǫ is a given additive approximation parameter. We refer to such an estimate as a (2, ǫ)-estimate. The query complexity and running time of the algorithm are Õ ( ¯ d · poly(1/ǫ)), where ¯ d denotes the average vertex degree in the graph. The best previously known sublinear algorithm, of Yoshida et al. (STOC 2009), has query complexity and running time O(d4 /ǫ2), where d is the maximum degree in the graph. Given the lower bound of Ω ( ¯ d) (for constant ǫ) for obtaining such an estimate (with any constant multiplicative factor) due to Parnas and Ron (TCS 2007), our result is nearly optimal. In the case that the graph is dense, that is, the number of edges is Θ(n2), we consider another model, in which the algorithm may ask, for any pair of vertices u and v, whether there is an edge between u and v. We show how to adapt the algorithm that uses neighbor queries to this model and obtain an
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1162816&sort=cite&start=10","timestamp":"2014-04-16T23:26:43Z","content_type":null,"content_length":"39797","record_id":"<urn:uuid:54b969f8-d406-45a7-a127-03a9dd391ece>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Area of a Circle vs. Area of a Triangle I first saw this video over at MathFail. It’s cute, and I have to admit, kind of cool. But anything this simplistic always sends me into skeptical mode. Before we go any further, check it out: Do you believe it? This might be the question to start a discussion with a student. It’s certainly the first question that comes to my mind. If it were really this simple, wouldn’t we have used it to “prove” the formula for the area of a circle much earlier? What’s wrong with it? For the “proof” in the video to work, you have to assume (or believe) that the circumference is 2πr. This seems a bit cheesy to me, since that formula is as complex as the one we’re trying to prove. Not to mention quite closely related. But I’ll let this one go. The thing that really bothers me is that they use only a few chains – each of which has thickness. If you filled the inside of a circle (a disk) with concentric circles, none of those circles would have a thickness. In fact there’s an infinite number of those circles. Is it realistic to take each of those circles and fold them out and get a triangle? I believe the makers of the video intended this to be a fun way to remember the area formula of a circle. But the video would be better used to allow students to ponder the relationship of a circle to an isosceles triangle. What do you think? Are you okay with this video? Are you as skeptical as I am, or am I a little too sensitive? Share your thoughts in the comments and share the video on twitter and Facebook! You might also like: This post may contain affiliate links. When you use them, you support us so we can continue to provide free content! 4 Responses to Area of a Circle vs. Area of a Triangle 1. I liked this video a lot when I first saw it. Yes, it’s not a rigorous proof that I had to learn in school. But I think it’s a great starting point for a conversation. And it’s a lot less scary. So, as an intro or a supplement to the topic it works. Plus I can see how it can open up a conversation about how close should a “close enough” be, precision, and approximation error. Yelena recently posted..Last-Minute Christmas Decorations and Tangled Lights □ Good point, Yelena. Conversation starters are always welcome! 2. If you formulate it like Archimedes did (Proposition One on the page below) you do not need to assume anything about the circumference. □ Thanks so much for that link. I had no idea! Leave a reply
{"url":"http://mathfour.com/geometry/area-of-a-circle-vs-area-of-a-triangle","timestamp":"2014-04-21T02:00:46Z","content_type":null,"content_length":"32636","record_id":"<urn:uuid:fcde164f-c37d-4edd-a457-80455b1d32b8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
Guile Reference Manual: SRFI-27 7.5.19 SRFI-27 - Sources of Random Bits This subsection is based on the specification of SRFI-27 written by Sebastian Egner. This SRFI provides access to a (pseudo) random number generator; for Guile’s built-in random number facilities, which SRFI-27 is implemented upon, See Random. With SRFI-27, random numbers are obtained from a random source, which encapsulates a random number generation algorithm and its state.
{"url":"http://www.gnu.org/software/guile/manual/html_node/SRFI_002d27.html","timestamp":"2014-04-17T05:31:04Z","content_type":null,"content_length":"5101","record_id":"<urn:uuid:b3230b4f-6d47-49ed-b370-ac8683587130>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
UD PBLC - Problem Detail: Author's Teaching Notes "A Bad Day for Sandy Dayton" was written to introduce general education, non-science majors to the concepts of motion, force and mechanical energy in a problem-based learning format. I chose to create the scenario of a rear-end car crash just outside the Physics Building where class was held. The intersection was easily accessible for taking measurements and timing the traffic light as we worked through the problem, and the location made the problem realistic for students. I teach this course to one hundred and twenty students in a fixed seat auditorium. The students meet in the large class twice weekly for seventy-five minutes, then meet with their graduate teaching assistants one hour weekly for discussion, and two hours weekly for lab. The problem drives instruction in all three, where students in their permanent groups of four are challenged to learn the concepts of velocity, acceleration, Newton's First and Second Laws, and kinetic energy by working through the concepts related to the problem experimentally in lab, and by sharing information and research in the large class and discussion. I introduced the first page of the problem on the second day of the semester, and cycled in and out of the four-part problem for five weeks as students researched the learning issues related to the problem. Because many of the students in this course are weak in math skills, I chose to have students interpret graphs to find stopping distances of vehicles rather than use algebraic formulas. This problem could be adapted for use in an algebra-based physics course, by having students work formally through the mathematics involved. As I comment on how I used each page of the problem, I will make reference to how the problem could be used in higher level courses. Major Issues Part 1 of the problem introduces students to the concept of speed (graphically and algebraically), forces on objects traveling at constant speed, and reaction time. As students explore these topics in class, they look at a variety of representations of motion, learn to interpret graphs of motion, and sketch graphs from data they derive experimentally, or from strobe photos. Newton's First Law is discussed as they begin to look at the effect of forces on motion. Students find their own reaction time (by catching a dropped ruler and interpreting a graph of "Dropped Distance" vs. "Time" (see graph 1)) and discuss variables that would affect that time. The first two questions on page one are designed initiate student discussion about the concepts addressed in the unit: motion and forces. Most of us have experience with car accidents, either personally, through family or friends, or through TV and the movies. Students generally bring up questions such as the following: • "What were the speeds of the car and van?" • "How close was the van to the car?" • "What was the speed limit?" • "Where did the car and van end up?" • "Were there witnesses?" Students also mention measuring skid marks, finding the weight of the car and van, testing drivers for drugs and alcohol. In question three and four, I cycled out of the problem to introduce the concept of speed, and begin students discussing various ways of representing motion using strobe photos, and simulators. Students gathered their own data in lab and graphed it. Forces are introduced at this point, with students understanding that when objects move at a constant speed, the forces on the object are balanced. Newton's First Law is also introduced at this point in the problem. In answering questions five and six students recognize that drinking, drugs, speed of the car, condition of the road, lack of sleep, talking on the cell phone, and a variety of other factors affect a driver's reaction time. Part 2 motivates students to explore the safety issues of seatbelts and airbags, how they work, and what happens if they are not used. Newton's First Law is revisited when students learn about "three collisions in a collision" while answering the first and second question. The "first collision" refers to the impact that the car makes with some other object (such as another car, a wall, or a tree). Once the vehicle stops, the people (and other objects) in the car continue to move at the initial speed of the car until they are brought to a stop by an object (such as the steering wheel, windshield, seatbelt). In the "third collision", the internal organs (heart, lungs, brain) in the body will continue to move at the speed of the car until they are stopped by the frame of the body. The "third collision" is the deadliest. In answering question one, students will assume that Sandy will be getting X-rays for neck injury. This is a good time to describe the "three collisions in a collision" since the EKG will be checking for heart damage. Before answering question number three, I show crash dummy collisions on video and then ask students to analyze the collision in terms of forces on the car and person. This is a good time to remind students of Newton's First Law. You may want to have students research safety issues such as seatbelts and airbags, understand how they work, and how they minimize the forces on the body in a collision. Part 3 asks students to sift through information in an accident report in order to separate fact from conjecture. They also explore the relationship of the speed of a vehicle to its stopping distance, and factors that affect the ability of a car to stop. They analyze the intersection in question, measuring the width of the intersection, the timing of the yellow light, the speed limit to determine whether the intersection is safe (see Eisenkraft, A.(1998) Active Physics: Transportation It's about Time, Inc. Armonk, NY.). Since these students are non-science majors, I have them analyze graphs of Stopping Distance vs. Initial Velocity (see Graph 2) for specific deceleration rates (or coefficients of friction) rather than have them analyze this mathematically. The graphical interpretation seems to make this analysis more accessible to them. In a higher level physics class, students could analyze the frictional forces involved and calculate the initial velocity prior to In Part 4 the students are given more information to use in doing a final reconstruction of the accident. They use the Stopping Distance VS Initial Velocity graphs to find Sandy's and Jerry's speed prior to braking. They use an average reaction time (between one and two seconds) to determine how far the vehicle traveled while the driver was reacting. Since I did not introduce Conservation of Momentum in this course, I gave students the information they needed to determine the speed that Jerry's van decreased during impact. In a higher level course, students would be expected to work through that analysis on their own. Most students are able to recognize that Sandy and Jerry were both driving over the speed limit, and that Jerry was following to closely to Sandy's car. After the problem is finished, I introduce kinetic energy and energy transformations. Students, in their groups, analyze the transformation of energy in car crashes by recognizing that moving objects have kinetic energy. Since the car and van have no kinetic energy after they come to rest after the impact, student explore the question, "Where did the energy go?" They look at the heat energy produced by the frictional forces in braking and skidding, as well as the energy used to produce sound, the energy used in deforming materials, and the kinetic energy of the pieces of the vehicles that fly off in different directions. Classroom Management Each page of the problem sets the stage for students to explore and research the concepts connected to that part of the problem. Students in their groups discussed the answers to the questions on each page, and also listed their own questions (or learning issues). In a general class discussion, all the learning issues from the groups were listed on an overhead. Those learning issues drove instruction in the next class. For example on the first page, one group may ask how you can represent the speed of an object. That question directs the teacher to explore the various ways to graphically represent motion, asking students to graph the motion of an object going at constant speed, constantly increasing speed, or constantly decreasing speed. During the five-week unit, the class cycled through group discussion, general class mini-lecture, class discussion, independent research, and introduction to a new page in the problem.
{"url":"http://www.udel.edu/pblc/samples/badday/teaching.html","timestamp":"2014-04-20T18:52:35Z","content_type":null,"content_length":"12227","record_id":"<urn:uuid:1bfa98ce-b9eb-444d-a93c-c4f4b375b6f8>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Graphs generated by predicates assmann@karlsruhe.gmd.de (Uwe Assmann) Mon, 5 Apr 1993 13:28:03 GMT From comp.compilers | List of all articles for this month | Newsgroups: comp.theory,comp.databases,comp.compilers From: assmann@karlsruhe.gmd.de (Uwe Assmann) Followup-To: comp.theory Keywords: theory, question Organization: GMD Forschungsstelle an der Universitaet Karlsruhe Date: Mon, 5 Apr 1993 13:28:03 GMT I wonder whether there is a classification of graphs with different edge colors based on the 'generating predicates'. By this I mean that a graph with different edge colors is described by its vertices and its relations (which represent the edge colors); the relations, however, can be described as binary predicates. Regard the famous 'ancestor example' which describes the transitive hull of the 'son' ancestor(A,D) :- son(A,D). ancestor(A,D) :- ancestor(A,A1), son(A1,D). That means, that the ancestor-relation (ancestor-edges) can be defined in terms of the son-relation, respectively the ancestor-graph in terms of the son-graph. Now my question is: is there a classification of graphs that takes into account, which form of predicates 'generate which forms of Uwe Assmann GMD Forschungsstelle an der Universitaet Karlsruhe Vincenz-Priessnitz-Str. 1 7500 Karlsruhe GERMANY Email: assmann@karlsruhe.gmd.de Tel: 0721/662255 Fax: 0721/6622968 Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/93-04-020","timestamp":"2014-04-19T17:11:41Z","content_type":null,"content_length":"4747","record_id":"<urn:uuid:a0b3d5c8-7515-45e1-914f-c212c9ccb132>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Codable sets and orbits of computably enumerable sets - this Bulletin , 1996 "... . We announce and explain recent results on the computably enumerable (c.e.) sets, especially their definability properties (as sets in the spirit of Cantor), their automorphisms (in the spirit of Felix Klein's Erlanger Programm), their dynamic properties, expressed in terms of how quickly eleme ..." Cited by 8 (2 self) Add to MetaCart . We announce and explain recent results on the computably enumerable (c.e.) sets, especially their definability properties (as sets in the spirit of Cantor), their automorphisms (in the spirit of Felix Klein's Erlanger Programm), their dynamic properties, expressed in terms of how quickly elements enter them relative to elements entering other sets, and the Martin Invariance Conjecture on their Turing degrees, i.e., their information content with respect to relative computability (Turing reducibility). 1. Introduction. All functions are on the nonnegative integers, # = {0, 1, 2, . . . }, and all sets will be subsets of #. Turing and G odel informally called a function computable if it can be calculated by a mechanical procedure, and regarded this as being synonymous with being specified by an "algorithm" or a "finite combinatorial procedure." They each formalized it as follows. 1 A function is Turing computable if it is definable by a Turing machine, as defined by Turing - ANN. PURE APPL. LOGIC , 2000 "... ..." , 2007 "... The goal of this paper is to show there is a single orbit of the c.e. sets with inclusion, E, such that the question of membership in this orbit is Σ1 1-complete. This result and proof have a number of nice corollaries: the Scott rank of E is ωCK 1 + 1; not all orbits are elementarily definable; th ..." Cited by 3 (3 self) Add to MetaCart The goal of this paper is to show there is a single orbit of the c.e. sets with inclusion, E, such that the question of membership in this orbit is Σ1 1-complete. This result and proof have a number of nice corollaries: the Scott rank of E is ωCK 1 + 1; not all orbits are elementarily definable; there is no arithmetic description of all orbits of E; for all finite α ≥ 9, there is a properly ∆0 α orbit (from the proof). - BULLETIN OF SYMBOLIC LOGIC , 2008 "... The goal of this paper is to announce there is a single orbit of the c.e. sets with inclusion, E, such that the question of membership in this orbit is Σ1 1-complete. This result and proof have a number of nice corollaries: the Scott rank of E is ωCK 1 + 1; not all orbits are elementarily definable; ..." Cited by 2 (0 self) Add to MetaCart The goal of this paper is to announce there is a single orbit of the c.e. sets with inclusion, E, such that the question of membership in this orbit is Σ1 1-complete. This result and proof have a number of nice corollaries: the Scott rank of E is ωCK 1 + 1; not all orbits are elementarily definable; there is no arithmetic description of all orbits of E; for all finite α ≥ 9, there is a properly ∆0 α orbit (from the proof). "... The purpose of this article is to summarize some of the results on the algebraic structure of the computably enumerable (c.e.) sets since 1987 when the subject was covered in Soare 1987 , particularly Chapters X, XI, and XV. We study the c.e. sets as a partial ordering under inclusion, (E; `). We do ..." Cited by 2 (0 self) Add to MetaCart The purpose of this article is to summarize some of the results on the algebraic structure of the computably enumerable (c.e.) sets since 1987 when the subject was covered in Soare 1987 , particularly Chapters X, XI, and XV. We study the c.e. sets as a partial ordering under inclusion, (E; `). We do not study the partial ordering of the c.e. degrees under Turing reducibility, although a number of the results here relate the algebraic structure of a c.e. set A to its (Turing) degree in the sense of the information content of A. We consider here various properties of E: (1) deønable properties; (2) automorphisms; (3) invariant properties; (4) decidability and undecidability results; miscellaneous results. This is not intended to be a comprehensive survey of all results in the subject since 1987, but we give a number of references in the bibliography to other results. - In Computability, Enumerability, Unsolvability, volume 224 of London Math. Soc. Lecture Note Ser , 1995 "... A set A ` ! is computably enumerable (c.e.), also called recursively enumerable, (r.e.), or simply enumerable, if there is a computable algorithm to list its members. Let E denote the structure of the c.e. sets under inclusion. Starting with Post [1944] there has been much interest in relating t ..." Cited by 1 (0 self) Add to MetaCart A set A ` ! is computably enumerable (c.e.), also called recursively enumerable, (r.e.), or simply enumerable, if there is a computable algorithm to list its members. Let E denote the structure of the c.e. sets under inclusion. Starting with Post [1944] there has been much interest in relating the denable (especially E-denable) properties of a c.e. set A to its iinformation contentj, namely its Turing degree, deg(A), under T , the usual Turing reducibility. [Turing 1939]. Recently, Harrington and Soare answered a question arising from Post's program by constructing a nonemptly E-denable property Q(A) which guarantees that A is incomplete (A !T K). The property Q(A) is of the form (9C)[A ae m C & Q \Gamma (A; C)], where A ae m C abbreviates that iA is a major subset of Cj, and Q \ Gamma (A; C) contains the main ingredient for incompleteness. A dynamic property P (A), such as prompt simplicity, is one which is dened by considering how fast elements elements enter A relat... - Proceedings of the Oberwolfach Conference on Computability Theory , 1996 "... Post 1944 began studying properties of a computably enumerable (c.e.) set A such as simple, h-simple, and hh-simple, with the intent of finding a property guaranteeing incompleteness of A. From observations of Post 1943 and Myhill 1956, attention focused by the 1950's on properties definable in the ..." Add to MetaCart Post 1944 began studying properties of a computably enumerable (c.e.) set A such as simple, h-simple, and hh-simple, with the intent of finding a property guaranteeing incompleteness of A. From observations of Post 1943 and Myhill 1956, attention focused by the 1950's on properties definable in the inclusion ordering of c.e. subsets of!, namely E = (fWngn2! ; ae). In the 1950's and 1960's Tennenbaum, Martin, Yates, Sacks, Lachlan, Shoenfield and others produced a number of elegant results relating E-definable properties of A, like maximal, hh-simple, atomless, to the information content (usually the "... This paper contains some results and open questions for automorphisms and definable properties of computably enumerable (c.e.) sets. It has long been apparent in automorphisms of c.e. sets, and is now becoming apparent in applications to topology and dierential geometry, that it is important to ..." Add to MetaCart This paper contains some results and open questions for automorphisms and definable properties of computably enumerable (c.e.) sets. It has long been apparent in automorphisms of c.e. sets, and is now becoming apparent in applications to topology and dierential geometry, that it is important to know the dynamical properties of a c.e. set We , not merely whether an element x is enumerated in We but when, relative to its appearance in other c.e. sets. We present here
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2293914","timestamp":"2014-04-18T17:57:02Z","content_type":null,"content_length":"32046","record_id":"<urn:uuid:28769a04-96f3-4592-beee-e72f7a68985a>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
Did I find the diagonal of a rectangle correctly? General Question Did I find the diagonal of a rectangle correctly? The diagonal of a rectangle is 25 cm. a. Find the dimensions of the rectangle if it is 5 cm longer than it is wide. My answer… Question b. What are the dimensions if the rectangle is a square? s=length of each side That’s how far I got and I’m confused what the answer might be. I don’t know how to get to that final step. Please help. Observing members: 0 Composing members: 0 13 Answers Take the square root of both sides. I did do that which leaves me with s= sq rt 625/2 I don’t know the answer…it doesn’t ask to round or anything so I’m not sure. Square root of 625/2….am I supposed to reduce or something? Someone told me it was s= 25 square root 2 It’s s = 25/(sqrt2). The diagonal and two of the sides form a 45–45-90 triangle. In any such triangle, the diagonal is always equal to the side length times sqrt2. So if the diagonal is 25, then the side lengths are 25/ Your method works just as well. When you take the square root of 625/2, you take the square root of the numerator and denominator separately. The square root of 625 is 25, so the square root of 625/2 is 25/sqrt2, just like we found using the other method. @Ivan it.s not a square- the length is 5cm longer than the width Pythagoras says a^2+b^2=c^2 or 25^2 = w^2 + (w+5)^2 you can figure it out from there. the basic length of the hypotenuse of a square relating to the sides is a multiple of square root of 2. Think of a 1×1 square.so you’re on the right track with 2s^2=625. but then when you have 2^s2=625 you get s^2=625/2 then square root both sides then you get S= square root 625/2 and then I’m stuck. Do I just hit square root 625/2 on my calc and approximate the answer? do I try to simplify?? It looks to me like something isn’t quite right there… Correct me if I am wrong, but it seems to me that that first one should read: (w^2) + ((w+5)^2) = 25^2 Was that a typo? As for the second one, I would simplify it first; s^2 = 625/2—> s^2 = 312.5 Who cares what format you have the answer in. The point is that you know how to solve it, and you have the correct answer. I would leave it as simply 25/sqrt2, but that’s just me. The exact form of the answer depends on what form your teacher expects. Usually, 25/sqrt2 should be simplified. It is generally preferred not to have a square root in the denominator. To take care of that, multiply both the numerator and denominator by sqrt2 to get 25sqrt2/2. Most of my math and science teachers wanted the simplest answer possible, so the form does matter quite a bit. Hell, there have been many times where I got the correct answer and got only 1 point on a 5 point question since the method matters more than the final answer! That is why I went with sqrt(312.5), though most teachers I’ve had would want that evaluated/calculated to an actual number. In this case, I think rounding to 17.7 would be acceptable as it’s close to the correct answer (17.67766952966…) and, more importantly, it shows that you have the math skills to actually do it right. Answer this question This question is in the General Section. Responses must be helpful and on-topic.
{"url":"http://www.fluther.com/105140/did-i-find-the-diagonal-of-a-rectangle-correctly/","timestamp":"2014-04-20T08:19:09Z","content_type":null,"content_length":"50406","record_id":"<urn:uuid:330ddc0a-c648-4ea4-b73a-64ae3054b690>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
The polynomial hierarchy and intuitionistic bounded arithmetic, Structure in Complexity Theory Results 1 - 10 of 13 - Annals of Pure and Applied Logic , 1993 "... i ..." , 1992 "... The purpose of this thesis is to give a "foundational" characterization of some common complexity classes. Such a characterization is distinguished by the fact that no explicit resource bounds are used. For example, we characterize the polynomial time computable functions without making any direct r ..." Cited by 45 (3 self) Add to MetaCart The purpose of this thesis is to give a "foundational" characterization of some common complexity classes. Such a characterization is distinguished by the fact that no explicit resource bounds are used. For example, we characterize the polynomial time computable functions without making any direct reference to polynomials, time, or even computation. Complexity classes characterized in this way include polynomial time, the functional polytime hierarchy, the logspace decidable problems, and NC. After developing these "resource free" definitions, we apply them to redeveloping the feasible logical system of Cook and Urquhart, and show how this first-order system relates to the second-order system of Leivant. The connection is an interesting one since the systems were defined independently and have what appear to be very different rules for the principle of induction. Furthermore it is interesting to see, albeit in a very specific context, how to retract a second order statement, ("inducti... - Feasible Mathematics: A Mathematical Sciences Institute Workshop , 1990 "... Stephen A. Cook and Bruce M. Kapron Department of Computer Science University of Toronto Toronto, Canada M5S 1A4 1 Introduction Functionals are functions which take natural numbers and other functionals as arguments and return natural numbers as values. The class of "feasible" functionals of finit ..." Cited by 27 (6 self) Add to MetaCart Stephen A. Cook and Bruce M. Kapron Department of Computer Science University of Toronto Toronto, Canada M5S 1A4 1 Introduction Functionals are functions which take natural numbers and other functionals as arguments and return natural numbers as values. The class of "feasible" functionals of finite type was introduced in [6] via the typed lambda calculus, and used to interpret certain formal systems of arithmetic: systems capturing the notion of "feasibly constructive proof" (we equate feasibility with polynomial time computability) . Here we name the functionals of [6] the basic feasible functionals and justify the designation by presenting results which include two programming style characterizations of the class. We also give examples of both feasible and infeasible functionals, and argue that the notion plays a natural role in complexity theory. Type 2 functionals take numbers and ordinary numerical functions as arguments. When these argument functions are 0-1 valued (i.e. sets) ... - JOURNAL OF COMPUTER AND SYSTEM SCIENCE , 1997 "... This paper investigates analogs of the Kreisel-Lacombe-Shoenfield Theorem in the context of the type-2 basic feasible functionals. We develop a direct, polynomial-time analog of effective operation in which the time boundingon computations is modeled after Kapron and Cook's scheme for their basic po ..." Cited by 10 (0 self) Add to MetaCart This paper investigates analogs of the Kreisel-Lacombe-Shoenfield Theorem in the context of the type-2 basic feasible functionals. We develop a direct, polynomial-time analog of effective operation in which the time boundingon computations is modeled after Kapron and Cook's scheme for their basic polynomial-time functionals. We show that if P = NP, these polynomial-time effective operations are strictly more powerful on R (the class of recursive functions) than the basic feasible functions. We also consider a weaker notion of polynomial-time effective operation where the machines computing these functionals have access to the computations of their procedural parameter, but not to its program text. For this version of polynomial-time effective operations, the analog of the Kreisel-Lacombe-Shoenfield is shown to hold---their power matches that of the basic feasible functionals on R. , 1997 "... on the World Wide Web (\the Web") (www.cs.cornell.edu/Info/NuPrl/nuprl.html) ..." - Logic and Computational Complexity, LNCS Vol. 960 , 1995 "... . A formal approach to feasible numbers, as well as to middle and small numbers, is introduced, based on ideas of Parikh (1971) and improving his formalization. The "vague" set F of feasible numbers intuitively satisfies the axioms 0 2 F , F + 1 ` F and 2 1000 62 F , where the latter is stronger ..." Cited by 5 (1 self) Add to MetaCart . A formal approach to feasible numbers, as well as to middle and small numbers, is introduced, based on ideas of Parikh (1971) and improving his formalization. The "vague" set F of feasible numbers intuitively satisfies the axioms 0 2 F , F + 1 ` F and 2 1000 62 F , where the latter is stronger than a condition considered by Parikh, and seems to be treated rigorously here for the first time. Our technical considerations, though quite simple, have some unusual consequences. A discussion of methodological questions and of relevance to the foundations of mathematics and of computer science is an essential part of the paper. 1 Introduction How to formalize the intuitive notion of feasible numbers? To see what feasible numbers are, let us start by counting: 0,1,2,3, and so on. At this point, A.S. Yesenin-Volpin (in his "Analysis of potential feasibility", 1959) asks: "What does this `and so on' mean?" "Up to what extent `and so on'?" And he answers: "Up to exhaustion!" Note that by cos... - Proof and System-Reliability, Proceedings of International Summer School Marktoberdorf, July 24 to August 5, 2001, volume 62 of NATO Science Series III , 2002 "... The basic concepts of type theory are fundamental to computer science, logic and mathematics. Indeed, the language of type theory connects these regions of science. It plays a role in computing and information science akin to that of set theory in pure mathematics. There are many excellent accounts ..." Cited by 5 (1 self) Add to MetaCart The basic concepts of type theory are fundamental to computer science, logic and mathematics. Indeed, the language of type theory connects these regions of science. It plays a role in computing and information science akin to that of set theory in pure mathematics. There are many excellent accounts of the basic ideas of type theory, especially at the interface of computer science and logic — specifically, in the literature of programming languages, semantics, formal methods and automated reasoning. Most of these are very technical, dense with formulas, inference rules, and computation rules. Here we follow the example of the mathematician Paul Halmos, who in 1960 wrote a 104-page book called Naïve Set Theory intended to make the subject accessible to practicing mathematicians. His book served many generations well. This article follows the spirit of Halmos ’ book and introduces type theory without recourse to precise axioms and inference rules, and with a minimum of formalism. I start by paraphrasing the preface to Halmos ’ book. The sections of this article follow his chapters closely. Every computer scientist agrees that every computer scientist must know some type theory; the disagreement begins in trying to decide how much is some. This article contains my partial answer to that question. The purpose of the article is to tell the beginning student of advanced computer science the basic type theoretic facts of life, and to do so with a minimum of philosophical discourse and logical formalism. The point throughout is that of a prospective computer scientist eager to study programming languages, or database systems, or computational complexity theory, or distributed systems or information discovery. In type theory, “naïve ” and “formal ” are contrasting words. The present treatment might best be described as informal type theory from a naïve point of view. The concepts are very general and very abstract; therefore they may - In Proceedings of IEEE 34th Annual Symposium on Foundations of Computer Science, Nov 3--5 , 1993 "... ) Peter Clote A. Ignjatovic y B. Kapron z 1 Introduction to higher type functionals The primary aim of this paper is to introduce higher type analogues of some familiar parallel complexity classes, and to show that these higher type classes can be characterized in significantly different way ..." Cited by 4 (4 self) Add to MetaCart ) Peter Clote A. Ignjatovic y B. Kapron z 1 Introduction to higher type functionals The primary aim of this paper is to introduce higher type analogues of some familiar parallel complexity classes, and to show that these higher type classes can be characterized in significantly different ways. Recursion-theoretic, proof-theoretic and machine-theoretic characterizations are given for various classes, providing evidence of their naturalness. In this section, we motivate the approach of our work. In proof theory, primitive recursive functionals of higher type were introduced in Godel's Dialectica [13] paper, where they were used to "witness" the truth of arithmetic formulas. For instance, a function f witnesses the formula 8x9y\Phi(x; y), where \Phi is quantifier-free, provided that 8x\Phi(x; f(x)); while a type 2 functional F witnesses the formula 8x9y8u9v\Phi(x; y; u; v), provided that 8x8u\Phi(x; f(x); u; F (x; f(x); u)): Godel's formal system T is a variant of the
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=704697","timestamp":"2014-04-20T22:00:26Z","content_type":null,"content_length":"36492","record_id":"<urn:uuid:fe46cce1-5f0e-444a-8160-8e7c60d555d7>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Example 1: Low-Pass Filtering by FFT Convolution Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search In this example, we design and implement a length FIR lowpass filter having a cut-off frequency at Hz. The filter is tested on an input signal consisting of a sum of sinusoidal components at frequencies Hz. We'll filter a single input frame of length , which allows the FFT to be samples (no wasted zero-padding). % Signal parameters: f = [ 440 880 1000 2000 ]; % frequencies M = 256; % signal length Fs = 5000; % sampling rate % Generate a signal by adding up sinusoids: x = zeros(1,M); % pre-allocate 'accumulator' n = 0:(M-1); % discrete-time grid for fk = f; x = x + sin(2*pi*n*fk/Fs); Next we design the lowpass filter using the window method: % Filter parameters: L = 257; % filter length fc = 600; % cutoff frequency % Design the filter using the window method: hsupp = (-(L-1)/2:(L-1)/2); hideal = (2*fc/Fs)*sinc(2*fc*hsupp/Fs); h = hamming(L)' .* hideal; % h is our filter Figure 8.3 plots the impulse response and amplitude response of our FIR filter designed by the window method. Next, the signal frame and filter impulse response are zero-padded out to the FFT size and transformed: % Choose the next power of 2 greater than L+M-1 Nfft = 2^(ceil(log2(L+M-1))); % or 2^nextpow2(L+M-1) % Zero pad the signal and impulse response: xzp = [ x zeros(1,Nfft-M) ]; hzp = [ h zeros(1,Nfft-L) ]; X = fft(xzp); % signal H = fft(hzp); % filter Figure 8.4 shows the input signal spectrum and the filter amplitude response overlaid. We see that only one sinusoidal component falls within the pass-band. Figure 8.4: Overlay of input signal spectrum and desired lowpass filter pass-band. Now we perform cyclic convolution in the time domain using pointwise multiplication in the frequency domain: Y = X .* H; The modified spectrum is shown in Fig.8.5. The final acyclic convolution is the inverse transform of the pointwise product in the frequency domain. The imaginary part is not quite zero as it should be due to finite numerical precision: y = ifft(Y); relrmserr = norm(imag(y))/norm(y) % check... should be zero y = real(y); Figure 8.6: Filtered output signal, with close-up showing the filter start-up transient (``pre-ring''). Figure 8.6 shows the filter output signal in the time domain. As expected, it looks like a pure tone in steady state. Note the equal amounts of ``pre-ringing'' and ``post-ringing'' due to the use of a linear-phase FIR filter.^9.5 For an input signal approximately samples long, this example is 2-3 times faster than the conv function in Matlab (which is precompiled C code implementing time-domain convolution). Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search [How to cite this work] [Order a printed hardcopy] [Comment on this page via email] [Lecture Video] [Exercises] [Examination]
{"url":"https://ccrma.stanford.edu/~jos/sasp/Example_1_Low_Pass_Filtering.html","timestamp":"2014-04-17T04:52:17Z","content_type":null,"content_length":"21018","record_id":"<urn:uuid:c50f4062-a6b0-4791-af4e-b97474585915>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
Contemporary Abstract Algebra When reviewing a text, the question “How well will the students understand the subject just by reading this book?” always runs through the back of my mind. And according to this criterion, Contemporary Abstract Algebra by Joseph Gallian is a well-written book. After introducing a new concept or theorem, he always provides a plethora of examples. He explains, “The best way to grasp the meat of a theorem is to see what it says in specific cases.” Furthermore, Gallian keeps the big picture in mind. Rather than just proving theorem after theorem, he often steps back to explain why a theorem is important or how the theorem can be used, or even to explain what a theorem means in words, rather than just in symbols. Another plus are the numerous tables he includes, which sum up a lot of information in a concise way. For example, there is a summary of group examples and their properties. What makes Contemporary Abstract Algebra unique is Gallian´s focus on showing that abstract algebra is a contemporary subject. He incorporates examples of physics, cryptography, chemistry, and computer science into the text. For instance, there is a description of how your credit card number is encrypted when buying online from Amazon.com. In another section, he explains how molecules with chemical formulas of the form AB[4,] such as methane (CH[4]), have the same symmetry as the group A[4]. Gallian also shows that abstract algebra is an ever-expanding field of research by telling stories of how recent mathematicians pushed to solve certain problems. For example, he gives the history of “the enormous effort put forth by hundreds of mathematicians” since the 1960s “to discover and classify all finite simple groups.” Contemporary Abstract Algebra is appropriate for a first or second course in abstract algebra. The text does not spend much time on preliminary number theory topics, like the division algorithm or modular arithmetic, so the students need to have familiarity with these topics. The text does provide a solid introduction to the traditional topics of groups, rings, and fields, but there is depth in his coverage of these topics. He includes, for example, internal and external direct products and the Fundamental Theorem of Finite Abelian Groups, in his section on groups. The special topics cover a selection of interesting themes, such as Cayley digraphs, which “provide a method of visualizing groups” and cyclotomic extensions which tie together many themes explored in the text. The text also has some interesting extras. Gallian starts and ends each chapter with quotes from famous mathematicians, popular songs, and even few from Homer Simpson. Some of the quotes are simply amusing (“If you really want something in this life, you have to work for it — Now quiet, they’re about to announce the lottery numbers.” –Homer Simpson). Others offer sound mathematical advice (“‘For example’ is not proof.” –Jewish proverb). At the end of each chapter, he offers a list of suggested readings with summaries, as well as suggested websites, and films. The text also has a companion website which has true/false questions and software for computer exercises. In the preface, Gallian lays out his goals for the text. Briefly, they are to give the student “a solid introduction to the traditional topics,” to show readers that “abstract algebra is a contemporary subject,” to provide students with an enjoyable text, and finally to help students gain competency in doing computations and writing proofs. He definitely meets all of these goals, and as such, I certainly recommend this textbook. Kara Shane Colley studied physics at Dartmouth College and math education at Teachers College. She is currently taking a break from teaching math to volunteer at a meditation center in Mexico.
{"url":"http://www.maa.org/publications/maa-reviews/contemporary-abstract-algebra-1?device=mobile","timestamp":"2014-04-19T00:30:10Z","content_type":null,"content_length":"37376","record_id":"<urn:uuid:678aca5f-496f-42d6-9f84-dba4ca382653>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Cichon's diagram In set theory, Cichoń's diagram Cichon's diagram is a table of 10 infinite cardinal numbers related to the set theory of the reals displaying the provable relations between these cardinal invariant s. All these cardinals are greater or equal than , the smallest uncountable cardinal, and they are bounded above by , the cardinality of the continuum . Four cardinals describe properties of the of sets of measure zero , four more describe the corresponding properties of the ideal of meager sets (first category sets) Let I be an ideal of subsets of a fixed infinite set X, containing all finite subsets of X. We define the following "cardinal coefficients" of I:
{"url":"http://www.reference.com/browse/Cichon's+diagram","timestamp":"2014-04-17T16:40:19Z","content_type":null,"content_length":"80547","record_id":"<urn:uuid:e53815ea-e1bb-4ef7-8abb-31c5816c2650>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: April 2006 [00399] [Date Index] [Thread Index] [Author Index] Re: Re: Re: Re: Mathematica and Education • To: mathgroup at smc.vnet.net • Subject: [mg65822] Re: [mg65721] Re: [mg65075] Re: [mg64957] Re: [mg64934] Mathematica and Education • From: bsyehuda at gmail.com • Date: Mon, 17 Apr 2006 02:29:19 -0400 (EDT) • References: <200603141059.FAA24082@smc.vnet.net> <200604160748.DAA11061@smc.vnet.net> • Sender: owner-wri-mathgroup at wolfram.com I do not believe that there is a single way of understanding mathematical/physical/... ideas. After all, if there was a single way, we wouldn't see any in scientific textbooks, isn't it? The required set of theorems and their proofs would be sufficient in this case. I fully support Andrej's comments, but I have the feeling that an important issue is missing in this discussion. Mathematica can be a valuable tool in this context due to the option to combine text (typesetting), graphics, and programs (symbolic, algorithmic, numerics, etc.). This makes Mathematica a wonderful tool for EXPRESSING ideas. I strongly emphasize this property to our students and encourage them to be fluent with Mathematica. I still let them be aware of not using it as a "black box" by giving them problems that require innovative and critical Also there are enough ways to a professor to check the theoretical level and skills of the students and prevent them from making Mathematica a "black box" or a "magic calculator". This might require a little more dedication but it benefit with better students. One may find solutions to theoretical problems in various theoretical subjects available today on the Internet. Using this "blindly" is similar to using Mathematica as a "black box". Although we have no control of using such "resources" we can control the final level that student need to have in order to pass the final exams which should have a substantial theoretical As was described earlier in one of the posts, Mathematica was used as the "machinery" to study a course in electromagnetic fields. Although I find it a little bit extreme in this specific case, I find no harm if this is used to enhance understanding of the theory and basics of the subject. I remember well while being an undergraduate (many years ago) most of the students had difficulties in ectromagnetic fields and electromagnetic waves. The text books at the time were lacking enough enlightening examples (especially in EM waves). Mathematica would be great for that matter (but did not exist at the time). I just wonder how Mathematica was allowed to be used in the final exam. This (at least for me) seems a "line crossing" of the academic institute.. As for Andrej's remark of better programming skills and less theoretical skills of current day students, I find it (painfully ) true. But, still, I will not bet on such students to be leading something in the future of their professional life. Such students will always be on the (lower) technical side and the better ones with the better theoretical capabilities and understanding will determine the path. Being in high-tech employee (for example) does not guarantee of having strong "theoretical skills". Summarizing, one need to define what are the building blocks of knowledge that are needed to be taught to the students and what is the best way of teaching that. Also, one need to define the "studying habits" of the students. Mathematica certainly can be a helpful tool for both with a careful use and control from the teaching staff. with best regards On 4/16/06, Andrzej Kozlowski <akoz at mimuw.edu.pl> wrote: > [This post has been delayed due to email problems - moderator] > I have been using Mathematica as a basic teaching aid for over ten > years. In fact I nowadays use in on all the undergraduate courses I > teach in very different environments (on the one hand I use it for > teaching "Mathematics for Physics" at Tokyo Denki University in > Japan, and on the other hand "(Financial) Derivative Pricing with > Mathematica@ at Warsaw University in Poland.) So clearly I agree with > most things that have been said in favour of using Mathematica (and > also other CAS - just in case RJF is reading this ;-)). I also agree > with all the comments below. However, there is just one note of > caution I would like to add that seems to have been omitted. I think > it is a big mistake to identify all mathematics with what should be > called "computational" or "algorithmic" mathematics. Many people have > written about the relations and the differences between the two. > Particularly interesting are are various essays by Donald Knuth (see > his "Selected Papers on Computer Science") as well as various > writings of Roger Penrose particularly "The Emperor's New Mind" where > he actually tries to describe the difference between algorithmic and > non-algorithmic thinking. In the early days when computer science > was very new many mathematicians disdained this upstart subject, > which they considered as essentially trivial. One does not often meet > such attitudes today. However, there is now the danger than many > institutions are falling into the opposite extreme and reduce all > mathematics basically to the algorithmic approach (the bad ones do > not even do that, instead they reduce learning mathematics to > memorising algorithms and the worst ones simply to learning how to > push buttons or write commands in a CAS). The result is that fewer > people learn to think geometrically although it was actually > geometric thinking and not computation that was behind most of the > great discoveries both in mathematics and mathematical physics . Non > algorithmic mathematics (particularly various kinds of geometry) is > usually much harder to teach than computational one because to an > even larger extent it depends on inspiration and talent. Hence there > is a strong temptation to eliminate such courses from the syllabus > thus causing irreparable damage to the quality of understanding of > mathematics. I talk to people who teach mathematics and universities > in many different countries and everywhere I hear the same story: > while student's expertise in various aspects of computing has been > increasing by leaps and bounds the quality of their mathematical > understanding has been correspondingly declining. So the point I want > to make is basically this: by all means teach students to use and > understand Mathematica for all kinds of tasks in computational and > algorithmic mathematics but do not give them the impression that the > kind of things you can do with Mathematica is all there is to > mathematics. The remark I first heard about 20 years ago: "Oh, you > are a mathematician, I thought that is all done by computers > nowadays" can be heard even from scientifically educated people, and > I would hate to think that Mathematica is making a contribution to > spreading this totally wrong and harmful idea. > Andrzej Kozlowski > On 14 Mar 2006, at 11:59, King, Peter R wrote: > > David, (and all the othes who responded), > > > > I have now had the time to read all the responses to my initial > > response > > and I can't really argue with the main points, in fact I don't think I > > ever stated that Mathematica should not be used in the teaching of > > mathematics (with the caveat below). Yes it does enable you to do all > > the things that you and others have stated and can enormously > > increase a > > student's abilities to do things. This wasn't the thrust. My > > concern was > > about students who claimed never to have used pen and paper and > > only to > > have used Mathematica. I think this is dangerous. Why? > > > > 1) suppose there is a bug (shock horror they do exist) or the student > > has mistyped things, how do they check the results if they can't do > > some > > kind of manual check themselves? Can the student do a rough > > estimate of > > what they expect the answer to look like? Do they understand the > > answer > > and what it means? Sure they could plot it out (but then why not just > > write a program to solve the problem numerically in the first place). > > This doesn't mean that students shouldn't use MAthematica but it does > > mean they should also be able to do calculations by pen and paper when > > they are comfortable with that they can move on and use the tools that > > enable them to do "more advanced" and "more interesting" things. > > > > 2) Related to this, actually I am very concerned about the current > > generation that has been brought up on calculators. it HAS generated > > people who cannot do simple calculations without one. When a student > > asks me how to divide 1 by 2/3 because he hasn't got a calculator I > > get > > worried. When I see exam scripts where people give the answer E (for > > error) when they take the square root of a negative number I get > > worried. More importantly students (not all but a significant > > minority) > > don't actually understand what numbers mean. I see lengths quoted > > to 10 > > significant figures (implying a measurement accuracy on the sub atomic > > scale). This has happened over a period of probably 20 years and > > reflects poor education policies towards mathematics and is probably > > beyond the scope of this thread (or indeed this list) but it has > > happened because people have taken the attitude why bother to learn to > > do multiplication when a calculator can do it quicker and more > > accurately than you can. I would be worried to go down the same track > > with more advanced mathematics. I strongly believe that the basic > > skills > > should be learnt first on pen and paper and then reinforced using > > tools > > like mathematica. I do also believe that Mathematica can be used as > > part > > of the learning and reinforcing of the basic skills - just not as a > > sole > > substitute. This isn't just an issue of preserving old skills. > > After all > > we bother to teach people to read. Why? technology can give us spoken > > text. I think there are some skills (and this includes mathematics) > > that > > are so basic that if we cannot perform them we are missing something. > > Also often we are forced to operate without the use of these tools. > > Such > > as in the field, in meetings without access to computers, in companies > > that can't afford or don't want to pay for software licenses (I spent > > many years working for a large multinational that I had to convince > > very > > hard to buy a single licence for MAthematica because they couldn't see > > how it would affect their business performance - this is not > > uncommon). > > > > 3) Why Mathematica (this is the caveat I referred to above). Now > > this is > > probably heresy or blasphemy to this list but there are other computer > > tools for doing mathematics. All these tools have there pros and cons. > > They all have their quirks some of which distract from the underlying > > mathematics (some of which may enhance). There is a danger that > > students > > get caught up with the intricacies of how to do a particular operation > > in that particular package rather than the underlying mathematics. You > > could argue that the mathematics is the basic "truth" and the > > implementation package is something different (a bit like Plato's > > shadow > > worlds). However, this is an interesting philosophical question that I > > don't really want to go into here (pen and paper, is if you like, > > another package and how much is mathematics limited by our ability to > > write things down and solve analytically by hand and how much is it > > enhanced by using the power of computers, expecially for visualising > > complex data or phenomena). I haven't seen this with mathematical > > packages but for other commercial software I have seen students held > > back by learning the idosincracies of packages and claiming something > > can't be done simply because the software can't do it. In other > > owrds it > > can limit the student's abilities to do things because of the > > limitations of the package. Again this is not a reason for not using > > Mathematica in education but it is a reason not to rely on it > > solely and > > to teach students there are other ways of doing things (including by > > hand or with other packages). > > > > Finally I would like to say that the response on this list has been > > almost overwhelmingly in favour of using Mathematica in education > > and I > > would support that wholeheartedly. But that support is tempered by the > > requirement that the students are actually learning how to do the > > mathematics properly, when required they can think on their own > > feet and > > not rely any particular package and that they are learning not just > > how > > to use a tool but how to use the underlying subject. > > > > I would also point out that that the support for Mathematica on this > > list is not entirely unbiased (it is after all made up of people > > who are > > Mathematica users and experts). If I went to the other packages forums > > (which must exist, I have never checked) I expect i would see them > > strongly advocate the use of their own particular package and if I > > were > > to go to the general group of educators I expect i would see a very > > different response. It is easy to dismiss them as being behind the > > times > > or out of touch, but they do represent a very big experience bank. > > > > Peter King > > > >> -----Original Message----- > >> From: David Park [mailto:djmp at earthlink.net] To: mathgroup at smc.vnet.net > >> Subject: [mg65822] [mg65721] [mg65075] RE: [mg64957] Re: [mg64934] Mathematica > and > >> Education > >> > >> > >> Peter, > >> > >> I find your remarks very interesting and I think you state > >> the principal > >> reasons for NOT making the maximum use of Mathematica in > >> education. It > >> certainly helps to get the objections and perceived limitations on > >> the > >> table. However, I would like to try, to the best of my > >> ability, to make the > >> counter arguments. > >> > >> If I may summarize the reasons you, and others, have put forward. > >> > >> 1) Mathematica allows a student to get an answer without > >> truly understanding > >> the underlying theory and reasons. Pencil and paper forces > >> the student to > >> understand things more deeply and provides additional experience. > >> > >> 2) We have to preserve the old skills. In emergencies we may > >> be forced to > >> fall back on them, such as in the field, in exams without > >> computers and > >> after the next nuclear war. Good penmanship and mental > >> arithmetic will save > >> us. > >> > >> 3) Mathematica will automatically make choices for us that we do not > >> understand. I would like to state this in a more general > >> sense. Students > >> haven't mastered Mathematica well enough to use it as a reliable > >> tool. > >> > >> I have often argued here that students should be taught to think of > >> Mathematica as 'pencil and paper'. They should use it just > >> like they would > >> use pencil and paper. Theodore Gray has provided us with the > >> wonderful > >> notebook interface. You can have titles, sections, text > >> cells, equations and > >> diagrams. It's the style of textbooks, reports and research > >> papers. It goes > >> back at least to Euclid. So, I don't understand specifically > >> what advantage > >> real pencil and paper have over a Mathematica notebook, > >> except perhaps that > >> it is far easier to get away with writing nonsense. > >> > >> In fact, let's look at the advantages that a Mathematica > >> notebook has over > >> real pencil and paper. > >> > >> 1) Neatness. And a student can correct and rewrite more easily. > >> 2) An active document. The definitions students write can > >> actively be used > >> in further derivations. In fact, the student is forced to make these > >> definitions and assumptions explicit. > >> 3) Permanent record. Not only a permanent record but also a > >> repository of > >> resources that the student may have developed. > >> 4) Proofing. With a Mathematica notebook you can actually > >> evaluate things > >> and verify that they work. One can't get away with sloppiness. > >> 5) MORE and DEEPER experience. With a Mathematica notebook a > >> student can > >> actually do many more, and more difficult, exercises and > >> examples. Many > >> times, while working through textbooks, I have seen cases > >> where the author > >> either skipped the demonstration or simplified the case for > >> no other reason > >> than the difficulty of hand calculations. > >> 6) A literate style. Conventional exercises and tests are > >> usually skimpy > >> throw away documents. Mathematica notebooks provide a perfect > >> opportunity > >> for 'essay' style work and develop the skills for technical > >> communication. > >> > >> Of course, we have to have teachers and students who know how to take > >> advantage of these features. > >> > >> As for preserving old skills, I'm not too sympathetic. Are > >> students to be > >> taught how to sharpen spears (no advanced bow and arrow > >> technology allowed!) > >> track animals and identify eatable grubs and berries, just in > >> case we get > >> thrown back into a hunter-gatherer society? It wasn't that > >> many generations > >> ago when almost all women knew how to weave or operate a > >> spinning wheel. > >> Should these skills be preserved? Like it or not, we are dependent on > >> civilization and modern technology. Rather than teaching > >> 'survival skills' > >> we should make sure that civilization is preserved and > >> advanced. That's the > >> best chance. If worse comes to worst, some people will learn the > >> multiplication tables fast enough (and also how to sharpen spears). > >> > >> The problem of using Mathematica intelligently, and not > >> blindly, is serious. > >> Most students are not well enough prepared with Mathematica > >> to use it to > >> anywhere near its capability. Mathematica is not wide spread > >> enough and > >> students do not learn it early enough. Any student interested > >> in a technical > >> career could do nothing better than start learning it in high school. > >> Furthermore, Mathematica is not optimized for students and > >> researchers. When > >> it comes to ease of use there are many gaps. I believe that > >> Mathematica can > >> truly effect a revolution in technical education. But it is > >> not as simple as > >> just installing it on a departmental server. A lot of > >> preparation is needed. > >> Additional packages geared to student use are needed. > >> Educators have to > >> learn how to take advantage of the resource. (For example how > >> to shift from > >> quick calculations to essay type questions.) > >> > >> David Park > >> djmp at earthlink.net > >> http://home.earthlink.net/~djmp/ > >> > >> > >> From: King, Peter R [mailto:peter.king at imperial.ac.uk] To: mathgroup at smc.vnet.net > >> > >> I should like to say that as an educator of science students in a > >> (predominantly) non-mathematical branch of science (earth > >> sciences) I am > >> very concerned about this approach. Sure Mathematica is a wonderful > >> tool. As a professional researcher I use it all the time for doing > >> tedious calculations to save time, or to check claculations > >> where I may > >> have got things wrong and so on and so on. If I didn't think > >> Mathematica > >> was useful I wouldn't have it and wouldn't subscribe to this list. > >> > >> But it is still a tool. IT can't know what calculations to do, what > >> approximations to make and sometimes when there are > >> mathematical choices > >> to be made. For example there are times when Mathematica's choice of > >> branch cut doesn't correspond to the one I want to make. Not > >> a problem I > >> can tell it what I really want. There are times when its choice of > >> simplification doesn't suite my purpose. Again not a problem > >> I can tell > >> it what to do or simply carry on by hand if that's easier. > >> But how do I > >> know when the defaults don't suite my purpose, because I have > >> spent many > >> years doing things by hand and gaining that experience to know what I > >> want. I am not convinced that if I had done all my mathematics within > >> Mathematica I would have gained the same experience. But I am open to > >> discussion on this if anyone wants to put the counter case. > >> However, I > >> would need very strong convincing that it is good for > >> students never to > >> have to do old fashioned calculations on paper. In the same > >> way I think > >> it is important for children to learn multiplication rather > >> than rely on > >> a calculator or to learn to write rather than use a word processor. > >> > >> In particular for practicing engineers they may be out in the field, > >> away from a computer and be required to do a back of the envelope > >> calculation by hand. If you have never done it before you > >> will be stuck > >> and I don't think you could consider yourself a "real" engineer. > >> > >> So yes Mathematica is great. Yes students should be taught to > >> use it and > >> use it properly. But please make sure you could have done > >> your homework > >> by hand (it is often not as bad as you might think!). Perhaps I am a > >> dinosaur but I have been in meetings which required > >> moderately difficult > >> numerical calculations which I could do by hand whereas other > >> (younger) > >> people present were stuck without calculators. > >> > >> I was once told a quote and I can't remember who it was from "A fool > >> with a tool is still a fool" > >> > >> (Incidentally please don't take this personally. I don't know > >> you and so > >> I have no reason to doubt that you are a perfectly good scientist > >> I am > >> simply commenting on a current trend for people to run to software > >> rather than doing it by hand - which in some cases is > >> actually easier). > >> > >> Peter King > >> > >> > >> > > • References:
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Apr/msg00399.html","timestamp":"2014-04-18T23:48:39Z","content_type":null,"content_length":"59330","record_id":"<urn:uuid:a3641640-be1c-4f06-96d3-5983dc551b24>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
FRAME 3-10. Solution to If you have a mixed number and wish to convert the fraction to a decimal, Frame 3-9. then the whole number goes to the left of the decimal and the fraction goes to the right of the decimal. For example "five and one-half" (or "five a. 0.0833 and five-tenths) is written as "5.5." b. 0.0088 a. Write 3 1/12 as a decimal (carry out to the fourth decimal place). b. Write 300 8/900 as a decimal (carry out to the fourth decimal place). FRAME 3-11. Solution to To change an improper fraction to a decimal, divide the numerator by the Frame 3-10. denominator. Remember to keep the decimal in the quotient above the decimal point in the dividend. The improper fraction 3/2 is shown below a. 3.0833 being changed to its decimal form. b. 300.0088 2 / 3.0 Change 19/8 to a decimal. FRAME 3-12. Solution to CHANGING DECIMALS TO FRACTIONS. Frames 3-1 through 3-7 have Frame 3-11. given you the basic information you need to change a decimal to a fraction. Just put the numerator over the appropriate denominator (a 19/8 = 2.375 power of 10). For example: 0.045 = 45 thousandths = 45/1000 If you want the fraction reduced, divide the numerator and denominator by their common factors (whole numbers which divide into both the numerator and denominator without leaving a remainder). For example a. Change 0.004 to a fraction and reduce. b. Change 0.0031 to a fraction and reduce.
{"url":"http://armymedical.tpub.com/md0900/md09000049.htm","timestamp":"2014-04-20T15:50:41Z","content_type":null,"content_length":"40478","record_id":"<urn:uuid:9842c12a-b6c4-4c40-a2a7-3213dbef786a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Problem with transformations Date: Nov 16, 2012 7:38 PM Author: Peter Duveen Subject: Problem with transformations The text (Precalculus with limits: a graphing approach Larson, etc.) tells us as follows (p43): "...you can obtain the graph of g(x) = (x - 2)^2 by shifting the graph of f(x) = x^2 two units to the right, as shown in Figure 1.42 [AN ASSERTION]. In this case, the functions g and f have the following relationship. g(x) = (x - 2)^2 = f(x - 2) (right shift of two units)[AN ASSERTION] The following list summarizes vertical and horizontal shifts:" etc. etc. I feel the assertions are not self-evident, and the treatment is generally confusing. I would have treated this differently. I would have first attempted to establish a relationship between a function and another function which is the translation of the first so many spaces horizontally. The relationship is f(x) = g (x + c). That is, the two functions have the same value when the arguments of f and g differ by a particular constant. Assuming we know the form of f(x), what is the form of g(x)? We introduce the argument f(x - c), and want to see what happens to g, namely, f(x - c) = g[(x - c) + c] We thus arrive at the expression f(x - c) = g(x). We have now established the form of g(x) in terms of f(x), which we know. It is simply f(x - c), which is not the same as f(x). In other words, we have derived and demonstrated what the textbook merely asserts.
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7924411","timestamp":"2014-04-20T10:54:13Z","content_type":null,"content_length":"2336","record_id":"<urn:uuid:2dc623ce-b9ec-4f7e-a7fb-e20c4791ab4b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
American Mathematical Society Bulletin Notices AMS Sectional Meeting Program by Day Current as of Tuesday, April 12, 2005 15:10:41 Program | Deadlines | Registration/Housing/Etc. | Inquiries: meet@ams.org 2002 Central Section Meeting Ann Arbor, MI, March 1-3, 2002 Meeting #974 Associate secretaries: Susan J Friedlander , AMS Saturday March 2, 2002 • Saturday March 2, 2002, 8:00 a.m.-4:30 p.m. Meeting Registration Atrium, East Hall • Saturday March 2, 2002, 8:00 a.m.-4:30 p.m. Exhibit and Book Sale Room 1372, East Hall • Saturday March 2, 2002, 8:30 a.m.-11:20 a.m. Special Session on Quantum Topology in Dimension Three, II Room 224, Dennison Building Charles Frohman, University of Iowa frohman@math.uiowa.edu Joanna Kania-Bartoszynska, Boise State University kania@math.boisestate.edu • Saturday March 2, 2002, 8:30 a.m.-11:20 a.m. Special Session on Topics in Geometric Function Theory, II Room 229, Dennison Building David A. Herron, University of Cincinnati david.herron@math.uc.edu Nageswari Shanmugalingam, University of Texas nageswari@math.utexas.edu Jeremy T. Tyson, SUNY at Stony Brook tyson@math.sunysb.edu • Saturday March 2, 2002, 8:30 a.m.-11:20 a.m. Special Session on Commutative Algebra, II Room 205, Dennison Building Florian Enescu, University of Utah enescu@math.utah.edu. Anurag K. Singh, University of Utah singh@math.utah.edu Karen E. Smith, University of Michigan, Ann Arbor kesmith@math.lsa.umich.edu • Saturday March 2, 2002, 8:30 a.m.-11:20 a.m. Special Session on Hyperbolic Manifolds and Discrete Groups, II Room 245, Dennison Building Richard D. Canary, University of Michigan, Ann Arbor canary@math.lsa.umich.edu Alan W. Reid, University of Texas, Austin areid@math.utexas.edu • Saturday March 2, 2002, 8:30 a.m.-11:20 a.m. Special Session on Algebraic Combinatorics, II Room 213, Dennison Building Patricia Hersh, University of Michigan, Ann Arbor plhersh@math.lsa.umich.edu Brian D. Taylor, Wayne State University bdt@math.wayne.edu • Saturday March 2, 2002, 8:30 a.m.-11:20 a.m. Special Session on Biological Applications of Dynamical Systems, II Room 1060, East Hall J. M. Cushing, University of Arizona cushing@math.arizona.edu Shandelle M. Henson, Andrews University henson@andrews.edu Anna M. Spagnuolo, Oakland University spagnuol@oakland.ed • Saturday March 2, 2002, 8:30 a.m.-11:25 a.m. Special Session on Differential Geometry, I Room 232, Dennison Building Lizhen Ji, University of Michigan, Ann Arbor lji@math.lsa.umich.edu Krishnan Shankar, University of Michigan, Ann Arbor shankar@math.lsa.umich.edu Ralf Spatzier, University of Michigan, Ann Arbor spatzier@math.lsa.umich.edu • Saturday March 2, 2002, 8:30 a.m.-11:20 a.m. Special Session on Partial Differential Equations, II Room 1360, East Hall Qing Han, University of Notre Dame qhan@nd.edu Lihe Wang, University of Iowa lwang@math.uiowa.edu • Saturday March 2, 2002, 8:30 a.m.-11:20 a.m. Special Session on Mapping Class Groups and Geometric Theory of Teichmuller Spaces, II Room 237, Dennison Building Benson Farb, University of Chicago farb@math.uchicago.edu Nikolai Ivanov, Michigan State University ivanov@math.msu.edu Howard Masur, University of Illinois, Chicago masur@math.uic.edu □ 8:30 a.m. On the linearity problem for mapping class groups. Hessam Hamidi-Tehrani*, B.C.C. of the City University of New York Tara E. Brendle, Columbia University □ 9:00 a.m. Relations in the Torelli Group. Tara E. Brendle*, Columbia University □ 9:30 a.m. The second homology groups of mapping class groups of orientable surfaces. Mustafa Korkmaz*, Middle East Technical University, Ankara, Turkey Andras I Stipsicz, Princeton Univ. - ELTE TTK, Budapest, Hungary □ 10:00 a.m. Bounded cohomology of subgroups of mapping class groups. Mladen Bestvina, University of Utah Koji Fujiwara*, Tohoku University □ 10:30 a.m. Presentations for the mapping class groups of surfaces and handlebodies. Susumu Hirose*, Michigan State University / Saga University □ 11:00 a.m. • Saturday March 2, 2002, 8:30 a.m.-11:25 a.m. Session for Contributed Papers Room 257, Dennison Building • Saturday March 2, 2002, 9:00 a.m.-11:20 a.m. Special Session on Algebraic Topology, II Room 221, Dennison Building Robert Bruner, Wayne State University rrb@math.wayne.edu Igor Kriz, University of Michigan, Ann Arbor ikriz@math.lsa.umich.edu □ 9:00 a.m. A chain model for the framed little 2-disks operad. James E. McClure*, Purdue University Jeffrey H. Smith, Purdue University □ 9:30 a.m. Derived Koszul duality for $C_k$-algebras. Po Hu*, University of Chicago □ 10:00 a.m. Cochains and Homotopy Type. Michael A. Mandell*, University of Chicago □ 10:30 a.m. The analytic circle-equivariant sigma orientation. Matthew Ando*, Univ. of Illinois--Urbana □ 11:00 a.m. • Saturday March 2, 2002, 9:30 a.m.-11:20 a.m. Special Session on Stochastic Modeling in Financial Mathematics, I Room 1068, East Hall Ronnie Sircar, Princeton University sircar@princeton.edu • Saturday March 2, 2002, 10:00 a.m.-11:20 a.m. Special Session on Integrable Systems and Poisson Geometry, II Room 1084, East Hall Anthony Bloch, University of Michigan abloch@math.lsa.umich.edu Philip Foth, University of Arizona foth@math.arizona.edu Michael Gekhtman, University of Notre Dame Gekhtman.1@nd.edu □ 9:30 a.m. □ 10:00 a.m. Biorthogonal Polynomials, Random Multimatrix Models and the Full Kostant-Toda Lattice. Nicholas M Ercolani*, University of Arizona Kenneth T.- R. McLaughlin, University of North Carolina at Chapel Hill & University of Arizona □ 10:30 a.m. Qualitative Behavior of some non-Abelian Toda Equations. Melinda Koelling*, Rennselaer Polytechnic Institute Anthony Bloch, University of Michigan Michael Gekhtman, University of Notre Dame □ 11:00 a.m. Moment maps and reductive symmetric pairs. Reyer Sjamaar*, Cornell University Yi Lin, Cornell University • Saturday March 2, 2002, 10:00 a.m.-10:45 a.m. Special Session on Moduli Spaces, II Room 216, Dennison Building Angela Gibney, University of Michigan, Ann Arbor agibney@math.lsa.umich.edu Gavril Farkas, University of Michigan, Ann Arbor gfarkas@math.lsa.umich.edu Thomas Nevins, University of Michigan, Ann Arbor nevins@math.lsa.umich.edu Gilberto Bini, University of Michigan, Ann Arbor gbini@math.lsa.umich.edu □ 10:00 a.m. Vanishing Theorems for the Moduli Space of Curves. Ravi Vakil*, Stanford Tom Graber, Harvard • Saturday March 2, 2002, 11:40 a.m.-12:30 p.m. Invited Address Hyperbolic manifolds, discrete groups and quadratic forms. Room 1324, East Hall Alan W Reid*, University of Texas at Austin • Saturday March 2, 2002, 2:00 p.m.-2:50 p.m. Invited Address Pricing and risk management in incomplete markets. Room 1324, East Hall Thaleia Zariphopoulou*, University of Texas, Austin • Saturday March 2, 2002, 2:00 p.m.-5:50 p.m. Special Session on Hyperbolic Manifolds and Discrete Groups, III Room 245, Dennison Building Richard D. Canary, University of Michigan, Ann Arbor canary@math.lsa.umich.edu Alan W. Reid, University of Texas, Austin areid@math.utexas.edu • Saturday March 2, 2002, 3:00 p.m.-4:50 p.m. Special Session on Quantum Topology in Dimension Three, III Room 224, Dennison Building Charles Frohman, University of Iowa frohman@math.uiowa.edu Joanna Kania-Bartoszynska, Boise State University kania@math.boisestate.edu • Saturday March 2, 2002, 3:00 p.m.-4:50 p.m. Special Session on Topics in Geometric Function Theory, III Room 229, Dennison Building David A. Herron, University of Cincinnati david.herron@math.uc.edu Nageswari Shanmugalingam, University of Texas nageswari@math.utexas.edu Jeremy T. Tyson, SUNY at Stony Brook tyson@math.sunysb.edu • Saturday March 2, 2002, 3:00 p.m.-4:50 p.m. Special Session on Integrable Systems and Poisson Geometry, III Room 1084, East Hall Anthony Bloch, University of Michigan abloch@math.lsa.umich.edu Philip Foth, University of Arizona foth@math.arizona.edu Michael Gekhtman, University of Notre Dame Gekhtman.1@nd.edu • Saturday March 2, 2002, 3:00 p.m.-4:50 p.m. Special Session on Commutative Algebra, III Room 205, Dennison Building Florian Enescu, University of Utah enescu@math.utah.edu. Anurag K. Singh, University of Utah singh@math.utah.edu Karen E. Smith, University of Michigan, Ann Arbor kesmith@math.lsa.umich.edu • Saturday March 2, 2002, 3:00 p.m.-4:50 p.m. Special Session on Algebraic Topology, III Room 221, Dennison Building Robert Bruner, Wayne State University rrb@math.wayne.edu Igor Kriz, University of Michigan, Ann Arbor ikriz@math.lsa.umich.edu • Saturday March 2, 2002, 3:00 p.m.-4:45 p.m. Special Session on Moduli Spaces, III Room 216, Dennison Building Angela Gibney, University of Michigan, Ann Arbor agibney@math.lsa.umich.edu Gavril Farkas, University of Michigan, Ann Arbor gfarkas@math.lsa.umich.edu Thomas Nevins, University of Michigan, Ann Arbor nevins@math.lsa.umich.edu Gilberto Bini, University of Michigan, Ann Arbor gbini@math.lsa.umich.edu □ 3:00 p.m. The rational cohomology of the moduli space of principally polarized abelian 3-folds. Richard M Hain*, Duke University □ 4:00 p.m. • Saturday March 2, 2002, 3:00 p.m.-4:50 p.m. Special Session on Algebraic Combinatorics, III Room 213, Dennison Building Patricia Hersh, University of Michigan, Ann Arbor plhersh@math.lsa.umich.edu Brian D. Taylor, Wayne State University bdt@math.wayne.edu • Saturday March 2, 2002, 3:00 p.m.-5:00 p.m. Special Session on Differential Geometry, II Room 232, Dennison Building Lizhen Ji, University of Michigan, Ann Arbor lji@math.lsa.umich.edu Krishnan Shankar, University of Michigan, Ann Arbor shankar@math.lsa.umich.edu Ralf Spatzier, University of Michigan, Ann Arbor spatzier@math.lsa.umich.edu □ 3:00 p.m. Rigidity of quasiconformal structures on the boundary of negatively curved manifolds. Chris Connell*, University of Chicago □ 3:45 p.m. On Ricci Rank for Nonpositively Curved Manifolds. Fangyang Zheng*, Ohio State University □ 4:30 p.m. • Saturday March 2, 2002, 3:00 p.m.-4:50 p.m. Special Session on Partial Differential Equations, III Room 1360, East Hall Qing Han, University of Notre Dame qhan@nd.edu Lihe Wang, University of Iowa lwang@math.uiowa.edu • Saturday March 2, 2002, 3:00 p.m.-4:50 p.m. Special Session on Mapping Class Groups and Geometric Theory of Teichmuller Spaces, III Room 237, Dennison Building Benson Farb, University of Chicago farb@math.uchicago.edu Nikolai Ivanov, Michigan State University ivanov@math.msu.edu Howard Masur, University of Illinois, Chicago masur@math.uic.edu • Saturday March 2, 2002, 3:00 p.m.-4:50 p.m. Special Session on Stochastic Modeling in Financial Mathematics, II Room 1068, East Hall Ronnie Sircar, Princeton University sircar@princeton.edu • Saturday March 2, 2002, 5:10 p.m.-6:00 p.m. Invited Address Combinatorial models and algebraic questions in the theory of computing. Room 1324, East Hall Laszlo Babai*, University of Chicago • Saturday March 2, 2002, 6:00 p.m.-7:00 p.m. Reception (Sponsored by the Department of Mathematics) Atrium, East Hall Inquiries: meet@ams.org
{"url":"http://ams.org/meetings/sectional/2080_program_saturday.html","timestamp":"2014-04-17T11:13:07Z","content_type":null,"content_length":"81290","record_id":"<urn:uuid:2c5b2472-87bb-44eb-a51e-d22c2d91520c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Eilon Solan Eilon Solan This is information that was supplied by Eilon Solan in registering through RePEc. If you are Eilon Solan , you may change this information at the RePEc Author Service . Or if you are not registered and would like to be listed as well, register at the RePEc Author Service . When you register or update your RePEc registration, you may identify the papers and articles you have authored. Personal Details First Name: Eilon Middle Name: Last Name: Solan RePEc Short-ID: pso68 Homepage: http://www.math.tau.ac.il/~eilons Postal Address: Tel Aviv University, School of Mathematical Sciences Homepage: http://www.math.tau.ac.il Location: Israel, Tel Aviv Working papers 1. Rida Laraki & Eilon Solan, 2012. "Equilibrium in Two-Player Nonzero-Sum Dynkin Games in Continuous Time," Working Papers hal-00753508, HAL. 2. Ehud Lehrer & Eilon Solan & Yannick Viossat, 2011. "Equilibrium payoffs in finite games," Post-Print hal-00361914, HAL. 3. Heller, Yuval & Solan, Eilon & Tomala, Tristan, 2010. "Communication, correlation and cheap-talk in games with public information," MPRA Paper 25895, University Library of Munich, Germany. 4. Marco Scarsini & Eilon Solan & Nicolas Vieille, 2010. "Lowest Unique Bid Auctions," Papers 1007.4264, arXiv.org. 5. Johannes Horner & Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2009. "On a Markov Game with One-Sided Incomplete Information," Cowles Foundation Discussion Papers 1737, Cowles Foundation for Research in Economics, Yale University. 6. Ehud Lehrer & Eilon Solan, 2007. "Learning to play partially-specified equilibrium," Levine's Working Paper Archive 122247000000001436, David K. Levine. 7. Vieille, Nicolas & Rosenberg, Dinah & Solan, Eilon, 2006. "Informational externalities and convergence of behavior," Les Cahiers de Recherche 856, HEC Paris. 8. Johannes Horner & Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2006. "On A Markov Game with Incomplete Information," Discussion Papers 1412, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 9. Eilon Solan & Eran Reshef, 2005. "The Effect of Filters on Spam Mail," Discussion Papers 1402, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 10. Eran Reshef & Eilon Solan, 2005. "Analysis of Do-Not-Spam Registry," Discussion Papers 1411, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 11. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2004. "Timing Games with Informational Externalities," Levine's Working Paper Archive 122247000000000704, David K. Levine. 12. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2004. "Social Learning in One-Arm Bandit Problems," Discussion Papers 1396, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 13. Ehud Lehrer & Eilon Solan, 2003. "No-Regret with Bounded Computational Capacity," Discussion Papers 1373, Northwestern University, Center for Mathematical Studies in Economics and Management 14. Ehud Lehrer & Eilon Solan, 2003. "Excludability and Bounded Computational Capacity Strategies," Discussion Papers 1374, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 15. Ehud Lehrer & Eilon Solan, 2003. "Zero-sum Dynamic Games and a Stochastic Variation of Ramsey Theorem," Discussion Papers 1375, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 16. Eilon Solan & Nicolas Vieille, 2003. "Equilibrium Uniqueness with Perfect Complements," Discussion Papers 1371, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 17. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2003. "Stochastic Games with Imperfect Monitoring," Discussion Papers 1376, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 18. Rida Laraki & Eilon Solan & Nicolas Vieille, 2003. "Continuous-time Games of Timing," Discussion Papers 1363, Northwestern University, Center for Mathematical Studies in Economics and Management 19. Eilon Solan & Nicolas Vielle, 2002. "Deterministic Multi-Player Dynkin Games," Discussion Papers 1355, Northwestern University, Center for Mathematical Studies in Economics and Management 20. Rida Laraki & Eilon Solan, 2002. "Stopping Games in Continuous Time," Discussion Papers 1354, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 21. Eilon Solan & Nicolas Vieille, 2002. "Perturbed Markov Chains," Discussion Papers 1342, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 22. VIEILLE, Nicolas & ROSENBERG, Dinah & SOLAN, Eilon, 2002. "Approximating a sequence of observations by a simple process," Les Cahiers de Recherche 756, HEC Paris. 23. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2002. "Stochastic Games with a Single Controller and Incomplete Information," Discussion Papers 1346, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 24. Eilon Solan, 2002. "Subgame-Perfection in Quitting Games with Perfect Information and Differential Equations," Discussion Papers 1356, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 25. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2002. "Approximating a Sequence of Approximations by a Simple Process," Discussion Papers 1345, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 26. Eran Schmaya & Eilon Solan & Nicolas Vieille, 2002. "Stopping games and Ramsey theorem," Working Papers hal-00242997, HAL. 27. Eran Shmaya & Eilon Solan, 2002. "Two Player Non Zero-Sum Stopping Games in Discrete Time," Discussion Papers 1347, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 28. VIEILLE, Nicolas & SHMAYA, Eran & SOLAN, Eilon, 2001. "An Application of Ramsey Theorem to stopping Games," Les Cahiers de Recherche 746, HEC Paris. 29. VIEILLE, Nicolas & SOLAN, Eilon, 2001. "Quitting games - an example," Les Cahiers de Recherche 747, HEC Paris. □ Eilon Solan & Nicolas Vieille, 2002. "Quitting games - An example," Working Papers hal-00242995, HAL. □ Eilon Solan & Nicholas Vieille, 2001. "Quitting Games - An Example," Discussion Papers 1314, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 30. VIEILLE, Nicolas & SOLAN, Eilon, 2001. "Stopping games: recent results," Les Cahiers de Recherche 744, HEC Paris. 31. VIEILLE, Nicolas & ROSENBERG, Dinah & SOLAN, Eilon, 2001. "On the maxmin value of stochastic games with imperfect monitoring," Les Cahiers de Recherche 760, HEC Paris. □ Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2001. "On the MaxMin Value of Stochastic Games with Imperfect Monitoring," Discussion Papers 1344, Northwestern University, Center for Mathematical Studies in Economics and Management Science. □ Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2002. "On the maxmin value of stochastic games with imperfect monitoring," Working Papers hal-00242999, HAL. □ Eilon Solan & Dinah Rosenberg & Nicolas Vieille, 2001. "On the Max Min Value of Stochastic Games with Imperfect Monitoring," Discussion Papers 1337, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 32. Eilon Solan, 2000. "The Dynamics of the Nash Equilibrium Correspondence and n-Player Stochastic Games," Discussion Papers 1311, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 33. Ehud Kalai & Eilon Solan, 2000. "Randomization and Simplification," Discussion Papers 1283, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 34. Eilon Solan, 2000. "Rationality and Extensive Form Correlated Equilibria in Stochastic Games," Discussion Papers 1298, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 35. Eilon Solan & Nicolas Vieille, 2000. "Uniform Value in Recursive Games," Discussion Papers 1293, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 36. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2000. "Blackwell Optimality in Markov Decision Processes with Partial Observation," Discussion Papers 1292, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 37. Eilon Solan, 2000. "Continuity of the Value in Stochastic Games," Discussion Papers 1310, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 38. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 1999. "Stopping Games with Randomized Strategies," Discussion Papers 1258, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 39. Nabil Al-Najjar & Eilon Solan, 1999. "Equilibrium Existence in Incomplete Information Games with Atomic Posteriors," Discussion Papers 1262, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 40. Eilon Solan & Rakesh V. Vohra, 1999. "Correlated Equilibrium, Public Signaling and Absorbing Games," Discussion Papers 1272, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 41. Eilon Solan & Nicolas Vieille, 1998. "Correlated Equilibrium in Stochastic Games," Discussion Papers 1226, Northwestern University, Center for Mathematical Studies in Economics and Management 42. Eilon Solan & Nicolas Vieille, 1998. "Quitting Games," Discussion Papers 1227, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 43. Eilon Solan & Leeat Yariv, 1998. "Games with Espionage," Discussion Papers 1257, Northwestern University, Center for Mathematical Studies in Economics and Management Science. RePEc:dgr:umamet:2010040 is not listed on IDEAS 1. Andrzej Nowak & Eilon Solan & Sylvain Sorin, 2013. "Preface: Special Issue on Stochastic Games," Dynamic Games and Applications, Springer, vol. 3(2), pages 125-127, June. 2. Renault, Jérôme & Solan, Eilon & Vieille, Nicolas, 2013. "Dynamic sender–receiver games," Journal of Economic Theory, Elsevier, vol. 148(2), pages 502-534. 3. Heller, Yuval & Solan, Eilon & Tomala, Tristan, 2012. "Communication, correlation and cheap-talk in games with public information," Games and Economic Behavior, Elsevier, vol. 74(1), pages 4. Lehrer, Ehud & Solan, Eilon & Viossat, Yannick, 2011. "Equilibrium payoffs of finite games," Journal of Mathematical Economics, Elsevier, vol. 47(1), pages 48-53, January. 5. Eilon Solan & Nicolas Vieille, 2010. "Computing uniformly optimal strategies in two-player stochastic games," Economic Theory, Springer, vol. 42(1), pages 237-253, January. 6. Rosenberg, Dinah & Solan, Eilon & Vieille, Nicolas, 2010. "On the optimal amount of experimentation in sequential decision problems," Statistics & Probability Letters, Elsevier, vol. 80(5-6), pages 381-385, March. 7. Alpern, Steve & Gal, Shmuel & Solan, Eilon, 2010. "A sequential selection game with vetoes," Games and Economic Behavior, Elsevier, vol. 68(1), pages 1-14, January. 8. Lehrer, Ehud & Solan, Eilon, 2009. "Approachability with bounded memory," Games and Economic Behavior, Elsevier, vol. 66(2), pages 995-1004, July. 9. Rosenberg, Dinah & Solan, Eilon & Vieille, Nicolas, 2009. "Informational externalities and emergence of consensus," Games and Economic Behavior, Elsevier, vol. 66(2), pages 979-994, July. 10. Solan, Eilon, 2008. "Learning from Michael Maschler and working with him," Games and Economic Behavior, Elsevier, vol. 64(2), pages 375-375, November. 11. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2007. "Social Learning in One-Arm Bandit Problems," Econometrica, Econometric Society, vol. 75(6), pages 1591-1611, November. 12. Eilon Solan & Nicolas Vieille, 2006. "Equilibrium uniqueness with perfect complements," Economic Theory, Springer, vol. 28(3), pages 721-726, 08. 13. Laraki, Rida & Solan, Eilon & Vieille, Nicolas, 2005. "Continuous-time games of timing," Journal of Economic Theory, Elsevier, vol. 120(2), pages 206-238, February. □ Rida Laraki & Eilon Solan & Nicolas Vieille, 2003. "Continuous-time Games of Timing," Discussion Papers 1363, Northwestern University, Center for Mathematical Studies in Economics and Management Science. □ Nicolas, VIEILLE & Rida, LARAKI & Eilon, SOLAN, 2003. "Continuous-Time Games of Timing," Les Cahiers de Recherche 773, HEC Paris. 14. Shmaya, Eran & Solan, Eilon, 2004. "Zero-sum dynamic games and a stochastic variation of Ramsey's theorem," Stochastic Processes and their Applications, Elsevier, vol. 112(2), pages 319-329, 15. Solan, Eilon & Yariv, Leeat, 2004. "Games with espionage," Games and Economic Behavior, Elsevier, vol. 47(1), pages 172-199, April. 16. Shmaya, Eran & Solan, Eilon & Vieille, Nicolas, 2003. "An application of Ramsey theorem to stopping games," Games and Economic Behavior, Elsevier, vol. 42(2), pages 300-306, February. □ VIEILLE, Nicolas & SHMAYA, Eran & SOLAN, Eilon, 2001. "An Application of Ramsey Theorem to stopping Games," Les Cahiers de Recherche 746, HEC Paris. □ Eran Shmaya & Eilon Solan & Nicolas Vieille, 2001. "An Application of Ramsey Theorem to Stopping Games," Discussion Papers 1323, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 17. Kalai, Ehud & Solan, Eilon, 2003. "Randomization and simplification in dynamic decision-making," Journal of Economic Theory, Elsevier, vol. 111(2), pages 251-264, August. 18. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2003. "The MaxMin value of stochastic games with imperfect monitoring," International Journal of Game Theory, Springer, vol. 32(1), pages 133-150, 19. Solan, Eilon & Vieille, Nicolas, 2003. "Deterministic multi-player Dynkin games," Journal of Mathematical Economics, Elsevier, vol. 39(8), pages 911-929, November. 20. Solan, Eilon & Vieille, Nicolas, 2002. "Correlated Equilibrium in Stochastic Games," Games and Economic Behavior, Elsevier, vol. 38(2), pages 362-399, February. 21. Eilon Solan & Rakesh V. Vohra, 2002. "Correlated equilibrium payoffs and public signalling in absorbing games," International Journal of Game Theory, Springer, vol. 31(1), pages 91-121. 22. Eilon Solan, 2001. "Characterization of correlated equilibria in stochastic games," International Journal of Game Theory, Springer, vol. 30(2), pages 259-277. 23. Solan, Eilon, 2000. "Absorbing Team Games," Games and Economic Behavior, Elsevier, vol. 31(2), pages 245-261, May. 1. Maschler,Michael & Solan,Eilon & Zamir,Shmuel, 2013. "Game Theory," Cambridge Books, Cambridge University Press, number 9781107005488, November. NEP Fields 39 papers by this author were announced in , and specifically in the following field reports (number of papers): This author is among the top 5% authors according to these criteria: Most cited item Most downloaded item (past 12 months) For general information on how to correct material on RePEc, see these instructions To update listings or check citations waiting for approval, Eilon Solan should log into the RePEc Author Service To make corrections to the bibliographic information of a particular item, find the technical contact on the abstract page of that item. There, details are also given on how to add or correct references and citations. To link different versions of the same work, where versions have a different title, use this form. Note that if the versions have a very similar title and are in the author's profile, the links will usually be created automatically. Please note that most corrections can take a couple of weeks to filter through the various RePEc services.
{"url":"http://ideas.repec.org/e/pso68.html","timestamp":"2014-04-20T04:00:47Z","content_type":null,"content_length":"56845","record_id":"<urn:uuid:6a5df791-dcce-4e10-be15-7c05363ddb4a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
cone = conofinf(wname,scales,LenSig,SigVal) [cone,PL,PR] = conofinf(wname,scales,LenSig,SigVal) [cone,PL,PR,PLmin,PRmax] = conofinf(wname,scales,LenSig,SigVal) [PLmin,PRmax] = conofinf(wname,scales,LenSig) [...] = conofinf(...,'plot') cone = conofinf(wname,scales,LenSig,SigVal) returns the cone of influence (COI) for the wavelet wname at the scales in scales and positions in SigVal. LenSig is the length of the input signal. If SigVal is a scalar, cone is a matrix with row dimension length(scales) and column dimension LenSig. If isa vector, cone is cell array of matrices. [cone,PL,PR] = conofinf(wname,scales,LenSig,SigVal) returns the left and right boundaries of the cone of influence atscale1for the points in . PL and PR are length(SigVal)-by-2 matrices. The left boundaries are(1-PL(:,2))./PL(:,1) and therightboundariesare(1-PR(:,2))./PR(:,1). [cone,PL,PR,PLmin,PRmax] = conofinf(wname,scales,LenSig,SigVal) returns the equations of the lines that define the minimal left and maximal right boundaries of the cone of influence. PLmin and PRmax are 1-by-2 row vectors where PLmin(1) and PRmax(1) are the slopes of the lines. PLmin(2) and PRmax(2) are the points where the lines intercept the scale axis. [PLmin,PRmax] = conofinf(wname,scales,LenSig) returns the slope and intercept terms for the first-degree polynomials defining the minimal left and maximal right vertices of the cone of influence. [...] = conofinf(...,'plot') plots the cone of influence. Input Arguments wname wname is a string corresponding to a valid wavelet. To verify that wname is a valid wavelet, wavemngr('fields',wname) must return a struct array with a type field of 1 or 2, or a nonempty bound field. scales scales is a vector of scales over which to compute the cone of influence. Larger scales correspond to stretched versions of the wavelet and larger boundary values for the cone of influence. LenSig LenSig is the signal length and must exceed the maximum of SigVal. SigVal SigVal is a vector of signal values at which to compute the cone of influence. The largest value of SigVal must be less than the signal length, LenSig.If SigVal is empty, conofinf returns the slope and intercept terms for the minimal left and maximal right vertices of the cone of influence. Output Arguments cone cone isthe cone of influence. If SigVal is a scalar, cone is a matrix. The row dimension is equal to the number of scales and column dimension equal to the signal length, LenSig. If SigVal is a vector, cone is a cell array of matrices. The elements of each row of the matrix are equal to 1 in the interval around SigVal corresponding to the cone of influence. PL PL is the minimum value of the cone of influence on the position (time) axis. PR PR is the maximum value of the cone of influence on the position (time) axis. PLmin PLmin is a 1-by-2 row vector containing the slope and scale axis intercept of the line defining the minimal left vertex of the cone of influence. PLmin(1) is the slope and PLmin(2) is the point where the line intercepts the scale axis. PRmax PRmax is a 1-by-2 row vector containing the slope and scale axis intercept of the line defining the maximal right vertex of the cone of influence. PRmax(1) is the slope and PRmax(2) is the point where the line intercepts the scale axis. Cone of influence for Mexican hat wavelet: load cuspamax signal = cuspamax; wname = 'mexh'; scales = 1:64; lenSIG = length(signal); x = 500; hold on [cone,PL,PR,Pmin,Pmax] = conofinf(wname,scales,lenSIG,x,'plot'); set(gca,'Xlim',[1 lenSIG]) Left minimal and right maximal vertices for the cone of influence (Morlet wavelet): [PLmin,PRmax] = conofinf('morl',1:32,1024,[],'plot'); % PLmin = -0.1245*u+ 32.0000 % PRmax = 0.1250*u-96.0000 More About Let ψ(t) be an admissible wavelet. Assume that the effective support of ψ(t) is [-B,B]. Letting u denote the translation parameter and s denote the scale parameter, the dilated and translated wavelet andhas effective support [u-sB,u+sB]. The cone of influence (COI) is the set of all t included in the effective support of the wavelet at a given position and scale. This set is equivalent to: At each scale, the COI determines the set of wavelet coefficients influenced by the value of the signal at a specified position. See Also cwt | wavsupport
{"url":"http://www.mathworks.cn/cn/help/wavelet/ref/conofinf.html?nocookie=true","timestamp":"2014-04-23T17:07:31Z","content_type":null,"content_length":"48237","record_id":"<urn:uuid:a2d4057e-0b99-4b71-ade2-133f9a56b019>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
On the Control of Navier Stokes Equations Thursday, July 16 The W. T. and Idalia Reid Prize On the Control of Navier Stokes Equations 5:45 PM-6:30 PM Chair: John Guckenheimer, President, SIAM; and Cornell University Room: Convocation Hall The W. T. and Idalia Reid Prize in Mathematics is awarded for research in, or other contributions to, the broadly defined areas of differential equations and control theory. Each prize may be given either for a single notable achievement or for a collection of such achievements. The prize winner and speaker is Jacques-Louis Lions. The speaker will consider the Navier Stokes equations with a control acting either inside the domain or, more realistically, on part of the boundary. He conjectured in 1990 that this system is approximately controllable. Important contributions to the proof of the conjecture have been made by J. M. Coron and F. Fursikov and Yu. Imanuvilov. After giving some of these results, the speaker will address the question of finding numerical approximations of these controls achieving approximate controllability and, in particular, real time numerical approximations. A systematic method will be presented, based on domain decomposition and on new ways of decomposition of domains, spaces and operators. (This is based on work by O. Pironneau and the A., being published in the CRAS.) Jacques-Louis Lions Collège de France, France LMH Created: 3/16/98; MMD Updated: 5/19/98
{"url":"http://www.siam.org/meetings/an98/ss5.htm","timestamp":"2014-04-16T05:20:07Z","content_type":null,"content_length":"3283","record_id":"<urn:uuid:7d281eb4-7755-41c6-a882-5b769eb05b1b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/elisabette/asked","timestamp":"2014-04-18T14:11:13Z","content_type":null,"content_length":"73916","record_id":"<urn:uuid:66922cc9-dd34-489c-8882-62074bf3c6bd>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50083b2ee4b020a2b3bd8d2a","timestamp":"2014-04-21T04:48:15Z","content_type":null,"content_length":"51689","record_id":"<urn:uuid:eed19f94-b02a-4bdb-8afe-caad130431fb>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
intersection point between gaussians August 11th 2008, 02:34 AM intersection point between gaussians I am doing a project in computer science where I have a need for determing the intersection point between two gaussians defined by their mean and standard variance e.g. g1(mu1,sigma1^2), g2 How do i find their intersection point in the case where the two distributions only have a single intersection point? August 11th 2008, 05:12 AM Just set it up and solve it. After introducing logorithms, it is just a quadratic equation. It's a mess, but it's not any trickier than the Quadratic Formula. There are two complications, both are easily resolved. 1) What to do with the solution you don't want and how to identify it. The desired intersection is the one between the means. 2) If the variance matches, the quadratic solution is no good. You'll have to rely on symmetry. August 17th 2008, 06:22 AM Just as a sanity check for finding the intersection check i post my calculations for obtaining the quadratic equation. It have been a while since i have dealt with equations and i would be really glad if someone could confirm my derivation of the quadratic equation. Gaussian equation $<br /> y=\frac{1}{(2\pi sigma1^{2})^{1/2}}e^{-\frac{1}{2sigma1^{2}}*(x-mu1)^{2}}<br />$ intersection point between two gaussians: $\frac{1}{(2\pi sigma1^{2})^{\frac{1}{2}}}e^{-\frac{1}{2sigma1^{2}}*(x-mu1)^{2}}=\\<br /> \frac{1}{(2\pi sigma2^{2})^{\frac{1}{2}}}e^{-\frac{1}{2sigma2^{2}}*(x-mu2)^{2}}<br />$ remove the e: $ln(\frac{1}{(2\pi sigma1^{2})^{1/2}}-\frac{1}{2sigma1^{2}}*(x-mu1)^{2}<br />$ $ln(\frac{1}{2\pi sigma2^{2})^{1/2})}-\frac{1}{2sigma2^{2}}*(x-mu2)^{2}<br />$ moving everything to one side of the equality sign: $ln(\frac{1}{(2\pi sigma1^{2})^{1/2}})-ln(\frac{1}{(2\pi sigma2^{2})^{1/2}})$ $<br /> -\frac{1}{2sigma1^{2}}*(x-mu1)^{2}+\frac{1}{2sigma2^{2}}*(x-mu2)^{2}=0<br />$ $<br /> K= ln(\frac{1}{(2\pi sigma1^{2})^{1/2}})-ln(\frac{1}{(2\pi sigma2^{2})^{1/2}})<br />$ find the quadratic equation: $K-\frac{1}{2sigma1^{2}}(x^{2}-2mu1*x+mu1^{2})+\frac{1}{2sigma2^{2}}(x^{2}-2mu2+mu2^{2})=0<br />$ $(\frac{1}{2sigma1^{2}}+\frac{1}{2sigma2^{2}})x^{2} +(\frac{1}{2sigma1^{2}}2mu1-\frac{1}{2sigma2^{2}}2mu2)x<br />$ $+K-\frac{mu1^{2}}{2sigma1^{2}}+\frac{mu2^{2}}{2sigma2 ^{2}}=0<br />$ August 17th 2008, 11:44 PM Ok, now i am confused. Is it possible to solve this equation as i have done by using a quadratic equation or is it not? I need the intersection point between the two means, and also know that gaussian1=/gaussian2. Best regards August 18th 2008, 12:16 AM mr fantastic $\frac{1}{\sqrt{2 \pi} \, \sigma_1} e^{\frac{-(x - \mu_1)^2}{2\sigma_1^2}} = \frac{1}{\sqrt{2 \pi} \, \sigma_2} e^{\frac{-(x - \mu_2)^2}{2\sigma_2^2}}$ $\Rightarrow e^{ \frac{ -(x - \mu_1)^2}{2\sigma_1^2} + \frac{(x - \mu_2)^2}{2\sigma_2^2} } = \frac{\sigma_1}{\sigma_2}$ $\Rightarrow \frac{ -(x - \mu_1)^2}{2\sigma_1^2} + \frac{(x - \mu_2)^2}{2\sigma_2^2} = \ln \left( \frac{\sigma_1}{\sigma_2} \right)$ $\Rightarrow -\sigma_2^2 (x - \mu_1)^2 + \sigma_1^2 (x - \mu_2)^2 = 2 \sigma_2^2 \sigma_1^2 \ln \left( \frac{\sigma_1}{\sigma_2} \right)$ It is simple but tedious to expand the left hand side, re-arrange and solve the quadratic for x. I'd suggest introducing some notation to streamline things. Use the discriminant to set conditions on the mean and variances such that you have the desired number of solutions. August 19th 2008, 12:44 AM Thanks a lot for the answer mrfantastic. So it is possible to solve this problem using the equations, and i do not have to solve it numerically. September 7th 2008, 04:23 AM well to completely show how much i sucks at this i would much appreciate a review of the quadratic equation since the results are not correct. $<br /> -\frac{1}{2\sigma_{1}^{2}}(x^{2}-2\mu_{1}x+\mu_{1}^{2})+\frac{1}{2\sigma_{2}^{2}}(x ^{2}-2\mu_{2}x+\mu_{2}^{2})<br />$ more rearranging $<br /> -\frac{1}{2\sigma_{1}^{2}}+\frac{1}{2\sigma_{2}^{2} }x^{2}+\mu_{1}\frac{1}{2\sigma_{1}^{2}}-\mu_{2}\frac{1}{2\sigma_{2}^{2}}x-\frac{1}{2\sigma_{1}^{2}}\mu_{1}^{2}+\frac{1}{2\si gma_{2}^ {2}}\mu_{2}^{2}-ln\left(\frac{\sigma_{1}}{\sigma_{2}}\right)=0<br />$ when inserting $<br /> \mu_{1}=-955 \textrm{ }\sigma_{1}^{2}=396<br />$ $<br /> \mu_{2}=-1578 \textrm{ }\sigma_{2}^{2}=1117<br />$ I get y values -1878 and -1433 and this confuses me a bit. I would expect very small numbers denoting the probability and according to my plot it should be around 0.000036. The intersection point according to the plot should lie between -1530 and -1520. September 7th 2008, 09:52 AM Hmm the intersection data was wrong, so here is the correct approximated solution. for two Gaussians with $<br /> \mu_{1}=-955\textrm{ }\sigma_{1}^{2}=396<br />$ $<br /> \mu_{2}=-1578\textrm{ }\sigma_{2}^{2}=1117<br />$ the approximated intersection point is at -1190 and the probability being But i still haven't been able to poduce the correct answer using the quadratic equation. (Doh)
{"url":"http://mathhelpforum.com/advanced-applied-math/45724-intersection-point-between-gaussians-print.html","timestamp":"2014-04-21T05:13:30Z","content_type":null,"content_length":"16089","record_id":"<urn:uuid:8a4efad0-d04e-4bd4-af38-4ede7e1a2688>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Draft of 30 Jan 2005 4:38 p.m. Constrained Entropy, Free Energy, and the Legendre Transform S. M. Aji, S. L. Fogal, R. J. McEliece, and B. Wang California Institute of Technology Abstract. The Legendre transform is well known to physicists but perhaps not so well known to information theorists. To remedy this, in this paper we carefully describe the Legendre transform and illustrate its usefulness to information theorists by showing that it transforms a broad class of maximum entropy functions (x) into "free energy" functions F(s) which are easier to compute. Then if s is the unique solution to the equation F(s) = x, then it is advantageous to compute (x) by the formula (x) = x · s - F(s Alternatively, we can use F(s) to give a parametric description of (x): x = F(s) = F(s) · s - F(s) for s L, the domain of f(s). 1. The Legendre Transform for Information Theorists. In this section we give our version of the Legendre transform, specially tailored for infor-
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/342/1975795.html","timestamp":"2014-04-16T07:30:15Z","content_type":null,"content_length":"8110","record_id":"<urn:uuid:5780e9a5-544a-4a75-a7e5-1286ce0ef49e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
*Statistics: An Activity Based Approach Featured Course – Statistics: An Activity Based Approach In recent election cycles Americans have been bombarded with the results of polls designed to gauge voter sentiment and, ultimately, to predict the results of the election. But as the use and importance of polls has grown so has the level of misunderstanding of what the results actually mean. For example, imagine that there’s an election tomorrow and you’ve just heard a news report stating that candidate A is favored by 55% of voters, candidate B is favored by 45%, and that the margin of error is 3 points. Assuming the poll was well designed and executed (a very big assumption), does this mean that candidate A has a safe margin over candidate B? Many would answer yes but, in reality, the lead is safe except for the 5% chance that it is not. This 5% chance that the poll is wrong comes from the confidence level and, even though this number is rarely mentioned in news reports, every poll that states a margin of error also has a level of confidence for that estimate, usually 95%. This level of confidence represents the likelihood that chance alone has produced a result that falls outside of the margin of error. So the results reported above actually mean that there is 95% likelihood that between 52% and 58% of voters favor candidate A (and a 5% chance that the actual number is somewhere outside of that range). If the election was held tomorrow, and candidate B won, would it mean that the statistics didn’t work? No, it would simply mean that the pollster was “unlucky” and got a sample that was not representative of actual voter sentiment. The confidence level tells us that there is a 1 in 20 chance (5%) that this could happen. Students in this course build a solid understanding of basic statistical principles like this and they learn to apply their knowledge with confidence in a variety of common scenarios. Featured Activity: Cancer Mortality – Visualization Using Charts and Maps The amount of data available via the Internet for research, exploration, or just “seeing what’s going on” is truly staggering. Governmental bodies at all levels, researchers, interest groups, and non-governmental organizations have all recognized that making their data available is simply good policy. Among the most prominent providers of general interest data are the U.S. Census Bureau, the Unites State Geological Survey (USGS), the Environmental Protection Agency (EPA) and the United Nations. And exploring this data and learning from it doesn’t have have to be dull and academic. If doubt the truth of this statement watch these two videos of the Swedish Doctor, Hans Rosling, talking about global economic and health trends at the TED Conference. Hans Rosling’s new Insights on Poverty Rosling’s research makes use of freely available data from sources including the United Nations, and his presentation demonstrates the value of using effective tools for analyzing data and communicating the results of that analysis. You’ll find much more information about Rosling’s work and the software shown in the videos on the web site, Gapminder.org. The National Cancer Institute (NCI) NCI provides access to cancer mortality data for all forms of cancer by state and by demographic. Students in this course use the graphing and mapping tools provided by NCI to examine the data and look for trends and information in the data. For more information, visit the National Cancer Institute website. Another source for cancer mapping is the CDC Cancer Statistics website.
{"url":"http://commons.esc.edu/smatresources/statistics-an-activity-based-approach/","timestamp":"2014-04-20T03:11:19Z","content_type":null,"content_length":"56443","record_id":"<urn:uuid:b6675af3-55e3-48e0-91e1-b2e66bcdf374>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Google spreadsheets function list Google Spreadsheets supports cell formulas typically found in most desktop spreadsheet packages. These formulas can be used to create functions that manipulate data and calculate strings and numbers. Here's a list of all the functions available in each category. When using them, don't forget to add quotation marks around all function components made of alphabetic characters that aren't referring to cells or columns. The new Google Sheets includes a number of additional functions. These functions include ARRAY_CONSTRAIN, ARRAY_LITERAL, ARRAY_ROW, CELL, CLEAN, DELTA, ISEMAIL, ISURL, TIMEVALUE, LOOKUP, PERCENTRANK.EXC, PERCENTRANK.INC, RANK.AVG, RANK.EQ, TYPE, WEEKNUM, SUMIFS, COUNTIFS, AVERAGEIF, AVERAGEIFS, NETWORKDAYS.INTL, WORKDAY.INTL, SEARCHB, FINDB, and TDIST. Type Name Syntax Description Date DATE DATE(year, month, day) Converts a provided year, month, and day into a date. Learn more Date DATEVALUE DATEVALUE(date_string) Converts a provided date string in a known format to a date value. Learn more Date DAY DAY(date) Returns the day of the month that a specific date falls on, in numeric format. Learn more Date DAYS360 DAYS360(start_date, end_date, method) Returns the difference between two days based on the 360 day year used in some financial interest calculations. Learn more Date EDATE EDATE(start_date) Returns a date a specified number of months before or after another date. Learn more Date EOMONTH EOMONTH(start_date, months) Returns a date representing the last day of a month which falls a specified number of months before or after another date. Learn more Date HOUR HOUR(time) Returns the hour component of a specific time, in numeric format. Learn more Date MINUTE MINUTE(time) Returns the minute component of a specific time, in numeric format. Learn more Date MONTH MONTH(date) Returns the month of the year a specific date falls in, in numeric format. Learn more Date NETWORKDAYS NETWORKDAYS(start_date, end_date, holidays) Returns the number of net working days between two provided days. Learn more Date NETWORKDAYS.INTL NETWORKDAYS.INTL(start_date, end_date, Returns the number of net working days between two provided days excluding specified weekend days and holidays. Only [weekend], [holidays]) available in the new Google Sheets. Learn more Date NOW NOW() Returns the current date and time as a date value. Learn more Date SECOND SECOND(time) Returns the second component of a specific time, in numeric format. Learn more Date TIME TIME(hour, minute, second) Converts a provided hour, minute, and second into a time. Learn more Date TIMEVALUE TIMEVALUE(time_string) Returns the fraction of a 24-hour day the time represents. Only available in the new Google Sheets. Learn more Date TODAY TODAY() Returns the current date as a date value. Learn more Date WEEKDAY WEEKDAY(date, type) Returns a number representing the day of the week of the date provided. Learn more Date WEEKNUM WEEKNUM(date, [type]) Returns a number representing the week of the year where the provided date falls. Only available in the new Google Sheets. Learn more Date WORKDAY WORKDAY(start_date, num_days, holidays) Calculates the number of working days from a specified start date. Learn more Date WORKDAY.INTL WORKDAY.INTL(start_date, num_days, [weekend], Calculates the date after a specified number of workdays excluding specified weekend days and holidays. Only available in [holidays]) the new Google Sheets. Learn more Date YEAR YEAR(date) Returns the year specified by a given date. Learn more Date YEARFRAC YEARFRAC(start_date, end_date, Returns the number of years, including fractional years, between two dates using a specified day count convention. Learn day_count_convention) more Engineering BIN2DEC BIN2DEC(signed_binary_number) Converts a signed binary number to decimal format. Learn more Engineering BIN2HEX BIN2HEX(signed_binary_number, Converts a signed binary number to signed hexadecimal format. Learn more Engineering BIN2OCT BIN2OCT(signed_binary_number, Converts a signed binary number to signed octal format. Learn more Engineering DEC2BIN DEC2BIN(decimal_number, significant_digits) Converts a decimal number to signed binary format. Learn more Engineering DEC2HEX DEC2HEX(decimal_number, significant_digits) Converts a decimal number to signed hexadecimal format. Learn more Engineering DEC2OCT DEC2OCT(decimal_number, significant_digits) Converts a decimal number to signed octal format. Learn more Engineering DELTA DELTA(number1, [number2]) Compare two numeric values, returning 1 if they're equal. Only available in the new Google Sheets. Learn more Engineering HEX2BIN HEX2BIN(signed_hexadecimal_number, Converts a signed hexadecimal number to signed binary format. Learn more Engineering HEX2DEC HEX2DEC(signed_hexadecimal_number) Converts a signed hexadecimal number to decimal format. Learn more Engineering HEX2OCT HEX2OCT(signed_hexadecimal_number, Converts a signed hexadecimal number to signed octal format. Learn more Engineering OCT2BIN OCT2BIN(signed_octal_number, Converts a signed octal number to signed binary format. Learn more Engineering OCT2DEC OCT2DEC(signed_octal_number) Converts a signed octal number to decimal format. Learn more Engineering OCT2HEX OCT2HEX(signed_octal_number, Converts a signed octal number to signed hexadecimal format. Learn more Filter FILTER FILTER(range, condition1, condition2) Returns a filtered version of the source range, returning only rows or columns which meet the specified conditions. Learn Filter SORT SORT(range, sort_column, is_ascending, Sorts the rows of a given array or range by the values in one or more columns. Learn more sort_column2, is_ascending2) Filter UNIQUE UNIQUE(range) Returns unique rows in the provided source range, discarding duplicates. Rows are returned in the order in which they first appear in the source range. Learn more Financial ACCRINT ACCRINT(issue, first_payment, settlement, rate, Calculates the accrued interest of a security that has periodic payments. Learn more redemption, frequency, day_count_convention) Financial ACCRINTM ACCRINTM(issue, maturity, rate, redemption, Calculates the accrued interest of a security that pays interest at maturity. Learn more Financial COUPDAYBS COUPDAYBS(settlement, maturity, frequency, Calculates the number of days from the first coupon, or interest payment, until settlement. Learn more Financial COUPDAYS COUPDAYS(settlement, maturity, frequency, Calculates the number of days in the coupon, or interest payment, period that contains the specified settlement date. day_count_convention) Learn more Financial COUPDAYSNC COUPDAYSNC(settlement, maturity, frequency, Calculates the number of days from the settlement date until the next coupon, or interest payment. Learn more Financial COUPNCD COUPNCD(settlement, maturity, frequency, Calculates next coupon, or interest payment, date after the settlement date. Learn more Financial COUPNUM COUPNUM(settlement, maturity, frequency, Calculates the number of coupons, or interest payments, between the settlement date and the maturity date of the day_count_convention) investment. Learn more Financial COUPPCD COUPPCD(settlement, maturity, frequency, Calculates last coupon, or interest payment, date before the settlement date. Learn more Financial CUMIPMT CUMIPMT(rate, number_of_periods, present_value, Calculates the cumulative interest over a range of payment periods for an investment based on constant-amount periodic first_period, last_period, end_or_beginning) payments and a constant interest rate. Learn more CUMPRINC(rate, number_of_periods, Calculates the cumulative principal paid over a range of payment periods for an investment based on constant-amount Financial CUMPRINC present_value, first_period, last_period, periodic payments and a constant interest rate. Learn more Financial DB DB(cost, salvage, life, period, month) Calculates the depreciation of an asset for a specified period using the arithmetic declining balance method. Learn more Financial DDB DDB(cost, salvage, life, period, factor) Calculates the depreciation of an asset for a specified period using the double-declining balance method. Learn more Financial DISC DISC(settlement, maturity, price, redemption, Calculates the discount rate of a security based on price. Learn more Financial DOLLARDE DOLLARDE(fractional_price, unit) Converts a price quotation given as a decimal fraction into a decimal value. Learn more Financial DOLLARFR DOLLARFR(decimal_price, unit) Converts a price quotation given as a decimal value into a decimal fraction. Learn more Financial DURATION DURATION(rate, present_value, future_value) Calculates the number of compounding periods required for an investment of a specified present value appreciating at a given rate to reach a target value. Learn more Financial EFFECT EFFECT(nominal_rate, periods_per_year) Calculates the annual effective interest rate given the nominal rate and number of compounding periods per year. Learn Financial FV FV(rate, number_of_periods, payment_amount, Calculates the future value of an annuity investment based on constant-amount periodic payments and a constant interest present_value, end_or_beginning) rate. Learn more Financial FVSCHEDULE FVSCHEDULE(principal, rate_schedule) Calculates the future value of some principal based on a specified series of potentially varying interest rates. Learn Financial INTRATE INTRATE(buy_date, sell_date, buy_price, Calculates the effective interest rate generated when an investment is purchased at one price and sold at another with no sell_price, day_count_convention) interest or dividends generated by the investment itself. Learn more Financial IPMT IPMT(rate, period, number_of_periods, Calculates the payment on interest for an investment based on constant-amount periodic payments and a constant interest present_value, future_value, end_or_beginning) rate. Learn more Financial IRR IRR(cashflow_amounts, rate_guess) Calculates the internal rate of return on an investment based on a series of periodic cash flows. Learn more Financial MDURATION MDURATION(settlement, maturity, rate, yield, Calculates the modified Macaulay duration of a security paying periodic interest, such as a US Treasury Bond, based on frequency, day_count_convention) expected yield. Learn more Financial MIRR MIRR(cashflow_amounts, financing_rate, Calculates the modified internal rate of return on an investment based on a series of periodic cash flows and the reinvestment_return_rate) difference between the interest rate paid on financing versus the return received on reinvested income. Learn more Financial NOMINAL NOMINAL(effective_rate, periods_per_year) Calculates the annual nominal interest rate given the effective rate and number of compounding periods per year. Learn Financial NPER NPER(rate, payment_amount, present_value, Calculates the number of payment periods for an investment based on constant-amount periodic payments and a constant future_value, end_or_beginning) interest rate. Learn more Financial NPV NPV(discount, cashflow1, cashflow2) Calculates the net present value of an investment based on a series of periodic cash flows and a discount rate. Learn more Financial PMT PMT(rate, number_of_periods, present_value, Calculates the periodic payment for an annuity investment based on constant-amount periodic payments and a constant future_value, end_or_beginning) interest rate. Learn more Financial PPMT PPMT(rate, period, number_of_periods, Calculates the payment on the principal of an investment based on constant-amount periodic payments and a constant present_value, future_value, end_or_beginning) interest rate. Learn more Financial PRICE PRICE(settlement, maturity, rate, yield, Calculates the price of a security paying periodic interest, such as a US Treasury Bond, based on expected yield. Learn redemption, frequency, day_count_convention) more Financial PRICEDISC PRICEDISC(settlement, maturity, discount, Calculates the price of a discount (non-interest-bearing) security, based on expected yield. Learn more redemption, day_count_convention) Financial PRICEMAT PRICEMAT(settlement, maturity, issue, rate, Calculates the price of a security paying interest at maturity, based on expected yield. Learn more yield, day_count_convention) Financial PV PV(rate, number_of_periods, payment_amount, Calculates the present value of an annuity investment based on constant-amount periodic payments and a constant interest future_value, end_or_beginning) rate. Learn more RATE(number_of_periods, payment_per_period, Calculates the interest rate of an annuity investment based on constant-amount periodic payments and the assumption of a Financial RATE present_value, future_value, end_or_beginning, constant interest rate. Learn more Financial RECEIVED RECEIVED(settlement, maturity, investment, Calculates the amount received at maturity for an investment in fixed-income securities purchased on a given date. Learn discount, day_count_convention) more Financial SLN SLN(cost, salvage, life) Calculates the depreciation of an asset for one period using the straight-line method. Learn more Financial SYD SYD(cost, salvage, life, period) Calculates the depreciation of an asset for a specified period using the sum of years digits method. Learn more Financial TBILLEQ TBILLEQ(settlement, maturity, discount) Calculates the equivalent annualized rate of return of a US Treasury Bill based on discount rate. Learn more Financial TBILLPRICE TBILLPRICE(settlement, maturity, discount) Calculates the price of a US Treasury Bill based on discount rate. Learn more Financial TBILLYIELD TBILLYIELD(settlement, maturity, price) Calculates the yield of a US Treasury Bill based on price. Learn more Financial XIRR XIRR(cashflow_amounts, cashflow_dates, Calculates the internal rate of return of an investment based on a specified series of potentially irregularly spaced cash rate_guess) flows. Learn more Financial XNPV XNPV(discount, cashflow_amounts, Calculates the net present value of an investment based on a specified series of potentially irregularly spaced cash flows cashflow_dates) and a discount rate. Learn more Financial YIELD YIELD(settlement, maturity, rate, price, Calculates the annual yield of a security paying periodic interest, such as a US Treasury Bond, based on price. Learn more redemption, frequency, day_count_convention) Financial YIELDDISC YIELDDISC(settlement, maturity, price, Calculates the annual yield of a discount (non-interest-bearing) security, based on price. Learn more redemption, day_count_convention) Google ARRAYFORMULA ARRAYFORMULA(array_formula) Enables the display of values returned from an array formula into multiple rows and/or columns and the use of non-array functions with arrays. Learn more Google CONTINUE CONTINUE(source_cell, row, column) Returns a specified cell from an array formula result. Learn more Google DETECTLANGUAGE DETECTLANGUAGE(text_or_range) Identifies the language used in text within the specified range. Learn more Google GOOGLECLOCK GOOGLECLOCK() Returns the current system date and time and updates automatically once per minute. Learn more Google GOOGLEFINANCE GOOGLEFINANCE(ticker, attribute, start_date, Fetches current or historical securities information from Google Finance. Learn more end_date|num_days, interval) Google GOOGLETOURNAMENT GOOGLETOURNAMENT(year, league, round, Returns data for March Madness (NCAA Division I Basketball Championship) games. Learn more game_slot, statistic, team) Google GOOGLETRANSLATE GOOGLETRANSLATE(text, source_language, Translates text from one language into another/ Learn more Google IMAGE IMAGE(url, mode) Inserts an image into a cell. Learn more Google IMPORTDATA IMPORTDATA(url) Imports data at a given url in .csv (comma-separated value) or .tsv (tab-separated value) format. Learn more Google IMPORTFEED IMPORTFEED(url, query, headers, num_items) Imports a RSS or ATOM feed. Learn more Google IMPORTHTML IMPORTHTML(url, query, index) Imports data from a table or list within an HTML page. Learn more Google IMPORTRANGE IMPORTRANGE(spreadsheet_key, range_string) Imports a range of cells from a specified spreadsheet. Learn more Google IMPORTXML IMPORTXML(url, xpath_query) Imports data from any of various structured data types including XML, HTML, CSV, TSV, and RSS and ATOM XML feeds. Learn Google QUERY QUERY(data, query, headers) Runs a Google Visualization API Query Language query across data. Learn more Google SPARKLINE SPARKLINE(data, options) Creates a miniature chart contained within a single cell. Learn more The GoogleLookup function was retired in November 2011. This function relied on technology from Google Squared, a Google Google GoogleLookup GoogleLookup(entity, attribute) Lab that has been shut down. As a result, the GoogleLookup function can no longer be used, and cells that contain GoogleLookup functions will return an error. Info ERROR.TYPE ERROR.TYPE(reference) Returns a number corresponding to the error value in a different cell. Learn more Info ISBLANK ISBLANK(value) Checks whether the referenced cell is empty. Learn more Info ISEMAIL ISEMAIL(value) Checks whether a value is a valid email address. Only available in the new Google Sheets. Learn more Info ISERR ISERR(value) Checks whether a value is an error other than `#N/A`. Learn more Info ISERROR ISERROR(value) Checks whether a value is an error. Learn more Info ISLOGICAL ISLOGICAL(value) Checks whether a value is `TRUE` or `FALSE`. Learn more Info ISNA ISNA(value) Checks whether a value is the error `#N/A`. Learn more Info ISNONTEXT ISNONTEXT(value) Checks whether a value is non-textual. Learn more Info ISNUMBER ISNUMBER(value) Checks whether a value is a number. Learn more Info ISREF ISREF(value) Checks whether a value is a valid cell reference. Learn more Info ISTEXT ISTEXT(value) Checks whether a value is text. Learn more Info N N(value) Returns the argument provided as a number. Learn more Info NA NA() Returns the "value not available" error, `#N/A`. Learn more Info TYPE TYPE(value) Returns a number associated with the type of data passed into the function. Only available in the new Google Sheets. Learn Info CELL CELL(info_type, reference) Returns the requested information about the specified cell. Only available in the new Google Sheets. Learn more Info ISURL ISURL(value) Checks whether a value is a valid URL. Only available in the new Google Sheets. Learn more Logical AND AND(logical_expression1, logical_expression2) Returns true if all of the provided arguments are logically true, and false if any of the provided arguments are logically false. Learn more Logical FALSE FALSE() Returns the logical value `FALSE`. Learn more Logical IF IF(logical_expression, value_if_true, Returns one value if a logical expression is `TRUE` and another if it is `FALSE`. Learn more Logical IFERROR IFERROR(value, value_if_error) Returns the first argument if it is not an error value, otherwise returns the second argument if present, or a blank if the second argument is absent. Learn more Logical NOT NOT(logical_expression) Returns the opposite of a logical value - `NOT(TRUE)` returns `FALSE`; `NOT(FALSE)` returns `TRUE`. Learn more Logical OR OR(logical_expression1, logical_expression2) Returns true if any of the provided arguments are logically true, and false if all of the provided arguments are logically false. Learn more Logical TRUE TRUE() Returns the logical value `TRUE`. Learn more Lookup ADDRESS ADDRESS(row, column, absolute_relative_mode, Returns a cell reference as a string. Learn more use_a1_notation, sheet) Lookup CHOOSE CHOOSE(index, choice1, choice2) Returns an element from a list of choices based on index. Learn more Lookup COLUMN COLUMN(cell_reference) Returns the column number of a specified cell, with `A=1`. Learn more Lookup COLUMNS COLUMNS(range) Returns the number of columns in a specified array or range. Learn more Lookup HLOOKUP HLOOKUP(search_key, range, index, is_sorted) Horizontal lookup. Searches across the first row of a range for a key and returns the value of a specified cell in the column found. Learn more Lookup HYPERLINK HYPERLINK(url, link_label) Creates a hyperlink inside a cell. Learn more Lookup INDEX INDEX(reference, row, column) Returns the content of a cell, specified by row and column offset. Learn more Lookup INDIRECT INDIRECT(cell_reference_as_string) Returns a cell reference specified by a string. Learn more Lookup LOOKUP LOOKUP(search_key, search_range| Looks through a row or column for a key and returns the value of the cell in a result range located in the same position search_result_array, [result_range]) as the search row or column. Only available in the new Google Sheets. Learn more Lookup MATCH MATCH(search_key, range, search_type) Returns the relative position of an item in a range that matches a specified value. Learn more Lookup OFFSET OFFSET(cell_reference, offset_rows, Returns a range reference shifted a specified number of rows and columns from a starting cell reference. Learn more offset_columns, height, width) Lookup ROW ROW(cell_reference) Returns the row number of a specified cell. Learn more Lookup ROWS ROWS(range) Returns the number of rows in a specified array or range. Learn more Lookup VLOOKUP VLOOKUP(search_key, range, index, is_sorted) Vertical lookup. Searches down the first column of a range for a key and returns the value of a specified cell in the row found. Learn more Math ABS ABS(value) Returns the absolute value of a number. Learn more Math ACOS ACOS(value) Returns the inverse cosine of a value, in radians. Learn more Math ACOSH ACOSH(value) Returns the inverse hyperbolic cosine of a number. Learn more Math ASIN ASIN(value) Returns the inverse sine of a value, in radians. Learn more Math ASINH ASINH(value) Returns the inverse hyperbolic sine of a number. Learn more Math ATAN ATAN(value) Returns the inverse tangent of a value, in radians. Learn more Math ATAN2 ATAN2(x, y) Returns the angle between the x-axis and a line segment from the origin (0,0) to specified coordinate pair (`x`,`y`), in radians. Learn more Math ATANH ATANH(value) Returns the inverse hyperbolic tangent of a number. Learn more Math CEILING CEILING(value, factor) Rounds a number up to the nearest integer multiple of specified significance. Learn more Math COMBIN COMBIN(n, k) Returns the number of ways to choose some number of objects from a pool of a given size of objects. Learn more Math COS COS(angle) Returns the cosine of an angle provided in radians. Learn more Math COSH COSH(value) Returns the hyperbolic cosine of any real number. Learn more Math COUNTBLANK COUNTBLANK(range) Returns the number of empty cells in a given range. Learn more Math COUNTIF COUNTIF(range, criterion) Returns a conditional count across a range. Learn more Math COUNTIFS COUNTIFS(criteria_range1, criterion1, Returns the count of a range depending on multiple criteria. Only available in the new Google Sheets. Learn more [criteria_range2, criterion2, ...]) Math COUNTUNIQUE COUNTUNIQUE(value1, value2) Counts the number of unique values in a list of specified values and ranges. Learn more Math DEGREES DEGREES(angle) Converts an angle value in radians to degrees. Learn more Math ERFC ERFC(z) Returns the complementary Gauss error function of a value. Learn more Math EVEN EVEN(value) Rounds a number up to the nearest even integer. Learn more Math EXP EXP(exponent) Returns Euler's number, e (~2.718) raised to a power. Learn more Math FACT FACT(value) Returns the factorial of a number. Learn more Math FACTDOUBLE FACTDOUBLE(value) Returns the "double factorial" of a number. Learn more Math FLOOR FLOOR(value, factor) Rounds a number down to the nearest integer multiple of specified significance. Learn more Math GAMMALN GAMMALN(value) Returns the the logarithm of a specified Gamma function, base e (Euler's number). Learn more Math GCD GCD(value1, value2) Returns the greatest common divisor of one or more integers. Learn more Math INT INT(value) Rounds a number down to the nearest integer that is less than or equal to it. Learn more Math ISEVEN ISEVEN(value) Checks whether the provided value is even. Learn more Math ISODD ISODD(value) Checks whether the provided value is odd. Learn more Math LCM LCM(value1, value2) Returns the least common multiple of one or more integers. Learn more Math LN LN(value) Returns the the logarithm of a number, base e (Euler's number). Learn more Math LOG LOG(value, base) Returns the the logarithm of a number given a base. Learn more Math LOG10 LOG10(value) Returns the the logarithm of a number, base 10. Learn more Math MOD MOD(dividend, divisor) Returns the result of the modulo operator, the remainder after a division operation. Learn more Math MROUND MROUND(value, factor) Rounds one number to the nearest integer multiple of another. Learn more Math MULTINOMIAL MULTINOMIAL(value1, value2) Returns the factorial of the sum of values divided by the product of the values' factorials. Learn more Math ODD ODD(value) Rounds a number up to the nearest odd integer. Learn more Math PI PI() Returns the value of Pi to 14 decimal places. Learn more Math POWER POWER(base, exponent) Returns a number raised to a power. Learn more Math PRODUCT PRODUCT(factor1, factor2) Returns the result of multiplying a series of numbers together. Learn more Math QUOTIENT QUOTIENT(dividend, divisor) Returns one number divided by another. Learn more Math RADIANS RADIANS(angle) Converts an angle value in degrees to radians. Learn more Math RAND RAND() Returns a random number between 0 inclusive and 1 exclusive. Learn more Math RANDBETWEEN RANDBETWEEN(low, high) Returns a uniformly random integer between two values, inclusive. Learn more Math ROUND ROUND(value, places) Rounds a number to a certain number of decimal places according to standard rules. Learn more Math ROUNDDOWN ROUNDDOWN(value, places) Rounds a number to a certain number of decimal places, always rounding down to the next valid increment. Learn more Math ROUNDUP ROUNDUP(value, places) Rounds a number to a certain number of decimal places, always rounding up to the next valid increment. Learn more Math SERIESSUM SERIESSUM(x, n, m, a) Given parameters x, n, m, and a, returns the power series sum a[1]x^n + a[2]x^(n+m) + ... + a[i]x^(n+(i-1)m), where i is the number of entries in range `a`. Learn more Math SIGN SIGN(value) Given an input number, returns `-1` if it is negative, `1` if positive, and `0` if it is zero. Learn more Math SIN SIN(angle) Returns the sine of an angle provided in radians. Learn more Math SINH SINH(value) Returns the hyperbolic sine of any real number. Learn more Math SQRT SQRT(value) Returns the positive square root of a positive number. Learn more Math SQRTPI SQRTPI(value) Returns the positive square root of the product of Pi and the given positive number. Learn more Math SUBTOTAL SUBTOTAL(function_code, range1, range2) Returns a subtotal for a vertical range of cells using a specified aggregation function. Learn more Math SUM SUM(value1, value2) Returns the sum of a series of numbers and/or cells. Learn more Math SUMIF SUMIF(range, criterion, sum_range) Returns a conditional sum across a range. Learn more Math SUMIFS SUMIFS(sum_range, criteria_range1, criterion1, Returns the sum of a range depending on multiple criteria. Only available in the new Google Sheets. Learn more [criteria_range2, criterion2, ...]) Math SUMSQ SUMSQ(value1, value2) Returns the sum of the squares of a series of numbers and/or cells. Learn more Math TAN TAN(angle) Returns the tangent of an angle provided in radians. Learn more Math TANH TANH(value) Returns the hyperbolic tangent of any real number. Learn more Math TRUNC TRUNC(value, places) Truncates a number to a certain number of significant digits by omitting less significant digits. Learn more Operator ADD ADD(value1, value2) Returns the sum of two numbers. Equivalent to the `+` operator. Learn more Operator CONCAT CONCAT(value1, value2) Returns the concatenation of two values. Equivalent to the `&` operator. Learn more Operator DIVIDE DIVIDE(dividend, divisor) Returns one number divided by another. Equivalent to the `/` operator. Learn more Operator EQ EQ(value1, value2) Returns `TRUE` if two specified values are equal and `FALSE` otherwise. Equivalent to the `==` operator. Learn more Operator GT GT(value1, value2) Returns `TRUE` if the first argument is strictly greater than the second, and `FALSE` otherwise. Equivalent to the `>` operator. Learn more Operator GTE GTE(value1, value2) Returns `TRUE` if the first argument is greater than or equal to the second, and `FALSE` otherwise. Equivalent to the `>=` operator. Learn more Operator LT LT(value1, value2) Returns `TRUE` if the first argument is strictly less than the second, and `FALSE` otherwise. Equivalent to the `<` operator. Learn more Operator LTE LTE(value1, value2) Returns `TRUE` if the first argument is less than or equal to the second, and `FALSE` otherwise. Equivalent to the `<=` operator. Learn more Operator MINUS MINUS(value1, value2) Returns the difference of two numbers. Equivalent to the `-` operator. Learn more Operator MULTIPLY MULTIPLY(factor1, factor2) Returns the product of two numbers. Equivalent to the `*` operator. Learn more Operator NE NE(value1, value2) Returns `TRUE` if two specified values are not equal and `FALSE` otherwise. Equivalent to the `!=` operator. Learn more Operator POW POW(base, exponent) Returns a number raised to a power. Learn more Operator UMINUS UMINUS(value) Returns a number with the sign reversed. Learn more Operator UNARY_PERCENT UNARY_PERCENT(percentage) Returns a value interpreted as a percentage; that is, `UNARY_PERCENT(100)` equals `1`. Learn more Operator UPLUS UPLUS(value) Returns a specified number, unchanged.. Learn more Statistical AVEDEV AVEDEV(value1, value2) Calculates the average of the magnitudes of deviations of data from a dataset's mean. Learn more Statistical AVERAGE AVERAGE(value1, value2) Returns the numerical average value in a dataset, ignoring text. Learn more Statistical AVERAGEA AVERAGEA(value1, value2) Returns the numerical average value in a dataset. Learn more Statistical AVERAGEIF AVERAGEIF(criteria_range, criterion, Returns the average of a range depending on criteria. Only available in the new Google Sheets. Learn more Statistical AVERAGEIFS AVERAGEIFS(average_range, criteria_range1, Returns the average of a range depending on multiple criteria. Only available in the new Google Sheets. Learn more criterion1, [criteria_range2, criterion2, ...]) BINOMDIST(num_successes, num_trials, Calculates the probability of drawing a certain number of successes (or a maximum number of successes) in a certain number Statistical BINOMDIST prob_success, cumulative) of tries given a population of a certain size containing a certain number of successes, with replacement of draws. Learn Statistical CONFIDENCE CONFIDENCE(alpha, standard_deviation, pop_size) Calculates the width of half the confidence interval for a normal distribution. Learn more Statistical CORREL CORREL(data_y, data_x) Calculates r, the Pearson product-moment correlation coefficient of a dataset. Learn more Statistical COUNT COUNT(value1, value2) Returns the a count of the number of numeric values in a dataset. Learn more Statistical COUNTA COUNTA(value1, value2) Returns the a count of the number of values in a dataset. Learn more Statistical COVAR COVAR(data_y, data_x) Calculates the covariance of a dataset. Learn more Statistical CRITBINOM CRITBINOM(num_trials, prob_success, Calculates the smallest value for which the cumulative binomial distribution is greater than or equal to a specified target_prob) criteria. Learn more Statistical DEVSQ DEVSQ(value1, value2) Calculates the sum of squares of deviations based on a sample. Learn more Statistical EXPONDIST EXPONDIST(x, lambda, cumulative) Returns the value of the exponential distribution function with a specified lambda at a specified value. Learn more Statistical FISHER FISHER(value) Returns the Fisher transformation of a specified value. Learn more Statistical FISHERINV FISHERINV(value) Returns the inverse Fisher transformation of a specified value. Learn more Statistical FORECAST FORECAST(x, data_y, data_x) Calculates the expected y-value for a specified x based on a linear regression of a dataset. Learn more Statistical GEOMEAN GEOMEAN(value1, value2) Calculates the geometric mean of a dataset. Learn more Statistical HARMEAN HARMEAN(value1, value2) Calculates the harmonic mean of a dataset. Learn more Statistical HYPGEOMDIST HYPGEOMDIST(num_successes, num_draws, Calculates the probability of drawing a certain number of successes in a certain number of tries given a population of a successes_in_pop, pop_size) certain size containing a certain number of successes, without replacement of draws. Learn more Statistical INTERCEPT INTERCEPT(data_y, data_x) Calculates the y-value at which the line resulting from linear regression of a dataset will intersect the y-axis (x=0). Learn more Statistical KURT KURT(value1, value2) Calculates the kurtosis of a dataset, which describes the shape, and in particular the "peakedness" of that dataset. Learn Statistical LARGE LARGE(data, n) Returns the nth largest element from a data set, where n is user-defined. Learn more Statistical LOGINV LOGINV(x, mean, standard_deviation) Returns the value of the inverse log-normal cumulative distribution with given mean and standard deviation at a specified value. Learn more Statistical LOGNORMDIST LOGNORMDIST(x, mean, standard_deviation) Returns the value of the log-normal cumulative distribution with given mean and standard deviation at a specified value. Learn more Statistical MAX MAX(value1, value2) Returns the maximum value in a numeric dataset. Learn more Statistical MAXA MAXA(value1, value2) Returns the maximum numeric value in a dataset. Learn more Statistical MEDIAN MEDIAN(value1, value2) Returns the median value in a numeric dataset. Learn more Statistical MIN MIN(value1, value2) Returns the minimum value in a numeric dataset. Learn more Statistical MINA MINA(value1, value2) Returns the minimum numeric value in a dataset. Learn more Statistical MODE MODE(value1, value2) Returns the most commonly occurring value in a dataset. Learn more Statistical NEGBINOMDIST NEGBINOMDIST(num_failures, num_successes, Calculates the probability of drawing a certain number of failures before a certain number of successes given a prob_success) probability of success in independent trials. Learn more Statistical NORMDIST NORMDIST(x, mean, standard_deviation, Returns the value of the normal distribution function (or normal cumulative distribution function) for a specified value, cumulative) mean, and standard deviation. Learn more Statistical NORMINV NORMINV(x, mean, standard_deviation) Returns the value of the inverse normal distribution function for a specified value, mean, and standard deviation. Learn Statistical NORMSDIST NORMSDIST(x) Returns the value of the standard normal cumulative distribution function for a specified value. Learn more Statistical NORMSINV NORMSINV(x) Returns the value of the inverse standard normal distribution function for a specified value. Learn more Statistical PEARSON PEARSON(data_y, data_x) Calculates r, the Pearson product-moment correlation coefficient of a dataset. Learn more Statistical PERCENTILE PERCENTILE(data, percentile) Returns the value at a given percentile of a dataset. Learn more Statistical PERCENTRANK PERCENTRANK(data, value, [significant_digits]) Returns the percentage rank (percentile) of a specified value in a dataset. Learn more Statistical PERCENTRANK.EXC PERCENTRANK.EXC(data, value, Returns the percentage rank (percentile) from 0 to 1 exclusive of a specified value in a dataset. Only available in the [significant_digits]) new Google Sheets. Learn more Statistical PERCENTRANK.INC PERCENTRANK.INC(data, value, Returns the percentage rank (percentile) from 0 to 1 inclusive of a specified value in a dataset. Only available in the [significant_digits]) new Google Sheets. Learn more Statistical PERMUT PERMUT(n, k) Returns the number of ways to choose some number of objects from a pool of a given size of objects, considering order. Learn more Statistical POISSON POISSON(x, mean, cumulative) Returns the value of the Poisson distribution function (or Poisson cumulative distribution function) for a specified value and mean. Learn more Statistical PROB PROB(data, probabilities, low_limit, Given a set of values and corresponding probabilities, calculates the probability that a value chosen at random falls high_limit) between two limits. Learn more Statistical QUARTILE QUARTILE(data, quartile_number) Returns a value nearest to a specified quartile of a dataset. Learn more Statistical RANK RANK(value, data, is_ascending) Returns the rank of a specified value in a dataset. Learn more Statistical RANK.AVG RANK.AVG(value, data, [is_ascending]) Returns the rank of a specified value in a dataset. If there is more than one entry of the same value in the dataset, the average rank of the entries will be returned. Only available in the new Google Sheets. Learn more Statistical RANK.EQ RANK.EQ(value, data, [is_ascending]) Returns the rank of a specified value in a dataset. If there is more than one entry of the same value in the dataset, the top rank of the entries will be returned. Only available in the new Google Sheets. Learn more Statistical RSQ RSQ(data_y, data_x) Calculates the square of r, the Pearson product-moment correlation coefficient of a dataset. Learn more Statistical SKEW SKEW(value1, value2) Calculates the skewness of a dataset, which describes the symmetry of that dataset about the mean. Learn more Statistical SLOPE SLOPE(data_y, data_x) Calculates the slope of the line resulting from linear regression of a dataset. Learn more Statistical SMALL SMALL(data, n) Returns the nth smallest element from a data set, where n is user-defined. Learn more Statistical STANDARDIZE STANDARDIZE(value, mean, standard_deviation) Calculates the normalized equivalent of a random variable given mean and standard deviation of the distribution. Learn Statistical STDEV STDEV(value1, value2) Calculates an estimate of standard deviation based on a sample. Learn more Statistical STDEVA STDEVA(value1, value2) Calculates an estimate of standard deviation based on a sample, setting text to the value `0`. Learn more Statistical STDEVP STDEVP(value1, value2) Calculates an estimate of standard deviation based on an entire population. Learn more Statistical STDEVPA STDEVPA(value1, value2) Calculates an estimate of standard deviation based on an entire population, setting text to the value `0`. Learn more Statistical STEYX STEYX(data_y, data_x) Calculates the standard error of the predicted y-value for each x in the regression of a dataset. Learn more Statistical TDIST TDIST(x, degrees_freedom, tails) Calculates the probability for Student's t-distribution with a given input (x). Only available in the new Google Sheets. Learn more Statistical TRIMMEAN TRIMMEAN(data, exclude_proportion) Calculates the mean of a dataset excluding some proportion of data from the high and low ends of the dataset. Learn more Statistical VAR VAR(value1, value2) Calculates an estimate of variance based on a sample. Learn more Statistical VARA VARA(value1, value2) Calculates an estimate of variance based on a sample, setting text to the value `0`. Learn more Statistical VARP VARP(value1, value2) Calculates an estimate of variance based on an entire population. Learn more Statistical VARPA VARPA(value1, value2) Calculates an estimate of variance based on an entire population, setting text to the value `0`. Learn more Statistical WEIBULL WEIBULL(x, shape, scale, cumulative) Returns the value of the Weibull distribution function (or Weibull cumulative distribution function) for a specified shape and scale. Learn more Statistical ZTEST ZTEST(data, value, standard_deviation) Returns the two-tailed P-value of a Z-test with standard distribution. Learn more Text ARABIC ARABIC(roman_numeral) Computes the value of a Roman numeral. Learn more Text CHAR CHAR(table_number) Convert a number into a character according to the current Unicode table. Learn more Text CLEAN CLEAN(text) Returns the text with the non-printable ASCII characters removed. Only available in the new Google Sheets. Learn more Text CODE CODE(string) Returns the numeric Unicode map value of the first character in the string provided. Learn more Text CONCATENATE CONCATENATE(string1, string2) Appends strings to one another. Learn more Text DOLLAR DOLLAR(number, number_of_places) Formats a number into the locale-specific currency format. Learn more Text EXACT EXACT(string1, string2) Tests whether two strings are identical. Learn more Text FIND FIND(search_for, text_to_search, starting_at) Returns the position at which a string is first found within text. Learn more Text FINDB FINDB(search_for, text_to_search, Returns the position at which a string is first found within text counting each double-character as 2. Only available in [starting_at]) the new Google Sheets. Learn more Text FIXED FIXED(number, number_of_places, Formats a number with a fixed number of decimal places. Learn more Text JOIN JOIN(delimiter, value_or_array1, Concatenates the elements of one or more one-dimensional arrays using a specified delimiter. Learn more Text LEFT LEFT(string, number_of_characters) Returns a substring from the beginning of a specified string. Learn more Text LEN LEN(text) Returns the length of a string. Learn more Text LOWER LOWER(text) Converts a specified string to lowercase. Learn more Text MID MID(string, starting_at, extract_length) Returns a segment of a string. Learn more Text PROPER PROPER(text_to_capitalize) Capitalizes each word in a specified string. Learn more Text REGEXEXTRACT REGEXEXTRACT(text, regular_expression) Extracts matching substrings according to a regular expression. Learn more Text REGEXMATCH REGEXMATCH(text, regular_expression) Whether a piece of text matches a regular expression. Learn more Text REGEXREPLACE REGEXREPLACE(text, regular_expression, Replaces part of a text string with a different text string using regular expressions. Learn more Text REPLACE REPLACE(text, position, length, new_text) Replaces part of a text string with a different text string. Learn more Text REPT REPT(text_to_repeat, number_of_repetitions) Returns specified text repeated a number of times. Learn more Text RIGHT RIGHT(string, number_of_characters) Returns a substring from the end of a specified string. Learn more Text ROMAN ROMAN(number, rule_relaxation) Formats a number in Roman numerals. Learn more Text SEARCH SEARCH(search_for, text_to_search, starting_at) Returns the position at which a string is first found within text. Learn more Text SEARCHB SEARCHB(search_for, text_to_search, Returns the position at which a string is first found within text counting each double-character as 2. Only available in [starting_at]) the new Google Sheets. Learn more Text SPLIT SPLIT(text, delimiter, split_by_each) Divides text around a specified character or string, and puts each fragment into a separate cell in the row. Learn more Text SUBSTITUTE SUBSTITUTE(text_to_search, search_for, Replaces existing text with new text in a string. Learn more replace_with, occurrence_number) Text T T(value) Returns string arguments as text. Learn more Text TEXT TEXT(number, format) Converts a number into text according to a specified format. Learn more Text TRIM TRIM(text) Removes leading and trailing spaces in a specified string. Learn more Text UPPER UPPER(text) Converts a specified string to uppercase. Learn more Text VALUE VALUE(text) Converts a string in any of the date, time or number formats that Google Sheets understands into a number. Learn more Database DAVERAGE DAVERAGE(database, field, criteria) Returns the average of a set of values selected from a database table-like array or range using a SQL-like query. Learn Database DCOUNT DCOUNT(database, field, criteria) Counts numeric values selected from a database table-like array or range using a SQL-like query. Learn more Database DCOUNTA DCOUNTA(database, field, criteria) Counts values, including text, selected from a database table-like array or range using a SQL-like query. Learn more Database DGET DGET(database, field, criteria) Returns a single value from a database table-like array or range using a SQL-like query. Learn more Database DMAX DMAX(database, field, criteria) Returns the maximum value selected from a database table-like array or range using a SQL-like query. Learn more Database DMIN DMIN(database, field, criteria) Returns the minimum value selected from a database table-like array or range using a SQL-like query. Learn more Database DPRODUCT DPRODUCT(database, field, criteria) Returns the product of values selected from a database table-like array or range using a SQL-like query. Learn more Database DSTDEV DSTDEV(database, field, criteria) Returns the standard deviation of a population sample selected from a database table-like array or range using a SQL-like query. Learn more Database DSTDEVP DSTDEVP(database, field, criteria) Returns the standard deviation of an entire population selected from a database table-like array or range using a SQL-like query. Learn more Database DSUM DSUM(database, field, criteria) Returns the sum of values selected from a database table-like array or range using a SQL-like query. Learn more Database DVAR DVAR(database, field, criteria) Returns the variance of a population sample selected from a database table-like array or range using a SQL-like query. Learn more Database DVARP DVARP(database, field, criteria) Returns the variance of an entire population selected from a database table-like array or range using a SQL-like query. Learn more Parser TO_DATE TO_DATE(value) Converts a provided number to a date. Learn more Parser TO_DOLLARS TO_DOLLARS(value) Converts a provided number to a dollar value. Learn more Parser TO_PERCENT TO_PERCENT(value) Converts a provided number to a percentage. Learn more Parser TO_PURE_NUMBER TO_PURE_NUMBER(value) Converts a provided date/time, percentage, currency or other formatted numeric value to a pure number without formatting. Learn more Parser TO_TEXT TO_TEXT(value) Converts a provided numeric value to a text value. Learn more Array ARRAY_CONSTRAIN ARRAY_CONSTRAIN(input_range, num_rows, Constrains an array result to a specified size. Only available in the new Google Sheets. Learn more Array ARRAY_LITERAL ARRAY_LITERAL(input_arrays, ...) Returns an array with M rows and N columns from M 1xN arrays. Only available in the new Google Sheets. Learn more Array ARRAY_ROW ARRAY_ROW(elements, ...) Returns an array made up of the elements passed in as arguments. Only available in the new Google Sheets. Learn more Array EXPAND EXPAND(array_formula) Forces the automatic expansion of array formula output as the output size grows. Learn more Array FREQUENCY FREQUENCY(data, classes) Calculates the frequency distribution of a one-column array into specified classes. Learn more Array GROWTH GROWTH(known_data_y, known_data_x, new_data_x, Given partial data about an exponential growth trend, fits an ideal exponential growth trend and/or predicts further b) values. Learn more Array LINEST LINEST(known_data_y, known_data_x, b, verbose) Given partial data about a linear trend, calculates various parameters about the ideal linear trend using the least-squares method. Learn more Array LOGEST LOGEST(known_data_y, known_data_x, b, verbose) Given partial data about an exponential growth curve, calculates various parameters about the best fit ideal exponential growth curve. Learn more Array MDETERM MDETERM(square_matrix) Returns the matrix determinant of a square matrix specified as an array or range. Learn more Array MINVERSE MINVERSE(square_matrix) Returns the multiplicative inverse of a square matrix specified as an array or range. Learn more Array MMULT MMULT(matrix1, matrix2) Calculates the matrix product of two matrices specified as arrays or ranges. Learn more Array NOEXPAND NOEXPAND(array_formula) Prevents the automatic expansion of array formula output as the output size grows. Learn more Array SUMPRODUCT SUMPRODUCT(array1, array2) Calculates the sum of the products of corresponding entries in two equal-sized arrays or ranges. Learn more Array SUMX2MY2 SUMX2MY2(array_x, array_y) Calculates the sum of the differences of the squares of values in two arrays. Learn more Array SUMX2PY2 SUMX2PY2(array_x, array_y) Calculates the sum of the sums of the squares of values in two arrays. Learn more Array SUMXMY2 SUMXMY2(array_x, array_y) Calculates the sum of the squares of differences of values in two arrays. Learn more Array TRANSPOSE TRANSPOSE(array_or_range) Transposes the rows and columns of an array or range of cells. Learn more Array TREND TREND(known_data_y, known_data_x, new_data_x, Given partial data about a linear trend, fits an ideal linear trend using the least squares method and/or predicts further b) values. Learn more
{"url":"https://support.google.com/drive/table/25273?hl=en&rd=2","timestamp":"2014-04-17T21:44:19Z","content_type":null,"content_length":"129035","record_id":"<urn:uuid:32ecf5ac-b3df-4970-9c13-8bb47949a825>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Length Contraction in the Muon Experiment Registered Senior Member Well then, lets define the length of the known universe as 30b ly in the Earth frame (15b ly in each direction). This makes the length of the known universe 6b ly (3b ly in each direction) in the Muon frame as defined from the Muon experiment. So light from the edge of the universe takes 15b yrs to arrive to our point in space according to the Earth and this same light takes 3b yrs to arrive to our point in space according to the And what do you conclude? Registered Senior Member And what do you conclude? According to our Muon frame, how much time has elapsed on Earth if 3b yrs have elapsed in our (Muon) frame? James R}There are no "empty space" distances[/quote You must know that I am going to bring up the aburdity of this statement. You might as well forget trying to preach your point of view to me. I am quite serious. How is a distance meaningful if it is not a distance between two objects? Can you give me an example of a distance which is NOT between two objects, other than an empty statement like "3 kilometres" (3 kilometres where? What for? According to whom?) Registered Senior Member James R I am quite serious. How is a distance meaningful if it is not a distance between two objects? Take a distance from a slice of spacetime. That is, freeze time - pick two arbitrary points in one frame, get the distance between them, go to the other frame, the arbitrary points are no longer arbitrary as they have 1-to-1 correspondence in actual reality in this frame. That is if we place objects at these two points, both frame can see the two objects. Just because two objects don't exist there doesn't mean there isn't a distance between these two points. You must know that I am going to bring up the aburdity of this statement. You might as well forget trying to preach your point of view to me. I am quite serious. How is a distance meaningful if it is not a distance between two objects? Can you give me an example of a distance which is NOT between two objects, other than an empty statement like "3 kilometres" (3 kilometres where? What for? According to whom?) It's a fascinating point that one rarely, if ever, consciously examines. What is a distance without two objects? The only example I can think of is points in a geometric plane or space - an abstraction. Fascinating. Analysis time: Events of interest (coordinates given for Earth frame): O - Earth-muon collision ( I'll aribtrarily set this to be the Origin: x = 0, t = 0) A - Earth at the beginning of time (x = 0, t = -15b yr) B - light emitted at the beginning of time at the -x end of the known Universe (x = -15b ly, t = -15b yr) C - light emitted at the beginning of time at the +x end of the known Universe (x = 15b ly, t = -15b yr) D - muon at the beginning of time (x = -14.7b ly, t = -15b yr) Note that I'm assuming that time began across the known Universe 15 billion years ago in Earth's frame (you see where this is going already, don't you?) Now I'm going to transform these events to the Muon frame. I'm just plugging numbers into the Lorentz transform, with v = 0.98c. Muon frame: O: x' = 0, t' = 0 A: x' = 73.5b ly, t' = -75b yr B: x' = -1.5b ly, t' = -1.5b yr C: x' = 148.5b ly, t' = 148.5by D: x' = 0, t' = -3b yr You might like to plot these points on a graph. Conclusion of the analysis: According to the muon: The known universe is 150b ly across.** Time began 3b yrs ago for the muon. Time began 75b yrs ago on Earth. Time began 1.5b yrs ago at the -x end of the known Universe. Time began 148.5b yrs ago at the +x end of the known Universe. **I didn't expect this! This is why I rely on the Lorentz transform, rather than trying to use length contraction, time dilation, and relative simultaneity piecemeal. Things to cogitate on: What is the "end of the known universe", really? It is a place now, or a past event? It's a fascinating point that one rarely, if ever, consciously examines. What is a distance without two objects? The only example I can think of is points in a geometric plane or space - an abstraction. Fascinating. I know what you mean! I posted my thoughts on this topic a bit earlier, deep in the heart of the highly stimulating discussion with Aer: What is length, under the umbrella of SR? It's the magnitude of the spacetime interval between two simultaneous events. According to this view, length isn't directly transferrable between frames - To transfer a length from one frame to another, you have to change one or both events so that they are simultaneous in the new frame. This is OK, as long as you make sure that the new event(s) have the same spatial coordinates in the original frame as the old events. THis is easy and natural if you have an object at rest in the first frame to mark the spatial coordinates. Registered Senior Member Analysis time: B - light emitted at the beginning of time at the -x end of the known Universe (x = -15b ly, t = -15b yr) C - light emitted at the beginning of time at the +x end of the known Universe (x = 15b ly, t = -15b yr) B: x' = -1.5b ly, t' = -1.5b yr C: x' = 148.5b ly, t' = 148.5by Assuming your assumptions leading to your lorentz transformations and calculations are correct (I checked neither yet) then we have a very interesting result above. The same light hitting the Earth can be presumed to be hitting the muon when the Earth and muon collide. Yet it took different distances and different time intervals for the light to reach the muon/earth from different And I am much too tired to be doing any analysis of mathematical equations Take a distance from a slice of spacetime. That is, freeze time - pick two arbitrary points in one frame, get the distance between them, go to the other frame, the arbitrary points are no longer arbitrary as they have 1-to-1 correspondence in actual reality in this frame. Frames, spacetime. Sounds very abstract to me. So, you define two points in spacetime, and you denote the distance between them to be the spatial separation at one instant of time. Right? Then you switch frames. How do you now define the distance between the same two points? Are the points even the same any more? How do you know where your points are? By switching frames, didn't you exchange one spacetime for another? At the very least, it seems to me you swapped a bit of space for a bit of time and vice versa. What I am saying is that while coordinates in spacetimes can change, all observers agree on the existence or non-existence of objects in spacetime. That is if we place objects at these two points, both frame can see the two objects. Just because two objects don't exist there doesn't mean there isn't a distance between these two points. But by specifying two points in the first place, you are presupposing an observer. And when it comes down to it, all real observers are also objects. Yes Pete. I remember that post. It's the magnitude of the spacetime interval between two simultaneous events. I committed your phrase to memory - I really like it! Thanks, super - that's high praise! Registered Senior Member James R At the very least, it seems to me you swapped a bit of space for a bit of time and vice versa. Yes, maybe this is true. However, if we send a signal to a moving object to record its observation and can calculate when the moving object will recieve this signal to "record" at that split time interval, then can we not just record what we see at that calculated time that the moving object recieves the signal? Of course I see the relativitity of simultaneity rearing its ugly ass again James R What I am saying is that while coordinates in spacetimes can change, all observers agree on the existence or non-existence of objects in spacetime. Let's leave the discussion to where it exists with Pete and I right now. We all moved on from this discussion long ago - and to be honest, I keep forgetting what I am replying to in this Registered Senior Member Does anyone wish to verify Pete's results: Analysis time: B - light emitted at the beginning of time at the -x end of the known Universe (x = -15b ly, t = -15b yr) C - light emitted at the beginning of time at the +x end of the known Universe (x = 15b ly, t = -15b yr) B - light emitted at the beginning of time at the -x end of the known Universe (x = -15b ly, t = -15b yr) C - light emitted at the beginning of time at the +x end of the known Universe (x = 15b ly, t = -15b yr) B: x' = -1.5b ly, t' = -1.5b yr C: x' = 148.5b ly, t' = 148.5by Assuming your assumptions leading to your lorentz transformations and calculations are correct (I checked neither yet) then we have a very interesting result above. The same light hitting the Earth can be presumed to be hitting the muon when the Earth and muon collide. Yet it took different distances and different time intervals for the light to reach the muon/earth from different Yep - the "beginning of time" was simultaneous in the Earth frame, but not the muon frame. I think this means that assuming a universal instant (the beginning of time) defines a universally observable reference frame. An observer on Earth can tell that they are at rest in the Universe's "creation rest frame", while an observer with the muon can tell that they are moving at 0.98c relative to that frame. This sounds kind of like the CMBR rest frame - we can tell by looking at the CMBR that we're moving at 600km/s relative to some Universally observable reference frame. to be honest, I keep forgetting what I am replying to in this thread I hear you... it's been entertaining, but exhausting Registered Senior Member Yep - the "beginning of time" was simultaneous in the Earth frame, but not the muon frame. It seems time could have never have began as no one can agree on when it happened. I think this means that assuming a universal instant (the beginning of time) defines a universally observable reference frame. Come again? I rather like my Earth reference frame - Einstein told me it was perfectly OK to use. An observer on Earth can tell that they are at rest in the Universe's "creation rest frame", while an observer with the muon can tell that they are moving at 0.98c relative to that frame. This sounds kind of like the CMBR rest frame - we can tell by looking at the CMBR that we're moving at 600km/s relative to some Universally observable reference frame. If all you said above is true, we should have some light coming from one direction that took longer to get here than light from the opposite direction, no? Registered Senior Member And how did the length of the known universe expand in the muon's frame when we agreed that the muon's frame was length contracted? Are you sure your calculations are correct? I certainly wouldn't be disappointed as this is nonsense, but If all you said above is true, we should have some light coming from one direction that took longer to get here than light from the opposite direction, no? Yes! (unless GR says something different about it) But not by much. The relativistic gamma factor for 600km/s is only 1.000002 And how did the length of the known universe expand in the muon's frame when we agreed that the muon's frame was length contracted? Apparently we were wrong. I'm not completely sure how this pans out as far as length contraction goes... length contraction is hard to follow without objects with length to track. Are you sure your calculations are correct? I checked them twice... I've been looking for an online Lorentz transform calculator, but no joy. But, it's not hard to do. In our case with the units we're using, v = 0.98, gamma = 5, c = 1, so: x' = 5(x - 0.98 t) t' = 5(t - 0.98 x) I certainly wouldn't be disappointed as this is nonsense Are you sure?
{"url":"http://www.sciforums.com/showthread.php?47577-Length-Contraction-in-the-Muon-Experiment/page17","timestamp":"2014-04-19T17:05:06Z","content_type":null,"content_length":"120129","record_id":"<urn:uuid:67fc4e87-1a05-4153-adf0-5dea64bacaa1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Interpretation of Two-sample t test with equal variances? Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: Interpretation of Two-sample t test with equal variances? From Nick Cox <njcoxstata@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Interpretation of Two-sample t test with equal variances? Date Wed, 20 Mar 2013 14:33:39 +0000 In much the same spirit as earlier suggestions: The mean ages given were 28.8 and 29.4 (presumably years) for the two classes. That sounds like a difference without clinical significance, although I am no clinician, not a woman, and not even significant. However, it is also likely that the means are hiding important details in the distributions. For example, I would expect skewed distributions for mothers' ages -- and the skewness I might guess to differ between the two modes of delivery. General knowledge underlines a range from <<20 to >50 years. Although I have much faith that Student's t test works well even if you lie to it, skewness sounds like an area for investigation. My gut instinct is that turning the problem round to make it a logit regression on age makes much more sense. I would use a fractional polynomial or cubic spline in age and always plot some smooth summary of one or other fraction (e.g. fraction C or fraction V) versus age. On Wed, Mar 20, 2013 at 2:02 PM, David Hoaglin <dchoaglin@gmail.com> wrote: > Gwinyai, > In your first message you posed the question of whether the mode of > delivery depended on (or was related to) mother's age. The logistic > regression is an appropriate way to approach that question. The > output says that, in your data, the odds of a C/section increase with > mother's age, but the rate of increase does not differ significantly > from zero. That is, the risk of a C/section is not related to > mother's age. > You may want to do a little diagnostic checking, to make sure that the > logit model is a satisfactory summary of your data. You could split > the age range into intervals (with a reasonable total sample size in > each interval), and calculate the percentage of C/sections in each > category. Does either group of mothers contain any unusually low or > unusually high ages? > I hope this discussion is helpful. > David Hoaglin > On Wed, Mar 20, 2013 at 1:04 AM, Gwinyai Masukume > <parturitions@gmail.com> wrote: >> Thank you Richard. Yes, I guess the t-test suggests the counter >> intuitive though it probably won’t change things much. >> How can I reverse the situation? >> I ran a logistic regression for binary outcomes as you suggested: >> Essentially no significance is shown? >> . logit mode_delivery age >> Iteration 0: log likelihood = -159.58665 >> Iteration 1: log likelihood = -159.34203 >> Iteration 2: log likelihood = -159.34197 >> Iteration 3: log likelihood = -159.34197 >> Logistic regression Number of obs = 250 >> LR chi2(1) = 0.49 >> Prob > chi2 = 0.4842 >> Log likelihood = -159.34197 Pseudo R2 = 0.0015 >> ------------------------------------------------------------------------------- >> mode_delivery | Coef. Std. Err. z P>|z| [95% Conf. Interval] >> --------------+---------------------------------------------------------------- >> age | .0155454 .0222368 0.70 0.485 -.028038 .0591288 >> _cons | -1.133737 .6630978 -1.71 0.087 -2.433385 .1659111 >> ------------------------------------------------------------------------------- * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/faqs/resources/statalist-faq/ * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2013-03/msg00856.html","timestamp":"2014-04-16T16:49:47Z","content_type":null,"content_length":"12268","record_id":"<urn:uuid:5cee2013-22c7-475d-8b8e-3db15778a59a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
sketch an approximate solution curve through each of the points... February 5th 2011, 10:11 PM #1 Senior Member Aug 2009 sketch an approximate solution curve through each of the points... by hand sketch an approximate solution curve through each of the points. ok so the function is: y(dy/dx) = -x so, (dy/dx) = -x/y at the point y(1)= 1 and y(0) = 4 so at the point 1,1 the slope is -1, so my dash is going up and to the left? is this all im doing with this problem or am i missing something? thanks in advance. It's well known that circles have a derivative of $\displaystyle -\frac{x}{y}$. You can check this by solving the DE... $\displaystyle y\,\frac{dy}{dx} = -x$ $\displaystyle \int{y\,\frac{dy}{dx}\,dx} = \int{-x\,dx}$ $\displaystyle \int{y\,dy} = -\frac{x^2}{2} + C_1$ $\displaystyle \frac{y^2}{2} +C_2 = \frac{x^2}{2} + C_1$ $\displaystyle \frac{x^2}{2} + \frac{y^2}{2} = C_1 - C_2$ $\displaystyle x^2 + y^2 = r^2$ where $\displaystyle r^2 = 2(C_1 - C_2)$. Now you can use your boundary conditions to evaluate $\displaystyle r^2$. February 5th 2011, 10:19 PM #2
{"url":"http://mathhelpforum.com/differential-equations/170314-sketch-approximate-solution-curve-through-each-points.html","timestamp":"2014-04-24T10:48:31Z","content_type":null,"content_length":"35499","record_id":"<urn:uuid:29e18b15-5441-476b-9c0a-9a2fef3036e1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
[plt-scheme] Major math screwiness going on with fractions! From: Danny Yoo (dyoo at hkn.eecs.berkeley.edu) Date: Fri Jan 26 19:44:41 EST 2007 Hi everyone, I was tracking down a problem in a program I was running involving numeric comparisons. I've isolated it down to this: dyoo at dyoo-desktop:~$ mzscheme Welcome to MzScheme v369.6 [3m], Copyright (c) 2004-2007 PLT Scheme Inc. > (< 1 (/ 1 4)) After recovering from laughter, I looked at it further. This bug appears to affect both mzscheme and mzschemecgc. It's not a JIT problem: dyoo at dyoo-desktop:~/local/plt-svn/src/mzscheme$ PLTNOMZJIT=1 mzscheme Welcome to MzScheme v369.6 [3m], Copyright (c) 2004-2007 PLT Scheme Inc. > (< 1 (/ 1 4)) It appears to have to do with rationals: if I convert down to inexacts, things work out as expected. > (< 1 (exact->inexact (/ 1 4))) It's in the logic of rational_lt in src/mzscheme/rational.c; some of the fast-path logic looks very suspicious to me, but I haven't put my finger on it yet. If I comment out the fast case logic, the bug's fixed, so I know there's some something funky going on in the block here: /* Avoid multiplication in simple cases: */ if (scheme_bin_lt_eq(ra->num, rb->num) && scheme_bin_gt_eq(ra->num, rb->num)) { if (!or_eq) { if (scheme_rational_eq(a, b)) return 0; return 1; } else if (or_eq) { if (scheme_rational_eq(a, b)) return 1; I've commented it out in my own repository. I have to go grab dinner right now, but hopefully someone can fix this? It looks like it got introduced yesterday or so in r5455. Anyway, hope this helps! Posted on the users mailing list.
{"url":"http://lists.racket-lang.org/users/archive/2007-January/016214.html","timestamp":"2014-04-20T10:52:35Z","content_type":null,"content_length":"7283","record_id":"<urn:uuid:cda597d1-ee74-4f67-ba34-27886d85fc70>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
checker plate floor loading JStephen (Mechanical) 17 Apr 07 13:17 I think to get Roark's formula to work, you'll need dimensions in inches (to match PSI). If I remember correctly, don't the building codes define what a "concentrated load" is? It's not a point load, it's something like 2' square (ie, space a person occupies) and that will make a sizable difference in your case. And that would put you closer to formula 1c rather than 1b in Roark's book.
{"url":"http://www.eng-tips.com/viewthread.cfm?qid=184426","timestamp":"2014-04-17T18:34:36Z","content_type":null,"content_length":"33628","record_id":"<urn:uuid:ce28cd32-e56f-4556-a099-4b519223e6dd>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Looped inset plot Replies: 9 Last Post: Jan 18, 2013 5:23 PM Messages: [ Previous | Next ] dpb Re: Looped inset plot Posted: Jan 16, 2013 2:10 PM Posts: 7,867 Registered: 6/7/07 On 1/16/2013 1:01 PM, dpb wrote: > Play around w/ the following at some point when you've got some data... > >> figure > >> h1=axes; % create a main axes object > >> plot(h1,x,y,'*') % plot into it > >> h2=axes('pos',[.55 .25 .30 .50]); % now an inset set of axes > >> boxplot(h2,y) % and the boxplot therein... > Now, each loop thru, pick the proper set of axes. Oh, meant to add--search the documentation section on 'Enhanced Plotting' or similar section title for animation of plots techniques. You'll be able to avoid the flicker of the update and speed it up as well by simply replacing the dataset in the plot() and I presume you can also do similar inside boxplot altho I've not done any handle-diving there to determine what that entails with it. Date Subject Author 1/16/13 dpb 1/16/13 dpb 1/16/13 dpb 1/18/13 dpb 1/18/13 dpb 1/18/13 dpb
{"url":"http://mathforum.org/kb/message.jspa?messageID=8078547","timestamp":"2014-04-18T00:16:55Z","content_type":null,"content_length":"22547","record_id":"<urn:uuid:f6fbe710-fba4-42b8-973a-b51751ac2439>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Log System of equations December 9th 2010, 03:20 PM Log System of equations I'm needing a little assistance with this problem, haven't any trouble until this. A village of 1000 inhabitants increases at a rate of 10% per year. A neighbouring village of 2000 inhabitants decreases at a rate of 5% per year. After how many years will these two villages have the same population? So I wrote them as two exponential functions and made them equal to each other. Not sure this is how I should do it but it feels right. Then by definition $C^x=y <==> log(y)=x$ Turned them into this then I know it can be turned into this. And now I'm stuck, help! December 9th 2010, 03:26 PM December 9th 2010, 03:33 PM December 10th 2010, 02:01 AM Using what definition? Taking the logarithm of both sides (any base) of $(1.1)^x= 2(0.95)^x$ you get x log(1.1)= x log(.95)+ log 2
{"url":"http://mathhelpforum.com/algebra/165837-log-system-equations-print.html","timestamp":"2014-04-21T07:19:42Z","content_type":null,"content_length":"7390","record_id":"<urn:uuid:70708227-4ae1-4f48-a7f0-3feb144b7c39>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
The streams package Various Haskell 2010 stream comonads. • Data.Stream.Branching provides an "f-Branching Stream" comonad, aka the cofree comonad, or generalized rose tree. data Stream f a = a :< f (Stream a) • Data.Stream.Future provides a coinductive anti-causal stream, or non-empty ZipList. The comonad provides access to only the tail of the stream. Like a conventional ZipList, this is not a monad. data Future a = Last a | a :< Future a • Data.Stream.Future.Skew provides a non-empty skew-binary random-access-list with the semantics of Data.Stream.Future. As with Data.Stream.Future this stream is not a Monad, since the Applicative instance zips streams of potentially differing lengths. The random-access-list structure provides a number of operations logarithmic access time, but makes Data.Stream.Future.Skew.cons less productive. Where applicable Data.Stream.Infinite.Skew may be more efficient, due to a lazier and more efficient Applicative instance. • Data.Stream.NonEmpty provides a non-empty list comonad where the Applicative and Monad work like those of the [a]. Being non-empty, it trades in the Alternative and Monoid instances of [a] for weaker append-based FunctorAlt and Semigroup instances while becoming a member of Comonad and ComonadApply. Acting like a list, the semantics of <*> and <.> take a cross-product of membership from both NonEmpty lists rather than zipping like a Future data NonEmpty a = a :| [a] • Data.Stream.Infinite provides a coinductive infinite anti-causal stream. The Comonad provides access to the tail of the stream and the Applicative zips streams together. Unlike Future, infinite stream form a Monad. The monad diagonalizes the Stream, which is consistent with the behavior of the Applicative, and the view of a Stream as a isomorphic to the reader monad from the natural numbers. Being infinite in length, there is no Alternative instance, but instead the FunctorAlt instance provides access to the Semigroup of interleaving streams. data Stream a = a :< Stream a • Data.Stream.Infinite.Skew provides an infinite skew-binary random-access-list with the semantics of Data.Stream.Infinite Since every stream is infinite, the Applicative instance can be considerably less strict than the corresponding instance for Data.Stream.Future.Skew and performs asymptotically better. • Data.Stream.Infinite.Functional.Zipper provides a bi-infinite sequence, represented as a pure function with an accumulating parameter added to optimize moving the current focus. data Zipper a = !Integer :~ (Integer -> a) Changes since 0.1: • A number of strictness issues with NonEmpty were fixed • More documentation Versions 0.1.1, 0.2, 0.3, 0.3.1, 0.4, 0.5.0, 0.5.1, 0.5.1.1, 0.5.1.2, 0.6.0, 0.6.0.1, 0.6.1.1, 0.6.1.2, 0.6.3, 0.7.0, 0.7.1, 0.7.2, 0.8.0, 0.8.0.1, 0.8.0.2, 0.8.0.3, 0.8.0.4, 0.8.1, 0.8.2, 3.0, 3.0.0.1, 3.0.1, 3.0.1.1, 3.1, 3.1.1, 3.2 Dependencies base (>=4 && <4.4), comonad (>=0.6.0 && <0.7), functor-apply (>=0.7.4 && <0.8), semigroups (>=0.3.2 && <0.4) License BSD3 Copyright Copyright 2011 Edward Kmett Copyright 2010 Tony Morris, Oliver Taylor, Eelis van der Weegen Copyright 2007-2010 Wouter Swierstra, Bas van Dijk Author Edward A. Kmett Maintainer Edward A. Kmett <ekmett@gmail.com> Stability provisional Category Control, Comonads Home page http://github.com/ekmett/streams Source head: git clone git://github.com/ekmett/streams.git Upload date Fri Jan 21 17:02:03 UTC 2011 Uploaded by EdwardKmett Downloads 1368 total (137 in last 30 days) Maintainers' corner For package maintainers and hackage trustees
{"url":"http://hackage.haskell.org/package/streams-0.2","timestamp":"2014-04-17T13:11:04Z","content_type":null,"content_length":"9505","record_id":"<urn:uuid:73b4b973-72b0-4e15-8253-5e6c6c4de4f6>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Federal Reserve Bank of Cleveland Background and Research Calculating Inflation Expectations Using Two Kinds of Treasury Securities Inferring long-term inflation expectations from the data is difficult. One measure, obtained from the University of Michigan's Survey of Consumers, asks participants what they expect inflation to be over the next 5 to 10 years, but it consistently runs above other measures and is probably not very reliable. Another measure of long-term inflation expectations is the Survey of Professional Forecasters (SPF), which asks professional forecasters what they expect inflation to be over the next 10 years. The SPF appears to measure expected inflation well on average but appears to be excessively smooth and does not pick up week-by-week or even month-by-month variation in actual expected inflation. In theory, the yields on two different kinds of Treasury securities—nominal treasury notes and treasury inflation-protected securities (TIPS)—can be used to calculate a market-based estimate of expected inflation. Nominal treasury notes earn a fixed nominal rate of interest on a fixed amount of principal, whereas the principal of TIPS is adjusted for inflation (and thus so are the coupon payments). Because the return on nominal treasuries is vulnerable to inflation, it most assuredly contains compensation to investors for any losses they expect to incur from inflation if they hold the bond. TIPS therefore protect the bondholder from losses due to inflation, but nominal treasuries do not. Whereas nominal treasury notes earn a fixed nominal rate of interest, TIPS earn a fixed real rate of interest. In principle, one ought to be able to simply subtract the real yield on TIPS from the nominal yield of treasury notes of the same maturity to derive expected inflation. Yet even that measure is imperfect. The TIPS-derived measure of inflation expectations underestimates SPF expected inflation and on average actual expected inflation by 50 basis points. If we could be assured that the difference between TIPS expected inflation and actual expected inflation was constant, we could easily correct for this bias. But this bias is probably not constant. There are reasons to suspect that the difference between a nominal security and an inflation protected TIPS security may not correctly measure expected inflation. There are actually 2 different factors that cause TIPS to be a biased predictor of expected inflation: an inflation-risk premium and a liquidity premium. To make matters more difficult, these biases likely go in different directions. We attempt to correct for both of these biases. The existence of an inflation-risk premium suggests that TIPS expected inflation likely overestimates actual expected inflation. This occurs because variable inflation implies that the real return on nominal treasury securities is uncertain, while, by definition, a TIPS real return is constant. To compensate for this inflation risk, the real return on TIPS will be less than the average return on nominal bonds. Studies suggest that because of inflation risk, TIPS expected inflation will overestimate actual expected inflation by 50 to 100 basis points. Although this bias may not be constant over very long periods of time, monthly movements in the bias are not likely to be important. If the bias is nearly constant, correcting for it is straightforward. More difficult to correct for is the bias due to liquidity risk because it is likely not constant over time. While the TIPS market is deep, it is, nevertheless, less liquid than the market for nominal treasury securities. Because of this relative liquidity difference, a TIPS real return should be more than the real return on nominal government securities. That is, TIPS-derived expected inflation will underestimate actual expected inflation. This bias, however, does not appear to be constant and can change on a weekly basis. Preparing the Adjusted Series: Correcting for Illiquidity and Inflation Risk To correct for the relative illiquidity of TIPS, we need a proxy for liquidity risk and an unbiased (but not necessarily efficient) predictor of expected inflation. The adjustment to the unadjusted TIPS expected inflation series is based on the following assumptions: • While nominal treasuries are extremely liquid instruments, there is still a small liquidity premium priced into these bonds, and we assume that the liquidity premium in the TIPS market is correlated with the liquidity premium in the nominal treasuries market. • One measure of the liquidity premium in the nominal treasuries market is the difference in the yields on nominal treasuries in the primary and secondary markets. (The primary market refers to bonds bought directly from the Treasury at auction; bonds bought in this market are said to be bought "on the run." The secondary market refers to bonds bought from other investors—"off the run".) We assume that the liquidity risk for TIPS is larger than the risk of nominal treasuries bought in the secondary market because the TIPS market is less developed. • We assume that the difference observed between two measures of expected inflation—that reported in the Survey of Professional Forecasters and that derived from unadjusted TIPS yields—is largely driven by the liquidity risk. Furthermore, this liquidity risk does not affect the difference between actual and SPF expected inflation. The liquidity premium in the nominal treasuries market is calculated as the difference between the yields on 10-year on-the-run and off-the-run treasury notes. (The data are obtained from the Board of Governors of the Federal Reserve System.) Regressing the spread between SPF expected inflation and unadjusted TIPS-derived expected inflation on the liquidity premium results in the following Spread = 0.948 - 12.71(LP) + 20.9(LP)^2, (where LP = liquidity premium). The constant picks up the bias due to inflation risk, while rest of the equation picks up the bias due to liquidity. This equation demonstrates that if there were no liquidity risk in the nominal treasuries market, and thus no liquidity risk in the TIPS market, expected inflation derived from TIPS would overstate actual expected inflation by 95 basis points (0.95 percent). This overstatement is basically the size of the inflation risk predicted by earlier research, which lends credence to our correction method. As liquidity risk rises in the nominal treasuries market, liquidity risk in the TIPS market also rises, so that the unadjusted TIPS-derived expected inflation series understates actual expected inflation. We can, therefore, correct unadjusted TIPS-derived expected inflation by subtracting the spread, estimated using the equation above, and the current liquidity premium in the nominal treasuries market: Adjusted TIIS expected inflation = Unadjusted TIIS expected inflation - 0.9485 + 12.7(LP) - 20.9(LP)^2
{"url":"http://www.clevelandfed.org/Research/data/TIPS/bg.cfm?DCSext.nav=Local","timestamp":"2014-04-20T10:49:55Z","content_type":null,"content_length":"25582","record_id":"<urn:uuid:d959ee04-a69b-4cd4-b4bf-05bf4104ca32>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Duluth, GA SAT Math Tutor Find a Duluth, GA SAT Math Tutor I am planning to tutor only one or two students on a weekly basis. I would prefer to tutor children between the ages of 10 – 13 due to time constraints in my work schedule. I am Korean/American with a degree in Accounting (minor in Mathematics). I am currently the Director of Internal Audit for a... 23 Subjects: including SAT math, reading, algebra 1, accounting ...I taught two pre-university classes and managed the math program through the school. I came to the US to pursue my graduate studies in Operations Research and Economics. These studies have given me a great understanding of math and of economics where I received my master degree and an MBA in Management Science. 15 Subjects: including SAT math, calculus, French, algebra 2 ...Those who read your paper (whether it be a teacher or a college admissions representative) will not overlook those mistakes. I have passed the ACT quizzes that WyzAnt provides. I can have my scores sent to you if required. 46 Subjects: including SAT math, Spanish, English, biology ...This usually helps students form a framework, which is all they need to understand trig. SAT math can be broken down into only a few types of questions in a few subjects. Specifically there are abstract, disguised, or multi-step questions in algebra, geometry, or data interpretation. 17 Subjects: including SAT math, chemistry, physics, writing ...Several students have done very well on the AP test, scoring 5's on the AB and BC tests. I have been teaching Geometry for over 5 years now. I realize it can be a strange subject for many students, it's the first time many will encounter formal definitions, proofs and logic. 41 Subjects: including SAT math, reading, physics, writing Related Duluth, GA Tutors Duluth, GA Accounting Tutors Duluth, GA ACT Tutors Duluth, GA Algebra Tutors Duluth, GA Algebra 2 Tutors Duluth, GA Calculus Tutors Duluth, GA Geometry Tutors Duluth, GA Math Tutors Duluth, GA Prealgebra Tutors Duluth, GA Precalculus Tutors Duluth, GA SAT Tutors Duluth, GA SAT Math Tutors Duluth, GA Science Tutors Duluth, GA Statistics Tutors Duluth, GA Trigonometry Tutors Nearby Cities With SAT math Tutor Alpharetta SAT math Tutors Berkeley Lake, GA SAT math Tutors Buford, GA SAT math Tutors Doraville, GA SAT math Tutors Dunwoody, GA SAT math Tutors East Point, GA SAT math Tutors Johns Creek, GA SAT math Tutors Lawrenceville, GA SAT math Tutors Lilburn SAT math Tutors Milton, GA SAT math Tutors Norcross, GA SAT math Tutors Snellville SAT math Tutors Sugar Hill, GA SAT math Tutors Suwanee SAT math Tutors Tucker, GA SAT math Tutors
{"url":"http://www.purplemath.com/duluth_ga_sat_math_tutors.php","timestamp":"2014-04-16T19:34:58Z","content_type":null,"content_length":"23883","record_id":"<urn:uuid:4667d703-19c4-40eb-8007-44660f6cb8b3>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Welcome to OnlineConversion.com circle to sq to feet ?? circle to sq to feet ?? by E&H on 02/19/03 at 06:15:06 how do we fine sq foot of a circle ??? Re: circle to sq to feet ?? by Robert Fogt on 02/19/03 at 09:58:20 Try the Circle Solver page here: Though keep in mind, for the result to be in square feet, the input must be in feet. If you enter a diameter in inches, the result will be square inches. Re: circle to sq to feet ?? by Happykraut on 02/19/03 at 16:34:46 The formula is Pi r squared. Tough to write on a keyboard. Pi=3.14159 and r=radius of circle in feet. Or 3.14159x(radiusx radius). The radius is half of the diameter. Re: circle to sq to feet ?? by swp on 04/12/04 at 12:49:36 A friend told me an easy way to remember how to calculate the area of a circle. Most pies are round but, if the [color=Red]pie are square[/color], it would be easy! pi x r squared = area Go Back | Archive Index
{"url":"http://www.onlineconversion.com/forum/forum_1045664116.htm","timestamp":"2014-04-17T00:50:26Z","content_type":null,"content_length":"9034","record_id":"<urn:uuid:a5ce4a31-b266-4493-a669-491ed3a8cc4c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
dividing polynomials calculator Google visitors found our website today by typing in these algebra terms: Algebra foil calculator online, How to calculate pH with a casio calculator, online summation calculator, math substitution worksheet, how to put in ascending oder for polynomials calculater, algebra ks3 worksheets. Printable linear equation problems 6th grade, solving linear equations with 4 variables, solving math problems-difference of quotient, can you divide by square roots. Real-life problem involving polynomials, checking your answer for algebra, lessons on expanding brackets with algebra and solving equations, online rational expressions calculator, algebra aptitude test for 6th grade. What life situation can polynomials be used, identifying variables worksheet ks3, HOW TO multiplying binomials in ti 84, trigonometry poem, reflection translation rotation worksheet, Abstract Algebra exams, grade 6 algebra exam papers pdf file. 6th grade taks practice worksheets, tricks for solving aptitude questions, finite math for dummies, problem involving polynomials with real life situations, best chemistry calculator for balancing Difference between evaluation and simplification of an expression, extraneous solutions calculator, sum of digit-java, radical notation solver, hardest math problem in the world, graph points (ordered pairs) on the coordinate plane in order to create a picture. worsheet. X cubed equations, matlab determine solution 2nd order polynoom, what are the two variables worksheet, redox program for ti84, standard form formula calculators, free online ti-83 graphing Zero and negative exponents worksheet, online integrator step by step, radical square roots practice worksheet, solving square root equations worksheets, grade 10 mathematics radicals, inequality solver calculator, solution keys of elementary linear algebra. Modulus javascript, negative numbers worksheets ks3, math trivia with answer. TI-84 radical program, balancing chemical equations finder, online calculator with exponent key. Printable free picture grids, free downloadable algebra solution software, pre-algebra with pizzazz book dd answers, poems on mathematics for 10th graders. Math printouts for 6th graders, calculator that does two step algebra equations, machine balancing formula, ks2 maths problem solving, how to find the square of a imperfect number, dividing radicals calculator, free math worksheets for solving algebraic expressions with exponents. Dilation worksheet free, algebra 1 free answers, chart.ppt, worksheets forsolving addition and subtraction equationsequations, what is the best algebra software, math/financial aptitude test, mathematical help for kids in ks3 online. Taks formula chart, finding factors of an equation, partial fractions exponential, solving unknown variable with an exponent, ti 83 plotting points, adding and subtracting negative fractions, algebra 1 answer key. Samples of expanded notations, worksheets for dummies, ks3 problem solving volume questions, decimal mixed number calculator, calculator for inequalities, online expanding the brackets questions. Quadratic equations example worksheets, how to solve simultaneous equations non linear with excel, fourth grade long division problems, beach gradient, balancing chemical equations using algebra, square roots worksheet for third graders. Radical worksheet, NYS 6th grade math test review worksheets, math trivias with answers, free downloads cube and cube root worksheets, difficult balancing equations worksheet, asymptote calculator, how to rewrite division as multiplication. Cube root ti-89, free math solver step by step, mathematical trivias, quadratic inequality worksheet, creative pre algebra worksheets, 6th grade circumference worksheets. Simplifying radicals, dividing radical calculator, mixed numbers decimals calculator, reducing rational expressions worksheets. Abstract algebra hungerford solutions, rotation reflection translation worksheet, 6th grade math taks practice, convert base on line, HARD math trivias with answers, easy aptitude questions. Factor ti-89, free algebra help for dummies, monomial multiplying dividing worksheet, foiling on TI-84 calculators, solve the compound inequality calculator free, matlab solving the system of nonlinear equations, percentage problems and solutions. Algedbra trivia, common denominator worksheets, mathtutor made my ontario. Solve simultaneous non-linear equations online, math problems that can't be solved by synthetic division, solve my math problems for me for free, first in math cheats, java calculate partial integer, free two step equation worksheet, algebraic pyramid solver. Lineal metre, abstract algebra, hungerford, factoring 3rd order equation, coordinate plane free printable, matric maths help, solve a equation with rational expressions calculator, how to understand writing in algebra. Hard pi math problems, quadratic simplifier, year 7 algebra worksheets, simplify radical expression calculator, rearranging formulas worksheet, 3rd grade math permutations, algebra reflection rotation translation. Multiplying and dividing square roots, Hungerford abstract algebra solution, homework log, free parabola calculator, jessica flores retirement, calcul radical online, algebra percent equations Linear graphing free worksheets 8th grade, aptitude questions with solutions, integration solver. Ti-83 linear equation worksheets, free permutation and combination calculator, creative publications pre-algebra with pizzazz answers, matlab ode45 how to solve differential equation, partial fraction solver ti84. Chemical equation machine, free probability printable worksheet with answer key for grade 7, trigonometry in everyday life, examples of trivias in economics. Raticals free math games, factorize polynomials calculator, online foil, gcd calculation, algebra exercises free. Mcdougal littell algebra 1 answer key, rudin principles solution, trigonometric equation solver free program, algebra solver imaginary numbers, combination permutation calculator. Partial fractions calculator with steps, holt algebra 1 answers, "solve radical equations" worksheet. Practise questions on completing the chemical equations for 8 grade, rational expression calculator, online inequalities calculator, cubic equation solver, balance chemical equations for dummies, math Investigatory Projects, multiplying monomials worksheet. How to graph a cubic root ti 89, how to program a an equation with excel, prime factorization worksheets., free printable worksheets ks3, rational expressions solver. Quadratic Equation system solver, TRICK TO SOLVE QUESTIONS, calculator cu radicali, free math simultaneous equation solver. Solving least squares problems ti 89, 06.07 Multiplying and Dividing Roots, do my math homework for me for free with work, maths worksheets print-yr 9, ti-30xiis cube root, Free printable Grid Mathematics poem(intermediate algebra), standard form of equation calculator, algebra worksheets ks3. How to put information on to a ti 84, foil math calculator, multivariable equation solver online free. Distributive property free worksheet, EASY ENGLISH APTITUDE QUESTIONS, free algebrator online, solve arithmetic, nonlinear equation solver, algera free online help roots negative exponents. Tricks to solve aptitude, figuring out algebra problems, algebra test papers printable. Holt 7th grade math book answers, dividing rational expressions solver, dividing polynomials by monomials calculator, math tricks and trivias, factorial worksheets. Really hard equations, free grade nine, rotational symmetry worksheet ks3, solve equations by factoring vba, implicit differentiation calculator, matlab code+shrinking an array. 2nd grade fraction worksheets, dividing rational expressions calculator, math tricks and trivias, algebra 1 mcdougal littell answers, using the difference quotient to solve trig problems, prentice hall algebra 1 practice workbook. Sum of two cubes calculator, word problems ks3, sixth grade math examples free to copy, .Net generate combinations, online scientific graphing calculator to turn decimals into fractions, lcm of polynomials online calculator. Rational expressions and equations calculator, really hard algebra equations, square root property calculator, least to greatest decimals calculator, holt california algebra 1 answers, online ti-84 calculator, wwwmathcom. Writing algebraic equations ppt, graphing absolute value equalities, example of math trivia, expanding brackets questions. Decimal as mixed number calculator, backing into a number - algebra, math trivia questions, math tricks, math worksheets translations. Free use of online scientific calculator with fractions, cd math course at a sixth-grader, problem solving sheets, decimals to mixed numbers calculator, maths games ks3 year 8, factoring difference of two imperfect squares. Math 6th grade worksheets, trigonometry problems answers, online integer exponent solver, free online 6th grade math probability worksheets, algebra free help, holt california algebra 2 practice Solving non linear system equotions, How to factor trinomials on TI 83 plus, simplification calculator, 9th grade algebra problems, texas calculator simulator, complete the square program, 3 grade combinations and permutations. Synthetic division calculator online, algebra chapter 8 test answers, substitution calculator, ti89 polar equations, linear equations in standard form calculator. Coordinate graphing pictures printable, how to put combinations in calculator, mcdougal littell algebra 1 online 2001, dilations worksheet, factoring polynomials PPT. Algebra pizzazzi, dd-57 worksheet, beginning multiplication, pre-algebra with pizzazz. Graphing inequalities calculator, coordinate grid ordered pairs worksheets, algebra 1substitution math equations, adding subtracting multiplying and dividing integers worksheet, algebra software resources, math 9th grade quizes. Algebra 2 prentice hall mathematics answers, standard form definition in math, math Investigatory. Factoriales ti 89, algebrator guide, boolean algebra tutorial, RADICAL SOLVER. Rational expression problems, simplifying rational expressions worksheet, online polynomial factorizer. My math solver, permutations for third grade, percent equation worksheets, Mentel maths, simplifying exponents like bases worksheets. 5th grade math exam, printable homework logs, 9th grade math practice, convert quadratic equation vertex form, non-linear solving calc, complex rational algebraic expression. 5th grade trivia questions and answers, balancing algebraic equations worksheets, calculator help equations with fractional exponents, associative property worksheets, math trivia algebra question and answer, hard fraction problems. Calculator the lcd rational expressions, factoring polynomials calculator online, alg 1 factoring calculator, foil math solver. Free online radicals calculator, tivias in math, california algebra 1 holt, one step equation worksheets, Primary Algebraic expression worksheets. Java peano, prime factorization worksheets, summation calculator, steb by step integral solver, rational fractions calculator. 9th grade free math help, algebra 1 holt book answer, multiply rational expressions calculator, square root rules, asymptote calculator online, square root of 30 in radical form, simplify radical expressions calculator. Firstinmath cheats, quadratics practice, maths factoring questions. Pizzazz creative publications math worksheets, focal diameter, ti-84 programs difference quotient, Radicals Calculator, elementary math LCM and GCD, decimal?fraction worksheets for third garde, algebra slope worksheets. Work sheets on writing algebraic expressions for year 8, WORKSHEET ON EXPANDING BRACKETS, math games on radicals, math tricks trivia, ratioworksheets ks3, ratio ks3 worksheets, trigonometry bearing Detailed Rational Expressions, explain java program of quadratic roots, how can you get the ti 89 to give you the cubed root of a number, foerster algebra, factoring rational expressions worksheet, solving one step equations worksheet using addition and subtraction, linear equations 6th grade. Online TI 83, math textbook of GCD and LCM, graphing equations worksheets, 5th grade math trivia, algebra 2 probability. Parabola for dummies, mixed number to decimal calculator, completing and balancing equations calculator, work sheet practice of synthetic division, Ti 89 nonlinear equation systems, writing equations in slope intercept form worksheets. Square equal solver, simplifying imperfect squares, on, two and thre-step equations+ free. Can algebrator solve integrals, free coordinate graphing picture worksheets, free algebra calculator for subtracting rational expressions, worksheets for year 2 in uk. Mcdougal littell algebra 1 answer key free, substitutuin calculator, solve a radical equation. Real life in which you might use polynomial division, what is percent proportion?, combining radical expressions. Javascript modulus, do my math problems for me for free, solving inequalities with parentheses and/or the variable on both sides. Factor trees worksheets, polynomials worksheet, basic x box factoring problems. Finite math sovler, mathgames grade8, implicit derivative calculator. Simultaneous equation solver, problem solving worksheet, solution to popular math trivia, ny grade 2 math worksheets, complex numbers radicals calculator. Worksheet zero and negative exponents, hardest algebra problem in the world, algebra 1 answer sheet, online polynomial divider calculator, maths matric subjects pie, algebra tricks & trivias, printable t-charts. College algebra software, algebra denominator, solving college algebra problems, algebra 1 9th grade. Parabola calculator, non linear equations question sheet, third grade equation, multi variable equation solver, slope intercept form worksheet, free fraction worksheets 3rd grade, why did the donkey get a passport? algebra problem. Cartoons doing monomials, manth, algebra homework solver step by step download free, subtraction problem solving, ti-83+ cubic solver, elimination math problems, quadratic expression calculator, algebra exercise exponent. Free graphing ordered pairs picture, thomas fuller picture, multiply divide monomial worksheet. Describing a situation involving relationships that match a given graph, putting formulas into ti-84, mathematic.com, dilation worksheets, substitution calculator online, fractions worksheet for ks3. Best math solver, equation second grade solve matlab, pre algebra with pizzazz creative publications, calculate algebra in matlab, linear equation fraction solver, math papers to print, eighth grade educational trivia questions. Creative publications pizzazz worksheets, integral solver step by step, permutations and combinations worksheet 3rd grade, the sixth square root using a calculator. Foil Calculator, 10 th matric maths, free aptitude test 7 grade. Algebra trivia, beginning multiplication worksheets with pictures, ti-84 plus programs for electrical math. Angle relationships algebra, math trivia question and answer, polynomial factoring calculator, radical notation calculator. Mcdougal littell pre algebra practice workbook answers, learn algebraic factoring, eoc accounting equation, aptitude tricks, math trivia with solution. Cube roots on a ti-89, how to figure algebra, 6th grade math tutorial, algebra 1 answers free, javascript modulus problem, online ti 84 calculator. Volume worksheets ks3, java program to find sum of digits of a number, online differntiating calculator step by step, math trivia. Worksheet factoring quadratic equations, multiplying and simplifying rational expressions calculator, 6th grade Math TAKS practice, free exercise mathematique and solution, aptitude problems on compound interest, a matter of time algebra. Polynomial simplification matlab, math poems (order of operation), algebra rule for reversing formulas, square roots worksheets. Coordinate graphing picture worksheets, algebraic equations square root, poem about algebra symbol, excel algebra graph, graph inequalities online. Expanded form algebra, step by step integral solver, college algebra word problem solver program, online multiple variable solver software, algebra - expanding brackets worksheets, trigonometry algebrator free download. Subtraction with renaming, onlineTI 83, ti-89 titanium inverse log, free picture coordinate graphing worksheet, help with dividing radical expressions. Writing expressions powerpoint, free downloadable partial fraction decomposition calculator, solving simultaneous equations on internet, poem about trigonometry and advanced algebra, online ti84. Java code for subration program, test of genius worksheet, math poems algebra. Substitution worksheet, complicated order of operations worksheet, Algebra eqations for kids powerpoint, The benefits of completing the square, how do you input slope formula on graphing calculator, solving combinations in matlab, polynomial divider. Foil calculator, show step math, free cat tutorials, logarithmic equations +solver, free download algebrator, practice test 6th grade sat, taks practice worksheets 6th grade. Equations with two variables worksheet, free worksheets matrices, multivariable equation solver, adding polynomials + ti 83, reflection translation rotation. Algebra with pizzazz answer key, 6th grade math quotient, rational expressions calculator, difference of rational expressions calculator. Java get lowest common multiple, glencoe accounting 2 workbook problems, importance of linear equations, Basic combination in statistics, algebra with pizzazz creative publications. 9th grade algebra test, intermediate algebra questions with answer, simplifying radicals solver, graphing ordered pairs picture. Solving factorial equations in matlab, how to do combinations on ti-84, free reflections translations rotations worksheets, simplify expressions with exponents calculator fractions, prentice hall pre-algebra workbook answers, subtract negative numbers calculator, nonlinear simultaneous equations in matlab. Factorising calculator, get equation of a hyperbola excel, 1-step equations with square roots, math taks practice worksheets 6th grade, multiplying and dividing monomials worksheet, ratio problems Factoring out the GCF worksheet, TI-84 squaring, trivia questions for grade 6. 6th grade math taks test, improve my math skills yr 8, subtracting rational expressions calculator, math tricks and trivia, online calculator to check slope intercept form, algebra explained, calculator for monomials'. "first in math" review, free download math test papers grade 6, grade 9 math worksheets, "free online algebra cheater", adding and subtracting rational expressions calculator, Solving Quadratic Simultaneous equations. Faction calculator, how do you find the missing number in algebra fractions , factorization with variables, free greatest common factor test paper, dilations worksheets. Holt algebra workbook, ti-93 graphing calculator, decimal to mixed number converter calculator. 5th grade math california, matlab solve nonlinear equation, quiz for ks3, simplify rational expressions worksheet, factoring using substitution. Free printable College Algebra Cheat Sheet, factoring quadratic expressions solver, mathcheats.com, grades-adding. Factor rational exponents, online calculator solving using substitution, holt practice workbook answers. Creative Maths problems on square roots and cube roots, boolean algebra lesson, 6th grade math placement test practice, equation of hyperbole figure, precalculus math solver, simplifying calculator radicals, lyapunov exponent time series code. Algebra reducer, work out algebra problems online, ti-89 log, rational expressions problem solving, hard algebra problems. Mcdougal littell algebra 1 answers, common factors of 52, Free worksheet printouts, simplifying equations+4th grade, matlab roots of a quadratic, division activity sheets. Calculator divide monomial equation, TRANSITION GRAPHS WORKSHEETS, free percentage worksheets ks3, radical expression ti 85, inequality equation calculator, asymptotes solver online, math angles Online integration solver step by step, elementary math books, LCM and GCD, easiest way to factor, math exercises exponents, ks2 lowest common denominator, solving two step equations calculator, factoring solutions. Equation involving rational algebraic expressions, radical solver, hyperbola matlab. Holts 5th grade math worksheets, Multiple Choice Matlab Sample Questions, dilation worksheets math. The hardest math problem in the world to do, plane trigonometry problems with solutions, plot hyperbola matlab, Balancing equations calculator. Ratio calculator math, online binomial expansion calculator, second grade equation, inequalities calculator online. FOIL calculator algebra 2, free maths worksheets algebra expanding, algebra software, easy ways to find algebraic expressions, the americans textbook online, lowest common denominator in variable, ordered pairs coordinate plane. Elementary math trivia, holt algebra 1 textbook answers, conceptual physics online quiz, ex. of math poem, reducing radicals worksheet, how to do simultaneous equations in excel. "scale factor worksheets", percent in solution and proportion, maths powerpoint presentation, scale factor worksheets math. Work out algebra online, algebra questions yr7, algebra games grade 8. Algebra with pizzazz answers key, free worksheets on measurement, quadratic expression calculator webmath, online chemistry equations solver, linear algebra and everyday math, factorise cubed Ode45 en matlab, problem solver worksheets, algebra b software, ks3 math, factoring rational expressions calculator. Free algebrator, practice hard equations, hyperbola calculator, radical calculator, l c m grade 7 maths, trinomial problem solving. 2 step equation worksheets, trivia related to mathematics, how to solve radicals with fractions and variables and square root, what might you have if you don't feel well algebra with pizzazz, one step inequality worksheets, mathematical poems, holt algebra 1 answers to worksheets. Subtracting mixed numbers calculator, cube of a binomial worksheet, what are the common factors of 34, one step inequalities worksheet, interval notation calculator online, year 6 hard maths problem exams with answers. Math worksheets ordered pairs, algebra with pizzazz answers, printable function machines for 2nd grade, polynomial and rational expression calculators, online put numbers in order, 10th grade english games online. Divisor calculator, cubed calculator, decimal to fraction formula, calculator cu radical online. Expanded and factored form, Math Dilation, factoring trinomials solver, simplify radicals calculator. Third grade fraction worksheets, finite math solver, Printable Coordinate Grid. Factor trinomials solver, bginning mulitiplication, quadratic formula ti-89. Graphing linear equations worksheet, TI 84 equation solver system of equations, mathsppt, hard equation test. Linear algebra done right solution, simplifying ratios worksheet, multiplying and dividing rational expressions calculator. Systems of linear equations worksheet, regarding mathematical game, vertex finder, "solve with a grapher". Free downloadable graph questions for aptitude test preparation, difference quotient, math, least to greatest solver, practice equations online, fun algebra projects. Use excel + solver non-linear simultaneous equations, java common multiple example, 9th grade algebra how to multiply radicals, line graph worksheets. Algebra substitution negative expression printable worksheet, multiple choice tests for english, freeware algebra solver, ppt logarithms. Solve square root of a fraction calculator, Maths Aptitude example tests, free printable t chart, square root of pie math, common demoninator calculator, long division multiple choice. 9th grade algebra worksheets, pre algebra prentice hall answers, "greatest common denominator calculator", pre algebra test 6th grade MN. Foil solver online, polynomials in real life situations, 6th grade math with formulas quizzes. Coordinate plane worksheets, free translations rotation, reflection worksheets, mulitiple step math, free graphs of parabolas worksheet, fun poems for math on distributive property. Ti 83+rational expression+program, solve for unknown variable, questions about radical expressions for 9th grade, chemical math. Pearson prentice hall pre algebra online textbook, least to greatest fraction calculator, 2009 california algebra 1 answers, 4th grade worksheets, asymptotes calculator. Radical calculator, algebra games ks2, finding the scale factor of a circle. Adding and subtracting three fractions calculator, polynomial calculator factoring, two step equations test. Quadratic expression solver, 5th grade "trivia", ks3 percentages worksheet, permutation and combination+ 6 grade, binomial radical expressions calculator, radical expressions calculator. Creative publications algebra with pizzazz, fraction TI-89 factorial, factor polynomial calculator, casio computer emulator for pocket pc, apple numbers formulas percent, solving quadratic equations worksheet free, exponents and roots worksheets. Online recursive formula calculator, houghton mifflin mathematics answers, printable printable worksheets of laws of exponents, printable coordinate grid. Online ti-89, cheat sheet for sixth grade math, gcd solver, financial aptitude test with answers, math problem solver synthetic, java program to find roots of quadratic equation, even root property. Scale factor of a circle, McDougal Littell Algebra 2 simplifying complex fractions, algebra worksheet synthetic division+free, transformations on the coordinate plane quiz, math poems middle school. Circle graph worksheets, formula for l.c.m., 9th grade math questions and answers. Coordinate plane printable, square root consecutive integers calculator, why do you need a LCD in rational expressions. Online integration steps, how to complete the square ti-89, multiplying and dividing rational expressions solver, how to solve simultaneous equations with squares. Ks3 WORD PROBLEMS worksheets, graphing linear equations worksheet answer sheet, algebraic +/* fractions, Arithmetic reasoning. Trinomial calculator free, " problem solving worksheets", prentice hall math pre algebra answers, how do you get proportion for percent change, free volume word problems solvers. TI 84 graphing radical equations, algebra problems with brackets worksheet, ks2 maths papers, graphing ordered pairs to make a picture, cross multiply variables worksheets. Financial information aptitude test, factoring a negative fraction, maths equations ppt, convert square root to decimal. Simplifying radicals calculator, chemistry balancing equations calculator, multiply worksheets, foil calculator online, free printable 9th grade math problems, step by step equations of pi, sqare eqal solver. Prayer in advance algebra, radicals calculator, graphing ordered pairs to make a picture worksheet. Consecutive integers calculator, parent graph worksheet, implicit differentiation calculator online, square root sums download free worksheet, poet math exercise algebra. Divide polynomials calculator, grade 2 word problems, arithmetic progression in real life, binomial expansion, permutation worksheet for third grade, factorization 9th grade, word problems of plus sums for class III. Printable coordinate plane, adding and subtracting negative and positive fractions calculator, factorising calculator online, algebraic equations free downloadable worksheets for grade VII, least common denominator java, rewriting division as multiplication problems, how to add subtract multiply and divide radicals. Radical equation calculator solver, 6th grade math taks practice test, divide monomials calculators, solving lagrange multipliers, "complex fractions calculator". Decimals least to greatest calculator, really hard math problems answers, free downloads cube and cube roots. Inequalities calculator, solve rational expression calculator, thomas fuller mathematician, two step equation calculator, algebraic calculator that shows steps. Plane trigonometry problems, ti 89 cube root, math prayers, parabola and its intercepts on ti 84, root problems in algebra, pictures of thomas fuller mathematician, polynomial simplifier. Programming exponents in matlab, root of quadratic equation using java, polinom divider. Parabolas- focus, vertex, directrix, focal diameter, remember our algebra, finding imaginary roots. Quadratic with cubed, rules of square roots worksheets free, free download mathematics formula guide for 8th grade, maths aptitude tricks, finding square roots of imperfect squares, algebrator free Partial fractions in TI-83, poem about polynomials, www.algerbra with pizzazz.com, linear equation with fractions solver. Elementary math textbooks GCD, calculations for Holt Physics 2002, year 8 maths test algebra. Linear measurement worksheets, algebra expanding brackets worksheet, 6 th grade taks past papers. Generator to put numbers in order, quadratic algebra calculator, adding subtracting polynomials worksheet, ti-83 square roots problem, online polar graphing calculator, how to solve for extraneous Word problems involving radical equations, free ratio worksheets, convert to radical notation calculator. Solving complex trigonometric equations, mathematical factorization, creative publications algebra with pizzazz answers, coordinate plane printables. Algebra structure and method book 1 mcdougal littell online, mixed numbers to decimals calculator, powerpoints on how to solve quadratic equations using factoring, ti-93 online. Evaluation and simplification of an expression, make your own algebra tiles, completely factoring complex numbers, lesson positive negative add subtract multiply divide, rational algebraic expression equations, how do you write an equation in vertex form. Multi step problem solving worksheets, free online rational expression calculator, the formula solving nonlinear equations for teenagers, inequality trivias, distributive property worksheets, simplifying ratio worksheet, linear equations worksheets ks3. Calculator radicali, free linear worksheet graph, rational expressions worksheet, rational expression program+ti. Poems about trigonometry, california content standards test formula sheets, mathemathical pOEM about trigonometry, root solver, application in real life of hyperbola problems, high school math trivia, division word problems grade 4. Math help show steps, rotation reflection translation worksheets, simultaneous equations for dummies, Poem in trigonometry, easy integer division problems. Www.maths 7th class exponents and powers, balancing equations online calculator, standard form calculator online, polynomial multiplication calculator, put info on ti-84, algebra expanding brackets. Expression simplifier, synthetic division worksheet with answers, worksheets for fractions 6th grade, boolean algebra cheat sheet, inequality calculator, math poems, ti89 inverse log. Multi step math problems 4th grade, online math solver simplification, java programs code of sum of digit. Percentages worksheets ks3, ti89 online, pre algebra with pizzazz answer key, holt algebra 1 answer key, squaring fractions calculator. Multipling radical, complex trigonometric equations calculator, completing and balancing chemical equations, solve factors and trinomials online, second grade fractions pretest, problem of the week mcdougal littell algebra 2. Focal diameter parabola, maths worksheets ks3 factorising algebra, math trivia for kids, maths for dummies online multiplication, factor tree 4th grade, equations with squares and cubes worksheet, how to do non linear simultaneous equations in excel. Coordinate grid pictures, sums and differences of cubes worksheets, substitution practice, fraction solver, free algebra worksheets. Value of pie math, site that solve fortran program, TI 84 simplifying radical expressions, "good program" spring calculation. Combinations on graphing calculator, online binomial math equation solver, sum or difference and simplify, parabola example problems, yr8 games. Quadratic inequality worksheets free, logarithm solver, trigonometry bearing problems, printable worksheet - integers, grade 6 translations worksheet, algebra 1 math book online. Algebra 1 chapter 3 resource book, higher order polynomials, orthogonal polynomials, cd saxon algebra. Division on polynomials, solve the system of linear equations, np polynomial, trinomial equations . Worksheets on square roots, fraction to decimal calculator, algebra chapter 15, finding roots of polynomials, variables and algebraic expressions, understanding square root. Graphing inequalities activities, solve for x online, convert decimal fraction into, how to solve an algebra equation, free online equation solver, linear functions and graphs. Calculator online to use now, algebraic equasions, books algebra, solving inequalities on a number line, simplifying fractions calculator, math for the real world software, quadratic equations and 3d graph equations, find the x and, off axis parabola, teaching factoring. Properties of matrices, is a square root a rational number, algebra linear graphs, how do i solve algebraic equations, with polynomial, algebra one half. Mathplayground.com, of polynomial equation, a trinomial, ti 83 calc, algebra 1 prentice, to algebra and, heath algebra 1 an integrated approach. Math projects hands on, algebra 2 note taking guide, sum of cubes. Algebra differential, algebraic multiplication, add polynomials, factoring in math, how to solve quadratic equations, how to do algabra. Inequalities graphing, polynomial long division calculator, high school algebra. Algebra letter, what is radical, radical expressions and equations. Algebra square, root simplifying square, help solving math, steps to solve parabola vertex. Algebra 11 help, polynomial from, matrice inverse. Cheat on my math lab, lcm and, college pre algebra, convert decimals to fractions, linear systems of equations, polynomials adding subtracting, expressions in maths. Learning fraction, simplify a radical, virginia algebra 2. Graphing calculator program, polynomial equations by factoring, radical simplify, simplify and combine like radicals, algebra equation solving, orleans hanna algebra prognosis test. Graphing a quadratic equation, prealgabra, calculator for algebra, learn college algebra, grade algebra 1. Quadratic 20equation, uses of algebra, algebra factors, simplifying complex expressions, algerbra answers, hyperbolic graph. The square root of negative, parabolic graphs, 3 simultaneous equations, graphing linear equations worksheets, gcf math. Rational equation addition, algebra differencees of square, Teacher's Edition book answers for trigonometric problems. 6th Grade mathematics vocabulary quiz, online algebraic solvers, general aptitude questions, radical simplifier program, complex analysis maximum exercise solvers, algebra revision "year 8". Convert string to time java, prentice hall mathematics course 3 ohio teacher's edition, mathematical problem solver, college algebra exercise problem, aptitude question, 'program factor 9. Biology worksheets and mcqs, linear first order partial differential equation homogeneous, simplifying radicals calculator, what is the domain of a hyperbola, permutations and combination problems, duhamel's principle nonhomogeneous problem. Free printable 8th grade inequalities worksheets, integers worksheet, mathematics on line explanations for grade 11 about equations of the parabola, Algebra with Pizzazz Worksheet Answers for page 98, surds worksheet. SOFTMATH, pass papers exam for grade 8 physics, cpm algebra 2 formulas, square root with variables calculators, learn simultaneous equations. Vertex form absolute value, easy maths questions test on algebra, polynomial poems, test paper of operations on numbers of class 6th, algebra review least common denominator simplify, math.formulae. Low level algebra 1, Fifth grade ALgebra Puzzles, LCM decimal primary school, balancing equations calculator. How to do algebra, hyperbolic tan ti 83, mcdougal world history textbook glossary terms online. How to solve simultaneous eqations for children, online EOG practice 7th grade, 8th grade algebra basics powerpoint, algebra problem 4 problems 4 unknowns, solved aptitude questions, reducing rational expressions. How to find lcm on ti-84 calculator, ti-89 solve, hardest equation, calculators used to find out equations. Integrated algebra sample test answers explained free, nonlinear equation solver matlab, id worksheets for kids, matlab "system of equations" non linear, solving equations examples crickets, Ti-83 free calculator download, kumon answers level g. Mathmatics for beginners, vb6 power math, permutation combination solving, finding the cube root bbc, algebra dummies free, trinomial math free solver, answers to practice tests for integerated alegra practice tests for regents exams. Solving equations-gcse, grade 9 «algebra exercises, quadratic equation on calculator manually, TI 83 quadratic formula directrix focus, algebra simplifier and solver, advanced logarithm algebra, Adding and Subtracting Positive and Negative Numbers. 9th grade algebra math books, year 8 maths exam examples, common factors table, summer workbook pre-algebra, adding and subtracting integers worksheet, math test ks2. 8th grade pre-algebra final exam, MAtrix intermediate tests answer, special factoring rules, make a factoring program on a graphing calculator, second order differential equation matlab, factoring polynomials sum of 2 cubes. Free downloadable 3rd grade tests, multiply exponents equations, aptitude questions on series.pdf, reverse foil algebra sample problems, permutations and combination. Mathpower 11 self study, adding/subtracting like bases with exponents, Year8 maths papers online, unit 1 mathematics practise exam, year 11, perfect fifth power factor in algebra. Year 7 maths worksheet, practical applications of parabolas using casio algebra fx 2, science ks3 revision free online worksheets, online papers for the 11+, free first grade lesson. Gmat examples: complex square root factoring, "wave equation" nonhomogeneous, free download kumon, program equation rdcalc, simplyfing radicals with exponents. Casio fx-92 store numbers help, grade 10 Algebra, prime factorization printables. Algebraic proof gcse help, fractions algebra calculator, basic maths cumulative percentages, linear quadratic functions mountain range, dividing fraction polynomials calculator, application of algebra, simultaneous equation solver. Conceptual physics prentice hall, free work sheets for 5 year olds, downloadable pdf notes on boolean algebra for computer science students, definition of mathematical expression permutation, method for solving polynomial in three degrees, equation writer as a program for the ti 89 titanium. Understand radical algebra, download kumon, maths pre-test year 8 for exams, graphics calculator online solve, easy subtraction of algebra, what is the difference between parabola and hyperbola. EXCEL COSINE GRAPH DOWNLOADS, positive and negative integer worksheets, basic cost accouting formullas, algebra test you take in 6th grade to take algebra in 7th in texas, probability aptitude questions, multiple choice algorithm exercises VBA. Free e-books on what do first graders need to know, exam papers year 10 physics, rationalize the denominator and simplify calc, iowa algebraic aptitude 6th grade sample, creating programs for the TI-84 Silver Plus Edition, equation solver for int, shadow algebra math problem. Free eog workbooks for 3rd graders, ti 89 completing square, "factorising" generator, software. Game theory java programing formulas, graphing a basic equation with fractions, free Maths papers for 8 year olds, middle school math with pizzazz book d grade 6 riddle answers. Poem on use of maths in indian tradition, formula sheet online for free, conic graphing calculator pictures, nc released 4th grade amth eog. Lewis loftus, solving second order differential equations, algebra hands on activities year 7, permutations & combinations, mathmatics poems, linear graphs worksheet. Algebra 1 prentice hall chapter 10, find the roots of an equation calculator, simultaneous solver. 5th grade social studies 10 question practice quizs.com, factorization of quadratic equation, college algebra problems, algebra solver download, characteristic properties of the compounds of the s-block elements. Binomial solving, square roots in fraction form, glencoe algebra 2 online student edition, sample trigonometry problems with answers. Yahoo visitors found us yesterday by typing in these math terms: • example math problems for ti-84 • text ebook ti-89 download • grade 11 math tests free • graphic ordered pairs using a Ti-83 Plus calculator • free lesson in beginner algebra • linear algebra + free ebook + free download • converting fractions formula • adding subtracting square roots to real numbers • printabile math sheets.com • algebra work sheets for children • pre- algebra with pizzazz! • algebra 1 interactive books • " linear algebra" and" solved problem" -3000 • ti 92 unit math teaching • grade 10 mathematics review parabolas vertex form • solution to linear programming TI-83 • Linear equations in two variables calculator • get free answers for algebra problems • algebra 2 help • reciprocals to write multiplication • algabra cheatsheet for the ti-89 • complete the square calculator • free aptitude question and answers • Slope int,point slope,standard form games • Free Simultaneous Equation Solver • 8th grade algebra practice problems • maple solve equations • linear programing+software • Comparing exponential expressions worksheet • mathematic problems with answers • how to convert general to standard form • math problem solver • math simplify logarithm division • logarithmic interpolation applet • "Differential Equation";"Application";"2nd order" • nonlinear first order differential equation y multiplied by y' = • Kumon Answers • formula for 3rd order polynomial • how to calculate GCD • glencoe high school algebra password • java basics long division • algebra practices • pre-algebra conversion tables • pizzazz free math worksheets • ti 89 and matlab • Rational Expressions Calculator • fractions in basic algebraic equations • how to use graphing calculator for linear equations • "algebra 2 software" • ti-84 math programs • sats exam maths games • free venn diagram problem solver • Find Least Common multiple tool • logarithmic expressions graphing TI-83 • Solving slopes and intercepts - fractions • online antiderivative calculator • gcse science printable test papers on biology • hard algebra questions • printable yr 9 maths tests • math homework sheets first grade • convert decimal to base 3 java • printable algebra test • quadratic equations in excel • gmat exercices • intermediate accounting volume one third edition powerpoint • LCD calculator • permutation combination • free aptitude test papers • how to find a combination or permutation on a calculator • free algebra 2 help online • physics algebra grade 10 • forming algebraic expressions, equations from phrases • holt physics CD • algebra helper • solving first order, nonlinear differential equations • college math sheets • end of third grade printable math test • glencoe algebra 2 answers • english past papers 5-7 interactive • Pythagorean lesson gui • calculator for factoring • mixed number to decimal • A work sheet on proportion • laws of exponent math test questions • grade 12 mathematics ontario sample quiz • easy way to learn english pdf • homogeneous nonlinear ode equations • practice prealgebra final exam • solve simutaneous equation excel • "best algebra textbook" • write expression using only positive exponents 2^-2CD^-5 • Yr 8 maths revision paper • easy way to calculate percentages mental • casio calculator orders • algebra problems/6th grade • probability math review test online • solving square equations linux • equations • how to do scale factor in mathematics • simplifying square roots with exponents • exam papers for year 8 maths exam • 5th grade math ga • probability and permutation combination aptitude questions • do a maths free paper online • free maths tuition graphs • circle graphs objectives and goals • 2nd order nonhomogeneous differential equation • fractions to decimal formula • factoring with calculator • Prentice Hall Texas Algebra 2 book • positive or negative quadratic expression • how to use casio graphics calculator equation solver • online graphics calculator with 2 variables • ti-86 rational expression • two variable equations • grade eight algebraic equations • TI-83 calculator program download • ti-86 how-to factor polynomial • percents, proportions, and absolute value • algebra solving equations Matrices • adding,subtracting,multiplying and dividing integers practice problems • yr 9 maths sheets • free math worksheet to help with ged • triangle solver using vb • pizazz math • kumon worksheets free • singapore downloadable exam papers • BASIC factoring program in ti-84 • absolute value functions worked out • ti89 parabola • online english test yr 8 • 6th grade statistics math worksheets • cube roots with a factor tree • free download of aptitude book • solving for two added variables with exponents • McDougal Littell algebra 2 answers • learn simple algebra • mathematical jokes about linear system • download aptitude test • free downloadable maths past papers with answers • trigonometry + factoring • accounting equations permutations combinations explained • 3rd grade coordinates planes • simultaneous equations square • math+series+activities+year 9+syllabus+algebra • free problem solving printables KS2 logical • holt +"middle school math" +"chapter review" +"course 2" • equation solver on TI-84 • lessons on square roots • math test for ks3 • Quadratic Equations solving by squares • chapter 4 algebra 1 mcdougal littell inc. answers • solving squre roots and other radicals • kumon logarithm • square roots with variables • math online exercise of ks3 • quadratic TI-89 • algebraic proofs homework worksheet • converting decimal worksheet • i need help with college algebra • math algebra functions, domain video tutorials lectures • 10th grade algebra/linear algebra • log formulas • practice free year 8 algebra • worksheets for 8yr olds • online expression simplifier square root • free use of calculator-TI-84 • solving second order homogeneous equations with initial conditions • 11+ Examinations free paper • lineal metres in one square meters • algebra help • Algebra 2 project help • is the greatest common factor of any two even numbers always even • printable pre-algebra learning sheets • square difference • KS2 sample maths test • multiplication and division of integers work sheet • ti 84 emulator • square roots on ti • differential equations solve second • pre algebra worksheets • multiplying like denominators • statistics formulas for combinations • factoring polynomials grade 9 • adding power fractions • learn elementary algebra • cheatsheet for the ti-89 • free online maths papers.ks3 • Factoring Cubed numbers • fx-92 swf • free math exercises for fourth grade of primary school • trigonometry problems and answers • exponents lesson plans • software for college algebra formulas • I ne help on my homework • algebra en pdf • how to solve linear equations powerpoint • algebra made easy • f1 maths exercise download • Free pre-Algebra worksheets • rational equations calculators • gce 7th grade math question paper • teach surds • combination examples in real life • general apptitude questions and solutions • printable worksheets for 6th graders • linear proportion math problem • multiplying and dividing online free test • lesson plan multiplying numbers multiplication • square root property • work out this statistics problem for me • worksheets adding multiples of 5 • 6th grade final exam online study quizzes • free factoring accounting manual • college algebra 2 calculator radical expressions • graph circles with square roots • parabola calculator • algebra questions ks2 • free printable seventh grade math sheets • mc answer sheet download • how to find a cube root on a ti-83 plus • prentice hall pre-algebra practice workbook answers • free ks2 maths quizzes • indian maths teaching with tutor download • lowest common denominator and polynomials calculator • permutation and combination ppt • Free College Algebra Book • Solving simultaneous equations classes for beginners online • how to determine if a graph represents a function • year seven maths free exercise • irrational multiplying square roots worksheets • sums in algebra • learn boolean algebra • "best college algebra textbook" • trig identities solver • online tutor alegra 1 • Free 12+ Examination Papers • chemistry yr 8 quizzes online • revision yr 8 algebra • square root +caculator with addition and subtraction • factoring expressions to 3rd power • how to convert angles from rectangular to polar coordinates on TI-84 plus • different denominators 9th grade algebra • how to square footage calculation • absolute value formulae stat • get help solving an algebra (radical equations) problem step by step online • simplify by factoring • SQUARE ROOT TRICKS elementary • downloadable aptitude test • ellipses equations grapher • Holt algebra 1 • LCM of fractions calculator • even root property calculator • free online exercises on fractions • "free math homework" fourth grade • formula for multiplying integers • Boolean Algebra Practice Problems • software for pre algebra • math-area of grade 8 • radical equations inequalities with squares equations • How to do simultaneous equations for math studies • easy way to solve a system of equations • beginners algebra • maths solver square roots • find GCF and LCM algebra 2 • clep college algebra sample tests • online test papers ks2 not pdf's • free math problem answers • trigonometric formulae for addition and subtraction of inverse functions • free online instant help with algebra 2 • scale factors problems • algabra help • math "discount worksheets" • algebra/simplify without negative exponents • free 6th grade multiplication workbook • print out math homework • exponential functions solver • What is the difference between an equation and an expression? • math - kids - combinations and permutations • cheat sheet for maths yr 9 • solving algebraic fractions LCM • free algebra for dummies • online limits calculator • algebra final worksheet with solutions • Balance + chemical equations + chemistry + animation • how do you add and subtract and multiply and divide and simplify radical expressions • maths paper 4 past papers questions and solutions • how to find equation of cubed graph • intermediate algebra study guides • "tricks for factoring" • equation fractional exponent • Quartic Equation Solver • algebra for dummies online • free 9th grade math study test • college algebra clep test • algebra factor machine • level 8 questions on simultaneous equations • online high school algebra I enrichment course texas • Graphing linear equations for idiots • free homework for 6th graders • free printable worksheets sixth grade • online aptitude questions free download • Simplifying and solving rules activity. • how solve conversion factors • "Elementary Algebra" Bittinger 6th Edition table of contents • holt algebra 1 lessons 9-5 to 9-9 quiz • Factoring algebra 2 when to use it in real life • quadratic functions with fraction coefficients • practice domain algebra problems • algerbra 1 study game • factoring trinomials jokes+cartoons • common factor of numbers in variables • polynomial factorization applet • finding common denominators of equations • McDougal Littell Algebra II Objectives • greatest common factor calculator polynomial • free rational expressions cheater • grid method to multiply algebraic expression • www.com google/ How to make a good ENGLISH GRAMMER TEST? • Free Online Math Tutor • algebra .pdf • free online learning games for 7 th graders • equation for parabola graphing calculator casio • perfect fourth roots • free download of 6th grade math books • chiago font download • solving 3rd power equations • science games for ninth graders free • simplified radical form • algebra 2 answers glencoe • solving trig word problems with graphing calculator • how to calculate lowest common denominator • free learning alegbra • manual solution for discrete mathematics and its application 5th edition arabic • volumetric root calculator • 7th grade equations worksheet free • algebra formula for ti-83 • i need step by step of beginning statistic • kumon answer book download • learning algebra • square roots with varibales • polynomial fifth • factorization calculator • 8th grade pre-algebra final • grade 4 algebra quiz • grade 5 algebra equations • An easy way of doing 10th grade algebra • math fact practice sheets pdf • algebra solver, step by step explanation • analyzing circles worksheet grade 10 • Prentice Hall Mathematics WorkBook Answers • factorise and solve equation calculator • how to solve 6th distributive property free examples • tricks ti-84 plus • biology ks3 questions online • algebra help exponents simplifying calculator • 6th grade final exam online quizzes • sguare meter to square feet • algebra, power • algebra 2 full online tutor • solve online objective type india • how do u turn a decimal into a precent • Worksheets for 9th Graders • "rational expressions calculator" • symbolic method • year8 math worksheet • partial factoring into vertex form • algerbra • on-line pre-algebra courses • partial factoring into vertex form quadratics • java: what is the difference between print Sum and find Sum? • what is the relationship between y=mx+b and ax+by=c ? • square root to the third • fraction division by whole number worksheets • math quation • divide polynomials online • mcdougal littell algebra 1 answers • rational expressions and equations calculater • mixture problems • Algebra with Pizzazz • India 1st grade maths • free GED square root example • sixth grade fraction practice • aptitude questions pdf • square root minus square root calculator • solve sixth power algebra • excel trig calculators • algebra 1 california adition answers • free download charles p. mckeague beginning algebra 7th edition • maths year 8 exam papers • free prentice hall mathematics answer • Matlab: runge kutta heat equation first order • convert numbers from base six to base three • Ninth grade science notes • Pre Algebra Lesson plans • exponential form calculator • real life pictures of a hyperbole • trig identities solver free • Graphing for idiots • balancing equations ks3 maths • 8th grade math eog quizzes free • child struggling with prealgebra • radical equations and inequalities • adding and subtracting ducks • 6th grade aptitude test • calculator for quadratic equation • math algebra binomial theorem • compund interest equation • 4th grade geometry graphs problems help • worksheets test papers for grade six • year eleven equations with letter in the denominator • balancing method algebra • expanding binomial calculator showing all work • simultaneous parabola 3 equations • solving quadratic equations by extracting the root • how to do a Quadratic equation 8th grade • test paper fractions of class 6th • accounting with algebra formulas • solving • rational algebraic expressions lesson • slope worksheets • 6th grade NEW YORK math testing program • first grader font download • free online quiz for 6th grader • permutation combination exercise • calculate integral free online book • adding and subtracting fractions calculator • solve equation visual basic • force KS3 download worksheet • completing the square calculator • maths quizzes for yr 8 revision • quadratic formula code for ti-83 plus • problem solver definitio • Example of a word problem using grouping, exponentials, multiplication • fraction power • cubed polynomial factored • decimal worksheet • add radical square roots • multiply adding and subtracting and divide decimal games • algebra 1 california edition answers • Algebra classes for beginners online • what do you call the suare root of a number? • root and exponent • free answers to my own quadratic functions • how to factorize cubes • Intermediate algebra Martin-Gay html tests 4th edition • plot circle ellipse in Mathcad • Cognitive Abilities Test online 4th grade • SLOPE OF GRAPH ON GRAPHING CALC • intermediate algebra answers • find roots of a quadratic equation using a calculator • worksheets-angle relations • 3rd grade lattice multiplication worksheets • Notes on Simplifying Radicals • learning Elementary Algebra online • solving a square root with a exponent • Tests on Algebra • video to solve quadratic word problems • ti 84 radical function • multiply polynomials calculator • check if given no is divisible by two in java • Exponential how to write expression • worded quadratic equations • Year 10 chemistry lessons writing equations • simplify radicals worksheets • math worksheets printouts for 3rd graders • free math worksheets gr6 • sat materials papers free download • college algebra 2 calculator • free basic math absolute beginner • using a calculator for algebra how does one simplified • Formula for Ratios • KS3 free test papers • Radical Expressions Solver • what to study for the florida algebra 1 orange county final exam • SURDS IN MATHEMATICS+9'TH STANDARD STATE SYLLABUS • maths aptitude questions • fastest ways to do grade 7 math sums • free ti 83 calculator online • free fractions worksheets with easy word problems • learn algebra software • activities for solving quadratic equations • ged math work sheets • qudratic equations • EBOOKS FOR FAST MATH CALCULATIONS • free aptitude book download • kumon answers online • multiply fractions online calculator • exponential quadratic equation solver • entering the math equation • algebra hungerford • Yr 8 math revision • Find equation of hyperbola • rational equations calculator • 9th grade chemistry sample test • 7th grade equations worksheet • mcdougal Littell Algebra 2 Answers • printable year 9 math • mixed number fraction & adding, subtracting • maths volume sheets for teachers • scale maths • laws of exponent math test questions pdf • free 'maths worksheet grade 2 more than and less than • printable 6th grade math test • mcdougal littell algebra 2 test • free online scientific calculator with fraction button • finding the vertex with a negative number • radicals and complex numbers finding the domain • pre alegra • ks3 science past test paper • instead of adding ,it is subtracting • end of the year distrubitive math, 5th • hyperbola+inequalities • mathematics: homework gr 10 • turn a decimal into a fraction calculator • ti-89 interval notation • boolean algebra calculator • prentice hall mathematics workbook • free radical simplifier • math grade 9 worksheet • venn diagram + kids problems • trigonomic equation simplifier • Free Algebra 2 Solver • year 6 printable maths worksheets • 2nd form history exam paper • algebra software • Algebra with Pizzazz! :Creative Publications • algebra work problems • factor into fraction calculator • Conceptual physics questions answers • subtract radicals • EOG Practice Test 6th grade • trigonometry problem solver • factoring cubed binomials • algebra answer help rational expressions and equations • adding subtracting positive negative numbers exercises • prentice hall algebra 2 answer key • finite math clep tests • Algebra equations for grade 5 • Discrete mathematics venn diagram cheat sheet • holt rinehart winston algebra • algebra and trig math for dummies online course • ks2 practice sheets maths • simplifying rational expressions calculator • Regents B exam maths papers of previous years with key • radical simplifier online • domain and range + Ti 83 • mixed number to decimal calculator • step problems in maths for kids • complex rational expression graphs • 9th grade texas formula chart science • synthetic division calculator • how to program TI-84 Plus Trig • maths scale • Study Notes on aptitude • free tutorials on Cost Accounting • Math calculator past papers ks3 • online yr 8 maths revision • meaning, method, mode algebra online help • how to solve 3rd equations • radical solver • free worksheet 6th grade area and volume • algebra 2 review • algrebraic fractions • how to learn basic algebra FREE DOWNLOADS • equations with inequalities 5th grade • how to find the y intercept, x intercept, and slope for dummies • studying for maths tests year 8 • maths printable revision games • how to solve 9th class math problems in India • maple calculate permutation • McDougal Littell algebra 2 answer key • free online square root calculator • factoring equation • adding and subtracting negative integer math online practice test • free key state 3 sat paper revision • 5th grade math worksheets/stem & leaf plot • EOG worksheets to practice to printout for grades 3-12 • chapter 10 Mcdougal Littell algebra 2 • java solving ode • equation factor calculator • java convert decimal to any number base • algebra software • how to find r2 on ti 83 • dividing polynomials online • I need help finding a problem using grouping, exponentials, and multiplication or division • algebra fraction solve for t • adding positive negative numbers practice test • questions+aptitude+answers • learn algebra online free • add and subtract fractions math worksheets • calculator techniques • answers to glencoe algebra 2 works sheets • pre algebra quiz • 6TH GRADE MATH TEST • grade 3 math combinations • ebook downloadz • how to solve algebra • rational expressions calculator • ppt. beginning algebra • tic tac toe 5th grade • formula for percentage • ks3 papers free • free math grade 9 worksheet • free printable math worksheets for seventh grade • what are the answers for holt mathematics • ti 84 plus polynomdivision • advanced example problems permutations and combinations • math for dummies • Algebra 1 Formula Chart • how to solve numerical skills prealgebra • second order differential equations by matlab • intermediate algebra problem solver for free • using a linear equation in a real-life application • radical calcultor • math help college algebra new mexico edition • algebra 2 answer • gcf exercises • MATHMATICAL EQUAIONS • prime factorization printable worksheets • free pre algebra • list the formulas used in solving clock questions in aptitude • Laplace help • reduction formulae- trigonometry grade 11 • 6th Grade Algebra Fractions • how do you type fractions into you eqation with the texas instruments ti83 • common foot problem.ppt • Precalculus with Limits: A Graphing Approach, Fourth Edition Homework Answers • math investigatory project • kumon cheat answers • factoring polynominal • logarithm powerpoints • free calculators log e • Math Cheats • world hardest math problem • Plato Interactive Mathematics Elementary Algebra answers • scale in maths • online fraction division calculator • printable 1st grade math assessments • school algebra: vertical hyperbola • quadratics and trigonometry problems • use rational exponents to simplify • math factoring free quick answer show work • mental maths test sheets to print • Polynomials & Monomials solver • tutor ti-84 • finding formulas for number grids • free 3 rd grade math worksheets • online graphing calculation • Saxon math vs. Aleks math? • free work sheets on order of operation • y-intercept theorem • printable ez grader • dividing with fractional exponents • download rom ti 84 • parent graph of hyperbola • prime factor calculator • aptitude papers solved • Iowa Test of Basic Study free • absolute value graph- polynomial? • vertex form from a graph • college algebra assistance • 9th Grade Algebra; Cheat Sheet • exponents for 6th grade worksheets • applications of trigonometry in daily life • free first grade printables • math problem solvers • solving radicals with algebrator • Multiplying and dividing rational expressions • algerba calculator • Translate the phrase into a variable expression: seven subtracted from the product of eight and d • first grader printable math games • convert decimals to fractions calculator online • find graph with absolute value • homework help balancing method algebra • how to find the domain and the range using the graph of a function • yr 11 general algebra • free algebra calculator • math and distributive principles and problems and answers • Algebrator download • free printable maths percentage exercises • pythagoras theory online calculator • 8th grade math-area and volume • texas ti-85 rom image • long trigonometry answers • ti84 examples • Bearings Ks3 worksheets • Cost Accounting level 3 exam paper and answer • casio fx83 free download • who invented fractions? • rudin exercise solutions • maths questions esl yr 9 • free volume and area worksheets maths • EQ online test printouts • Algebra pdf • NC end of course algebra 1B • maxima command line • rational expressions and equations online calculator • boolean formula generator applet • ti 84 plus downloads • Prentice Hall Mathmatics Course 2 • McDougal Littell online english teacher books • 9th grad pre algebra freeware download • fast learn free gcse • gcf lcm worksheets • square root addition calculator simplify • free online hard math tests • maths area question for kids • fractions sheets for teachers • free online10th grade math test • common denominator calculator • fractions caculator online • free downloadable maths sums • year 11 maths work sheet • maths made tests of level 8 • printable math sheets • teach me algebra 1 free • calculator left limit • daily life mathematical application freeware • algebra, write an expression: n less than 20 • college algebra help • slope formulas • patterns and algerbra worksheets for grade 6 • 6th grade geometry printout • divide rational expressions • math equasions • holt pre algebra answers • Java solve formula polynomial • texas t1-81 manual • algebra for college students Mark Dugopolski teachers edition • maths poem in indian tradition • algebra calculator with square root • Texas ti 84 plus download game • simplifying complex radicals • developing Area of Circle: KS3 • grade 10 word problem quadratic functions help • solved problems about 3 equation 3 unknowns; linear equation • example of 6th grade algebra • permutation & combination • Algerbra Final Study help • how to solve linear equations • Math problems with number sense radicals, LCM, GCF, Irrational for 7 graders • the vertex form for kids • free maths powerpoint presentation on adding and subtracting word problems • free sample entrance test for grade 7 • printable math sheet for grade one • maths questions kumon • free math worksheets for formulas • factorization cross method • texas instruments ti 89 cube root • free algebra 1 work sheet • trigonometric formulae for addition and subtraction of inverse functions as tan inverse + tan inverse • matric calculator • As level maths + formulae for TI-84 plus • observed point algebra eoc booklet • How to do algebra problems • Algebrator free trial download • How to get percentage formula • Free College Algebra Worksheets • rational calculators algebra • bbc bitesize maths quadratic equations simplify • rationalizing the denominator in linear equations • algebra humor • maths problems scaling • grade 7 lesson plans, transformations • trig calculations • online free exams for software testing • 7th Grade Algebra free course • strategies for teaching quadratic factorising • Algebrator+full download • artin algebra • probability/algebra 2 • solving graphing a derivative • radical expression solver • learn algebra online for free • physics formulas for conceptual physics • solving quadratics when there is a negative number infront of x • College Preparatory Mathematics answers for Algebra 2 unit 8 • "4th grade" + eog + math + practice exams • year 10 maths & english aptitude test • Trigonometry Questions for intermediate level • maths half year test for year 7 work sheet and games • writing C# square root formular • FREE KUMON WORKSHEET • how to solve a physic problem • prentice hall mathematics answer • factor a quadratic program casio • step by step algebra for interest and percents • negatives and positives- adding, subtracting, dividing, multiplying • addison-wesley conceptual physics third edition answers • Solving equations-gcse E-D grade • dividing polynomials calculator algebra • eighth grade pre - algebra worksheets • learn algebra 2 free online • free calculator for radical expressions and functions • How do you graph a hyperbola on a TI • probability examples for 9th grader • "grade 8" algebra fun lessons • hard math calculation • download free graphin text • introductory algebra homework answers • algebra II probability • how to use logs TI89 • to determine the number of deer +Algebra • andwers for mastering physics • hard algebraic equations • square root and variable calculator' • mathematical graphs parabola hyperbola • 6th grade math practice printouts • mental math practive online for SAT tests • Algebra Radical questions • algebra for 6th graders • free download book of accounting principles & practices • life science exam papers • square roots with exponents • year 12 maths worksheets • program • Who invented the slope and y intercept • texas instrument instructions T83 • multiply and simplify using casio • solving distance formula with a variable • Aptitude Questions and Answers • prentice hall mathematics interactive algebra 2 textbook • biology sol review sheet answer key • algebra two max and min value of function • SCINTIFIC CALCULATER RULE • prentice hall pre algebra practice workbook • 5th grade distrubitive practice • how to do algebra for beginners • software for algebra • algebra by barron online free math material • 11a Simplifying Radicals • Solved aptitude questions • free math problem solver • Virginia SOL 4th grade printable SOL Math • free kumon worksheet • simulink to Solving an equation of fifth grade • how to add and subtract +eqautions • nonhomogeneous second order ode • hrw +"middle school math" +"chapter review" +"course 2" • 8th grade inequalities worksheets • Algebra Problem Solvers for Free • grade 10 functions worksheet • maths games yr11 • free homework help balancing method algebra • solution manuals for physics books for free download • Jeeves Solve Math Problems • glencoe texas math textbook course 3 "Chapter 10" • Type in Algebra Problem Get Answer • glencoe algebra 2 book review • similarities between dividing two fractions and dividing two rational expression • algebra 2 chapter 11 help • problems on permutation and combination • how can i teach myself basic algebra • "integration of radicals" • maths papers for 9 year olds free to download • square root addition calculator • simplify exponential equations questions worksheet • formula for all ratios • completing the square quadratic functions and graphs maths least value of x • math grade six test papers • Free Algebra Solver • Pre-Algebra free tutorial • college algebra • Algebra CLEP • Free non-downloadable printable Maths worksheets for year 2 • simple explanations notes discrete mathematics • printable worksheets year1 • pictures of factorials • TI-84 Algebra 2 programs • how use equation in power point • online algebra 2 calculator • worksheets for maths ks2 • solving algebra word problems calculator • pre algebra practice eog worksheets • algebra calculator emulator carcked • stats gcse revision games • 6th grade math quizzes • quadratic equations fractions • printable math worksheets for 6th • UK 6th grade Math tests • printable pre-algebra readiness test for middle school • sample erb tests • quadratic graph • 6th grade math virginia practice and test prep for SOL workbook • fractions in simplest form math cheats • "equation worksheets" and "Grade 4" and |"free" • equation for meters cubed • free download a changing world mcgraw hill • simultaneous equations calculator • 1st grade math patterns lesson plans • maths exam online 4th year level • finding least common multiples in algebra • ti 89 completing the squares • Glencoe worksheets answers • equation to get the solution for volume • cummulative density function • answers for linear motion worksheet • square root solver with factoring • algebrator • college algebra Math problem solver program • prentice hall algebra 1 • Prentice hall~ 8th grade math~interactive learning using textbook as guide • Free Maths pattern workbook for grade 6 • simultaneous equation 3 unknowns Ti84 • radicals calculator • algebra worksheets + "simplifying expressions" + printable • TI-84 distance formula program • kumon answers • solving non-linear differential equations • polynomial factor solver • math equations you need to know for grade 9 • rational equation algebra 1 calculator • low common factor high common factor • How to List Fractions from Least to Greatest • Prentice Hall Pre-Algebra worksheets and answers • conceptual physics teachers edition • combining like terms lesson plan • general equations for all function graph families • systems of equations cheat free • school entrance examination science sample papers grade 7 for downloads • how to factor a cubed root • dividing rational expressions calculators • solve factoring with square roots • free ordered pairs worksheets • free printable worksheets fo 12 year olds • 4th root of 16 • aptitude test solved papers • year 11 physics notes for graphics calculator • houghton mifflin pre algebra • divide polynomial Source C • hard mathemetical calculations • i need to learn algebra 1 fast online for free • mixed decimal to fraction • graph polar equations in matlab • download calculas • exercise maths form 2 • Algebra for beginners pdf • excel exponential equation solver • free help with math • multiplying and dividing equations worksheet • holt physics practice • adding subtracting dividing and multiplying roots • solved homwork of discrete math relations • beach math sheet Yahoo visitors came to this page today by entering these algebra terms: │graphing calculator functions / 3rd order polynomial function │Trigonometry games for a class │ │TI program for simplifying radicals │math exercises for 12 years old │ │mathematic tests │free 9th grade math test │ │EQUATIONS PRACTICE.COM │download online algebra 2 │ │math quizzes for 9th grade │precalculus problem solver │ │maths made easy grade 10 │grade 8 advanced algebra │ │how to divide complex numbers in excel │how do i turn a decimal to a mixed fraction │ │formula to find square root │program Ti-84 plus trig │ │free grade 4 algebra quiz │introductory algebra eighth edition homework answers │ │software to calcuate final grade? │newton's + nonlinear + system + matlab │ │calculator greatest common factor calculator of two expressions │free online graphics calculator 2 variables │ │Free Cost accounting books/reference │balancing chemical equations electrically │ │algebra practice problems isolate variable │passing allgebra clep test │ │applying FOIL algebra │accounting study notes gcse concepts │ │kids printable work sheets │free online quiz of english for 7th grader │ │Divison Worksheets for 2nd graders │math placement test 6th grade │ │finding slopes of a line calculators │maths symmetry worksheet │ │TI 84 emulator │boolean algebra simplification calculator │ │completing the sqaure calculator │online TI-83 graphing calculator │ │6th grade math w words │rational expression solver │ │SUMS FOR 8TH CLASS OF CUBE & CUBE ROOTS IN MATHS │hard maths tests for kids │ │free printable 9th grade worksheets │differential equations maple + modelling motion of a bungy jumper│ │how to calculate parabola distance │Solving Mathematical Problems of mathcad │ │hardest math problem │California 6th grade Algebra Lessons │ │kumon worksheet │area methodpdf │ │Prentice Hall Pre-Algebra Workbook answers │subtracting and adding │ │Teaching slope in pre-algebra │free maths powerpoint in addition and subtraction problem solving│ │SATS-Year Five Practice Paper │multiple roots math 10 worksheet │ │aptitute model quetion pappers │greatest or least fraction calculator │ │freework sheet for second grader │find square roots on a ti │ │example of using trig in real life │Numerical Methods for chemical Engineers. Regression │ │formula fractions to decimals │graphing linear and non linear lines - grade 9 │ │least squares method ti84 │high school linear systems solver online │ │multipliying matrices online calculator │basic algebra complete beginner │ │using factor 9 ti-83 plus │What Is the Ratio Formula │ │Ratio and proportion tutorial │week 6 quiz algebra university of phoenix │ │simplifier fraction word │Algebra Poems │ │solving factorial algebra │divide multiply add subtract fractions │ │worksheets on adding integers │kumon sheets for free │ │general aptitude onlineexam │book of maths of usa 6th grade only of 6th grade │ │math help operations with radical expressions │rational expressions, excluded values and reducing fractions │ │simultaneous equation calculator │radicals adding example │ │internet free algebra exercises │illinois freshman algebra 1 online textbooks │ │Quadratic fractions │solutions rudin chapter 9 problem 17 │ │Grade 10th mathematics exercises │help with d=rt algebra │ │square root radical form ploynomial │highest common factor grade 7 │ │ks2 maths worksheet basic │KS2 exercise worksheet │ │multiplying integers and dividing │free online sats papers │ │Two variable equations │what is an example of a real life slope algebraic │ │online maths exams ks3 │writing decimals in radical form │ │percentage formulas │multiplying square root fractions │ │free algebra software downloads │hyperbola graph │ │ti89 store │decimal equations calculator │ │south carolina algebra end of course test │free maths cgse calculator exams │ │factoring factor calculator │formula for finding ratio │ │mathlab calculator example │teaching kids simple formulas in Excel │ │ti 83 rom download │glencoe math chapter 11 course 1 │ │how do to scientific notation on a ti-83 plus │simultaneous quadratic equations │ │excel compound angle calculator │FREE ALGEBRA SOLVER │ │square root tips │reduce polynomial expression to lowest terms calculator │ │lesson plan of Expressions │Simplified Radical Form │ │McDougal Littell algebra 2 final │boole express │ │factoring square roots with unknown variables │radical expressions solver │ │maths easy way calculate │COMMON ENTRANCE PAST PAPERS DOWNLOAD ONLINE │ │Algebra, formulas, D,T,R │fraction and decimal notation display │ │factoring algebra │college algebra clep review │ │Maths worksheets yr 1/2 │printable grade 1 homework │ │using the order of operations, evaluation the following expressions before adding or subtracting. Keep answer in fractional form│8th grade worksheet │ │differential equation ti-89 │laws of probability maths statistics gcse │ │algebra test for year 8 │class 9 math algebra formula │ │hard maths work for 7 and 8 years old │calculator cu radical │ │math properties worksheets │8th grade worksheets │ │addition and subtraction equations worksheet │free algebra software │ │how to determine the turning point of the hyperbola grade 11 │free Math sat1 rules │ │Abstract algebra powerpoint │download free kumon tutorial │ │6th grade permutations │algebra problems with exponents │ │Online SOlver algebra │solving square root polynomials │ │math parabola project algebra 2 │math word problem solver │ │algebra games for year 11 │solve quadratic equation step by step │ │vb6 book download │Course on mathematical physics: filetype: pdf │ │radical simplify │simplify root equation │ │convert partial differential equation into canonical form with an integral equation │tutorial for algebra conics │ │SAT maths grade 6 │college math for dummies │ │teach me algebra 1 │simultaneous equation solver free online │ │teacher manual elements of modern algebra │Factoring Cubed Polynomial │ │yr 8 maths │combination of negative number │ │algebra history exponents │solving radicals online │ │kumon practise sheet │factor binomial calculator │ │the difference between evaluation and simplification of an expression, Algebra │square root of 85 │ │vapor enthalpy vs temperature │javascript BigInteger │ │printable algebra pre-test and test │worksheet of maths for grade 8 o levels │
{"url":"http://softmath.com/math-com-calculator/factoring-expressions/dividing-polynomials.html","timestamp":"2014-04-18T16:16:31Z","content_type":null,"content_length":"132932","record_id":"<urn:uuid:c5ad8376-701c-4e8e-9016-991ec90ff305>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
American Mathematical Society 2014 Niven Lecture - Bjorn Poonen Month: May 2014 Date: May 26 Name: 2014 Niven Lecture - Bjorn Poonen Location: The University of British Columbia, Vancouver, BC V6T 1Z4, Canada. Undecidability in Number Theory. Hilbert's Tenth Problem asked for an algorithm that, given a multivariable polynomial equation with integer coefficients, would decide whether there exists a solution in integers. Around 1970, Matiyasevich, building on earlier work of Davis, Putnam, and Robinson, showed that no such algorithm exists. However, the answer to the analogous question with integers replaced by rational numbers is still unknown, and there is not even agreement among experts as to what the answer should be. The annual Niven Lecture Series, held at UBC since 2005, is funded in part through a generous bequest from Ivan and Betty Niven to the UBC Mathematics Department.
{"url":"http://cust-serv@ams.org/meetings/calendar/2014_may26_vancouver.html","timestamp":"2014-04-19T16:03:50Z","content_type":null,"content_length":"38170","record_id":"<urn:uuid:7b4b84bf-9b13-4bbd-bf14-e5d9e8f571d7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Misunderstood Minds . Arithmetic Activity Page 1 | PBS Learning basic math facts is a critical step that allows children to progress efficiently to higher levels of mathematical thinking. If a middle-school student cannot quickly recall basic facts like 2 + 3 = 5 or 6 x 3 = 18, this will likely slow him down when working on a more complex problem. For many people, math facts come easily. Some people with math disabilities, however, who lack an intuitive understanding of numbers or symbols or place value, may struggle endlessly with these basic mathematical concepts. Most people learn basics facts on a table like the one below. Do you remember how to use it? ├─┼─┼─┼─┤ Find the first number in the dark grey column on the left. Then find the other number in the dark grey row at the top. The sum is the value in the cell where the row and column intersect. │1│2│3│4│ For example, to find 2 + 3 using the addition tables at left, first find 2 in the left column, then find 3 in the top row. Follow the column and row to where they meet, at 5. Thus, 5 is ├─┼─┼─┼─┤ the answer.
{"url":"http://www.pbs.org/wgbh/misunderstoodminds/experiences/mathexp1a.html","timestamp":"2014-04-19T17:17:53Z","content_type":null,"content_length":"5228","record_id":"<urn:uuid:95f484bc-b91a-4202-aec4-609e365be06f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
the Distance to HT Cas Using Parallax The Method of Parallax Parallax is the apparent shift of a nearby star against the fixed background that can be seen as the Earth goes around the Sun. A star's parallax angle is given by the formula: If you know any two of these variables (the parallax angle and the radius of the Earth's orbit, for example), you can solve for the third (the distance to the star). Click here for a movie illustrating a star's parallax motion on the sky Tell me more about parallax Finding the Distance to HT Cas Using Parallax You know that parallax is the most accurate method for finding the distance to nearby stars, and you hope that you can use this method for HT Cas. First you need the right images. Shown below are idealized images of the sky around HT Cas taken six months apart with an optical telescope. (In order to achieve the resolution or fine detail in these idealized images, you would have to have a telescope a kilometer across!). Image of HT Cas taken 06/96 Image of HT Cas taken 12/96 In the image below, the two images have been combined. HT Cas is highlighted in red. You can measure the apparent shift during this six month period if you know the scale of the picture. From a stellar astrometric catalog, you find that the two stars closest to HT Cas are a distance of 0.01" (" stands for units of arcseconds) apart. Measure the apparent shift of HT Cas over this six month period. Half this angular value is the star's parallax, which you can now use to calculate the distance to HT Cas. Composite image of measurements of HT Cas taken six months apart
{"url":"http://imagine.gsfc.nasa.gov/YBA/HTCas-size/parallax2.html","timestamp":"2014-04-16T21:59:43Z","content_type":null,"content_length":"15565","record_id":"<urn:uuid:d118b815-c844-4ae1-8001-bbb70d2b910d>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Reviewing for the FE exam, came across questionable thermo problem. Im studying for the Fundamentals of Engineering exam (coming up April 12!!) and came across this Thermodynamics problem in my review book. A gas goes through the following processes: A to B: isothermal compression B to C: isochoric compression C to A: isobaric expansion P[C] = P[A] = 1.4bar V[C] = V[B] = 0.028m^3 The net work during the C-to-A process is W[CA] = 10.5kJ What work is performed in the A-to-B process? So the work for an isothermal process is ), where was previously determined to be In the solution to the problem, the term is replaced with , where it should be from the ideal gas law. Is this solution wrong? As I see it, this cant be solved without knowing the number of moles of gas (or mass of the gas depending on which is used). I was unable to get the correct answer, and the solution seems wrong to me.
{"url":"http://www.physicsforums.com/showthread.php?p=1675348","timestamp":"2014-04-17T09:58:28Z","content_type":null,"content_length":"23342","record_id":"<urn:uuid:12a4a856-a0e1-4950-97d0-c201c9407b18>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Transgressing boundaries, smashing binaries, and queering categories are important goals within certain schools of thought. Reading such stuff the other week-end I noticed (a) a heap of geometrical metaphors and (b) limited geometrical vocabulary. What I dislike about the word #liminality http://t.co/uWCczGDiDj : it suggests the ∩ is small or temporary. — isomorphismes (@isomorphisms) In my opinion functional analysis (as in, precision about mathematical functions—not practical deconstruction) points toward more appropriate geometries than just the [0,1] of fuzzy logic. If your goal is to escape “either/or” then I don’t think you’ve escaped very much if you just make room for an “in between”. By contrast ℝ→ℝ functions (even continuous ones; even smooth ones!) can wiggle out of definitions you might naïvely try to impose on them. The space of functions naturally lends itself to different metrics that are appropriate for different purposes, rather than “one right answer”. And even trying to define a rational means of categorising things requires a lot—like, Terence Tao level—of hard I’ll illustrate my point with the arbitrary function ƒ pictured at the top of this post. Suppose that ƒ∈𝒞². So it does make sense to talk about whether ƒ′′≷0. But in the case I drew above, ƒ′′≹0. In fact “most” 𝒞² functions on that same interval wouldn’t fully fit into either “concave" or "convex”. So “fits the binary” is rarer than “doesn’t fit the binary”. The “borderlands” are bigger than the staked-out lands. And it would be very strange to even think about trying to shoehorn generic 𝒞² functions into Beyond “false dichotomy”, ≶ in this space doesn’t even pass the scoff test. I wouldn’t want to call the ƒ I drew a “queer function”, but I wonder if a geometry like this isn’t more what queer theorists want than something as evanescent as “liminal”, something as thin as "boundary". In harmonic analysis and PDE, one often wants to place a function ƒ:ℝᵈ→ℂ on some domain (let’s take a Euclidean space ℝᵈ for simplicity) in one or more function spaces in order to quantify its [T]here is an entire zoo of function spaces one could consider, and it can be difficult at first to see how they are organised with respect to each other. For function spaces X on Euclidean space, two such exponents are the regularity s of the space, and the integrability p of the space. —Terence Tao Hat tip: @AnalysisFact Given a time-series of one security’s price-train P[t], a low-frequency trader’s job (forgetting trading costs) is to find a step function S[t] to convolve against price changes P′[t] $\large \dpi{200} \bg_white \int^{\tau}_0 dP_t \ast S_t \ dt \quad = \quad \text{profit}$ with the proviso that the other side to the trade exists. S[t] represents the bet size long or short the security in question. The trader’s profit at any point in time τ is then given by the above definite integral. • I haven’t seen anyone talk this way about the problem, perhaps because I don’t read enough or because it’s not a useful idea. But … it was a cool thought, representing a >0 amount of cogitation. • This came to mind while reading a discussion of “Monkey Style Trading” on NuclearPhynance. My guess is that monkey style is a Brownian ratchet and as such should do no useful work. • If I were doing a paper investigating the public-welfare consequences of trading, this is how I’d think about the problem. Each hedge fund / central bank / significant player is reduced to a conditional response strategy, chosen from the set of all step functions uniformly less than a liquidity constraint. This endogenously coughs up the trading volume which really should be fed back into the conditional strategies. • Does this viewpoint lead to new risk metrics? • Should be mechanical to expand to multiple securities. Would anything interesting come from that? I wouldn’t usually think that multiplication of functions has anything to do with trading. Maybe some theorems can do a bit of heavy lifting here; maybe not. It at least feels like an antidote to two wrongful axiomatic habits. For economists who look for real value, logic, and Information Transmission, it says The market does whatever it wants, and the best response is a response to whatever that is. For financial engineering graduates who spent too long chanting the mantra “μ dt + σ dBt" this is just another way of emphasising: you can’t control anything except your bet size. UPDATE: Thanks to an anonymous commenter for a correction. The word ‘space’ has acquired several meanings, which is what you would expect of such a sexy, primitive, metaphorically rich, eminently repurposeable concept. 1. Outer space, of course, is where cosmonauts, Hubble telescopes, television satellites, and aliens reside. It’s ℝ³, or something like that. 2. Grammatical spaces keep words apart. The space bar got a little more exercise than the backspace key while I was writing this list. 3. Non-printable area (space) is also free from ink or electronic text in newspapers: ad space. Would you like to buy one? 4. Or in music. Don’t forget to “play” the notes you don’t play, Thelonious! 5. Or the space you need to give someone in a relationship, if you want to allow them to be themselves whilst also being with you. 6. Space on my hard drive to store an exact digital replica of all my vinyl? This kind of space also applies to human memory capacity, computer RAM, and other electronic pulsings which seem rather more time-based than spatial & static. 7. Businessmen refer to competitive neighbourhoods: the online payments space; the self-help books category; the $99-and-under motel space; and so on. 8. Space as distinct from time. Although cosmologists will tell you that spacetime is a pseudo-Riemannian manifold which looks locally like ℝ⁴, a geographer or ecologist will tell you that locally space looks like ℝ² (since we live solely on the surface of the Earth). I believe the ℝ² view is also taken by programmers who geotag things (flickr photos, twitter tweets, 4square updates): second basement = 85th floor and canopy = rainforest floor as far as that’s Both perspectives are valid. They’re just different ways of modelling “the world” with tuples. Is it surprising that cold, rigid, soulless mathematics allows for different, contradictory viewpoints? Time is like space in the grand scheme of things, but for life on Earth time-averages and space-averages are very different. 9. Parameter space. The first graphs one learns in school plot input x versus output ƒ(x). But another kind of plot — like a solid liquid gas diagram — plots input a versus input b, with the area coloured or labelled by output ƒ(x). (In the case of matter’s phases, the codomain of ƒ is the set {solid, liquid, gas, plasma} rather than the familiar ℝ.) □ When I push this lever, what happens? What about when I push that one? □ There are connections to Fourier spectrum. 10. Phase space. Paths, orbits, and trajectories taken through other spaces. Like the string of (x₍ᵤ₎,y₍ᵤ₎,z₍ᵤ₎)-coordinates that a water rocket takes across the lawn. Or the path of temperature (temp₍ᵤ₎) during a year in Bloomington. Or the trajectory of the dynamical system (your feelings₍ᵤ₎, your partner’s feelings₍ᵤ₎) representing your marriage. Roger Penrose uses the example of the configuration space of a belt to explain that phases can happen on non-trivial manifolds. (A belt can take on as many configurations as a string, plus it can be twisted into a Moebius band, but if it’s twisted twice that’s the same as twisted zero times.) [Sorry, I don’t have a Unicode character for subscript t, so I used u to represent the time-indexing of path variables. Maybe that’s better anyway, because time isn’t the only possible index.] ₜ 1. Personal space. I forgot personal space. Excuse me; pardon me. All of the spaces above are like an existing nothing. The space between your arm and your chest, the space where I draw—all of these are conceptually “empty” but impinge on and interact with the rest of reality. All of those senses of the word are completely nothing alike to how mathematicians use the word. Mathematicians mean “stuff plus structure to the stuff” which is not at all like the other spaces. Abstract spaces. These are best understood as ordered tuples, i.e. “Things plus the relationships and desired interpretation of those things.” The space—more like “the entire logical universe I’m going to be talking about here”—is supposed to contain EVERYTHING you need, in order to work with any of the parts. So for example to use a division sign ÷, the space must include numbers like ⅓ and ⅝. (Or you could just do without the ÷ sign. You can make a ring that’s not a division ring; look it up.) □ A Banach space is made up of vectors (things that can be added together), is complete (there are enough things that infinite limit sequences make sense), with a notion of distance (norm), but not necessarily angle. Also two things can be 0 distance away from each other without being the same thing. (That’s unlike points in Euclidean space: (2,5,2) is the only thing 0 away from □ A group is complete in the sense that everything you need to do the operation is included. (But not complete in the way that Banach space is complete with respect to sequences converging. Geez, this terminology is overloaded with meanings!) □ A vector space is complete in the same way that a group is. In the abstract sense. Again, a vector is “anything that can be added together”. The vectors’ space completely brings together all the possible sums of any combination of summands. For example, in a 2-space, if you had (1,0) and (0,1) in the space, you would need (1,1) so that the vector space could be complete. (You would also need other stuff.) And if the vector space had a and b, it would need to contain a+b — whatever that is taken to mean — as well as a+b+b+(a+b)+a and so on. In jargon, “closed under addition”. □ A topological space (confusingly, sometimes called “a topology”) is made up of things, bundled together with the necessary overlap, intersection, union, superset, subset concepts so that “connectedness” makes sense. □ A Hilbert space has everything a Banach space does, plus the notion of "angle". (Defining an inner product is as good as defining an angle, because you can infer angle from inner multiplication.) ℂ⁷ is a hilbert space, but the pair ({0, 1, 2}, + mod 2) is not. □ Euclidean space is a flat, rigid, stick-straight, all-joins-square Hilbert space. □ To recap that: vector space ⊰ Banach space ⊰ Hilbert space, where the ⊰ symbol means “is less structured than”. Topological spaces can be even more unstructured than a vector space. Wikipedia explains all of the T0 ⊰ T1 ⊰ T2 ⊰ T2.5 ⊰ T3 ⊰ T3.5 ⊰ T4 ⊰ T5 ⊰ T6 progression which was thoroughly explored during the 20th century. (Those spaces differ in how separated “neighbours” are taken to be.) I don’t mean to imply that these spaces can only be thought of as tuples: ({things}, operations). There are categorical ways to understand them which may be better. But don’t look at me; ask the 1. Lastly, sometimes ‘space’ just means a collection of related things, without necessarily specifying, like above, the tools and viewpoints that we take to their relationships. □ The space of all possible faces. □ The space of all possible boyfriends. □ The space of all possible songs. □ The space of all possible sentences. □ Qualia space, if you’re a theorist of consciousness. □ The space of all possible romantic relationships. □ The space of all possible computer programs of length 17239 bytes. □ Whatever space politics occupies. (And we could debate about that.) □ (consumption, leisure, utility) space □ The space of all possible strategy pairs. □ The space of all possible wealth distributions that sum to W. □ The space of all bounded functions. □ The space of all 8×8 matrices over the field ℤ₁₁. □ The space of all polynomials. □ The space of all continuous functions from [0,1] → [0,1]. □ The space of all square integrable functions. □ The space of all bounded linear operators. □ The space of all possible models of ______. □ The space of all legal configurations of the Rubik’s cube. (Some of these may be assumed to come packaged with a particular set of interpretations as in the previous ol:li.) Vectors, concretely, are arrows, with a head and a tail. If two arrows share a tail, then you can measure the angle between them. The length of the arrow represents the magnitude of the vector. The modern abstract view is much more interesting but let’s start at the beginning. Force vectors Originally vectors were conceived as a force applied at a point. As in, “That lawn ain’t mowing itself, boy. Now you git over there and apply a continuous stream of vectors to that lawnmower, before I apply a high-magnitude vector to your bee-hind!” Thanks Galileo, totally gonna get you back, man The Galilean idea of splitting a point into its x-coordinate, y-coordinate and z-coordinate works with vectors as well. “Apply a force that totals 5 foot-pounds / second² in the x direction and 2 foot-pounds / second² in the y direction”, for instance. Therefore, both points and vectors benefit from adding more dimensions to Galileo’s “coordinate system”. Add a w dimension, a q dimension, a ξ dimension — and it’s up to you to determine what those things can mean. If a vector can be described as (5, 2, 0), then why not make a vector that’s (5, 2, 0, 1.1, 2.2, 19, 0, 0, 0, 3)? And so on. 4th Dimension Plus So that’s how you get to 4-D vectors, 13-D vectors, and 11,929-D vectors. But the really interesting stuff comes from considering ∞-dimensional vectors. That opens up functional space, and sooooo many things in life are functions. (Interesting stuff also happens when you make vectors out of things that are not traditionally conceived to be “numbers”. Another post.) In the most general sense, vectors are things that can be added together. The modern, abstract view includes as vectors: • lists • 1-D arrays • bitstreams • linear functionals • force vectors • decisions • desires • the flow of heat • dinosaur tracks • tying a knot • economic transactions • a story or article, in any language • a poem • water flowing • one instant during an argument • a curve • probabilities associated with various outcomes • marketing data • statistical observations • bids and asks • waveforms • songs • heartbeats • one solution of a differential equation • a polynomial • a jpeg or bitmap of the Mona Lisa • a set of instructions • a dance move • someone’s signature • a secret message • a Fourier decomposition • turning your mattress (which you’re apparently supposed to do once a season) • intentions • electromagnetic flux • part of a trajectory • one wisp of the wind • states of affairs • distortions in a crystal lattice • a rotation of Rubik’s cube • neuronal spike-trains — so, thoughts? perceptions? Things you can do with vectors Given two vectors, you should be able to take their outer product or their inner product. The inner product allows you to measure the angle between two vectors. If the inner product makes sense, then the space you are playing in has geometry. (Not all spaces have geometry — some just have And — this is weird — if the concept of angle applies, then the concept of length applies as well. Don’t ask me why; the symbols just work that way. But the “length” of a song (one of my for-instances above) would not be something like 2:43. The magnitude of a song vector would be the total amount of energy in the sound wave | compression wave. $\dpi{300} \bg_white \| \text{song} \| = \int \text{compression wave}$ What is the angle between two songs, two spike-trains, two security prices? What is the angle between two heartbeats? It’s the correlation between them. Also, you can do linear algebra on vectors — provided they’re coming out of the same point. Some might say that the ability to do linear algebra on something is what makes a vector. That can mean different things in different spaces — like maybe you’re superposing wave-forms, or maybe you’re converting bitmap images to JPEG. Or maybe you’re Photoshopping an existing JPEG. Oh, man, Photoshop is so math-y. Shearing the mona lisa (linear algebra on an image — from the Wikipedia page on eigenvectors, one of which is the red arrow)
{"url":"http://isomorphismes.tumblr.com/tagged/functional+analysis","timestamp":"2014-04-18T05:51:38Z","content_type":null,"content_length":"201465","record_id":"<urn:uuid:a02075fd-11f3-4556-bac6-142048c0c967>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Bayesian vs. Frequentist: Is there any "there" there? The Bayesian/Frequentist thing has been in the news/blogs recently. Nate Silver's book (which I have not yet read btw) comes out strongly in favor of the Bayesian approach, which has seen some pushback from skeptics at the New Yorker . Meanwhile, Larry Wasserman says Nate Silver is really a frequentist (though Andrew Gelman disagrees), XKCD makes fun of Frequentists quite unfairly, and Brad DeLong suggests a third way that I kind of like. Also, Larry Wasserman gripes about people confusing the two techniques, and Andrew Gelman cautions that Bayesian inference is more a matter of taste tan a true revolution . If you're a stats or probability nerd, dive in and have fun. I'm by no means an expert in this field, so my take is going to be less than professional. But my impression is that although the Bayesian/Frequentist debate is interesting and intellectually fun, there's really not much "there" there...a sea change in statistical methods is not going to produce big leaps in the performance of statistical models or the reliability of statisticians' conclusions about the world. Why do I think this? Basically, because Bayesian inference has been around for a while - several decades, in fact - and people still do Frequentist inference. If Bayesian inference was clearly and obviously better, Frequentist inference would be a thing of the past. The fact that both still coexist strongly hints that either the difference is a matter of taste, or else the two methods are of different utility in different situations. So, my prior is that despite being so-hip-right-now, Bayesian is not the Statistical Jesus. I actually have some other reasons for thinking this. It seems to me that the big difference between Bayesian and Frequentist generally comes when the data is kind of crappy. When you have tons and tons of (very informative) data, your Bayesian priors are going to get swamped by the evidence, and your Frequentist hypothesis tests are going to find everything worth finding (Note: this is actually not always true; see Cosma Shalizi for an extreme example where Bayesian methods fail to draw a simple conclusion from infinite data). The big difference, it seems to me, comes in when you have a bit of data, but not much. When you have a bit of data, but not much, Frequentist - at least, the classical type of hypothesis testing - basically just throws up its hands and says "We don't know." It provides no guidance one way or another as to how to proceed. Bayesian, on the other hand, says "Go with your priors." That gives Bayesian an opportunity to be better than Frequentist - it's often better to temper your judgment with a little bit of data than to throw away the little bit of data. Advantage: Bayesian. BUT, this is dangerous. Sometimes your priors are totally nuts (again, see Shalizi's example for an extreme case of this). In this case, you're in trouble. And here's where I feel like Frequentist might sometimes have an advantage. In Bayesian, you (formally) condition your priors only on the data. In Frequentist, in practice, it seems to me that when the data is not very informative, people also condition their priors on the fact that the data isn't very informative . In other words, if I have a strong prior, and crappy data, in Bayesian I know exactly what to do; I stick with my priors. In Frequentist, nobody tells me what to do, but what I'll probably do is weaken my prior based on the fact that I couldn't find strong support for it. In other words, Bayesians seem in danger of choosing too narrow a definition of what constitutes "data". (I'm sure I've said this clumsily, and a statistician listening to me say this in person would probably smack me in the head. Sorry.) But anyway, it seems to me that the interesting differences between Bayesian and Frequentist depend mainly on the behavior of the scientist in situations where the data is not so awesome. For Bayesian, it's all about what priors you choose. Choose bad priors, and you get bad results...GIGO, basically. For Frequentist, it's about what hypotheses you choose to test, how heavily you penalize Type 1 errors relative to Type 2 errors, and, most crucially, what you do when you don't get clear results. There can be "good Bayesians" and "bad Bayesians", "good Frequentists" and "bad Frequentists". And what's good and bad for each technique can be highly situational. So I'm guessing that the Bayesian/Frequentist thing is mainly a philosophy-of-science question instead of a practical question with a clear answer. But again, I'm not a statistician, and this is just a guess. I'll try to get a real statistician to write a guest post that explores these issues in a more rigorous, well-informed way. : Every actual statistician or econometrician I've talked to about this has said essentially "This debate is old and boring, both approaches have their uses, we've moved on." So this kind of reinforces my prior that there's no "there" there... Update 2 : Andrew Gelman . This part especially caught my eye: One thing I’d like economists to get out of this discussion is: statistical ideas matter. To use Smith’s terminology, there is a there there. P-values are not the foundation of all statistics (indeed analysis of p-values can lead people seriously astray). A statistically significant pattern doesn’t always map to the real world in the way that people claim. Indeed, I’m down on the model of social science in which you try to “prove something” via statistical significance. I prefer the paradigm of exploration and understanding. (See here for an elaboration of this point in the context of a recent controversial example published in an econ journal.) Update 3 : Interestingly, an anonymous commenter writes: Whenever I've done Bayesian estimation of macro models (using Dynare/IRIS or whatever), the estimates hug the priors pretty tight and so it's really not that different from calibration. Update 4 : A commenter points me to this interesting paper by Robert Kass . Abstract: Statistics has moved beyond the frequentist-Bayesian controversies of the past. Where does this leave our ability to interpret results? I suggest that a philosophy compatible with statistical practice, labeled here statistical pragmatism, serves as a foundation for inference. Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with observed data. I argue that introductory courses often mischaracterize the process of statistical inference and I propose an alternative "big picture" depiction. 55 comments: 1. I'm not a statistician either, but as I under it likelihood models are Bayesian models with (invariantly) flat priors, mathematically. Frequentist models are special cases of generalized linear likelihood models. So frequentists are likelihoodists who are Bayesians who a) don't know it; b) implicitly assume that they know nothing about the world when they specify their models (which is Bayesians frequently test their models to see how sensitive they are to the prior, so your concern above is usually not a problem in practice. Bayesians frequently use uninformative priors to "let the data speak", at least as a baseline. The point is that Bayesian models are more flexible, involve more realistic assumptions about the data generating process, yield much more intuitive statistical results (i.e. actual probabilities), and provide much more information about the relationship between variables (i.e. full probability distributions rather than point estimates). That said, the differences between the two from the perspective of inference are usually minor, as you note. 1. Hmm, I'm not sure there can exist such a thing as an "uninformative prior"...and sensitivity tests almost certainly only test a limited set of priors, which can lead to obvious problems... As for providing "much more information" with the same amount of data...most of that "information" is actually just going to be the prior. 2. "Uninformative" in the sense that when a prior is specified in a Bayesian model it is also given a level of confidence. If one wants weak priors -- so weak as to be trivial, even, except perhaps in extreme cases -- one can specify the model that way and still benefit from the other advantages of Bayes. Yes, sensitivity analyses only include a limited set of priors, but frequentist/likelihoodist analysis only has ONE set of priors, which are implicit and atheoretical. Mathematically that's what's happening even if frequentists don't realize it (which they generally don't). Tell me which is more likely to be problematic. I am a utilitarian about this stuff... I see no reason not to specify the model a variety of ways. If the results diverge sharply then you need to figure out why. If they don't then you can take advantage of the flexibility of Bayes to interpret probabilities directly. 3. The part about weak priors wasn't phrased as precisely as it could have been, but hopefully you get the idea. 4. Anonymous7:30 PM I think some people support a way of defining 'uninformative prior' in an (arguably) more rigorous way by means of using Lagrange multipliers and maximum information entropy. In brief you seek the distribution with the maximum entropy given constraints such as (at least) the need for the distribution to sum to 1. Generally this procedure gives results that are more or less what you would expect - for example if p is the probability of a coin producing a 'heads' when flipped, the maximum entropy prior is to assume a constant probability density for p between 0 and 1. It turns out that the maximum entropy probability density distribution for the common case where you want at least defined and integrable first and second moments (but not necessarily any more) is Gaussian, which provides a link to frequentist methods. I think that one reason that Bayesian methods have not become more popular is the computation difficulty of the integrals involved when you get away from toy problems. They can get quite high-dimensional and multi-modal etc. If you use aggressive approximation techniques to attack these integrals you can end up with results that are hard to independently re-create; which may make it hard to analyse errors; which may be unstable to changes in techniques, coding or parameters; or which may have made subtle assumptions (such as fitting Jacobians to modes) that could make the process dubious. 5. Noah, please google 'uninformative prior'. 6. (note - copy and paste with an iPad doesn't work) Noah, the example Cosma gives isn't a failing of bayesian technigues, but rather the fact that a bad model gives bad results, no matter how much data one has. 7. Anonymous8:48 AM The statement that frequentist statistics (maximum likelihood) is the same as Bayesian with a flat prior, that's true only is specific textbook examples, and only if the supposed Bayesian reports only the posterior mean. A significant difference between Bayesian and frequentist statistics is their conception of the state knowledge once the data are in. The Bayesian has a whole posterior distribution. This describes uncertainies as well as means. 2. Aziz6:21 PM I think it is pretty indisputable that the Bayesian interpretation of probability is the correct one. Probability measures a degree of belief, not a proportion of outcomes. The observed probability distribution (P) does not equal the real probability distribution (P*). In the nonlinear and wild world we live in, only continued measurement into the future can give us P* for any future period (generally, we underestimate the tails). In an ultra-controlled non-fat-tailed environment P can look a lot like P* (making the frequentist approach look correct) but even then P* may diverge from P given a large enough data set (one massive rare event can significantly shift tails). Why does the frequentist approach survive? Because it is useful in controlled environments where black swans are negligibly improbable. But it should come, I think, with the above caveat. 1. walt3:21 AM No, the Bayesian interpretation of probability is obviously ridiculous. "Degree of belief" is just some mumbo-jumbo that Bayesians like to say like it means anything. When I say something has 50% probability, then I definitely think that if you do it over and over again, it will happen 50% of the time. It's a statement about long-run averages that it meant to be objective. 2. Aziz4:04 AM Nothing you said refuted anything I said. Frequentism pretends to be objective, and it really isn't because humans are subjective creatures with priors. Bayesianism is at least honest about its priors. 3. I agree with walt. When a Bayesian talks about "real probability distribution", and "continued measurement", he/she IS a frequentist, at least a frequentist in my understanding. One impression of mine is that the Bayesians tend to be more aggressive than the frequentists, and frequentists tend to talk in a humble way. This is understandable, since, as the author of this blog (Smith) said in the post, the Bayesian is an approach which is likely to be used when the data is not great. It is no surprise that a person who is trying to "get something from nothing" tends to be ambitious, and, aggressive. But ultimately I guess there might be some differences in the brain structure between the frequentists and Bayesians. Anyone interested in testing this, using MRI for example? 3. Anonymous6:31 PM The basic philosophy of Bayesianism seems to appeal to me more, just because you have to put your prior assumptions out there. Most of the critique that I've seen comes from having intentionally stupid priors, but questioning assumptions should be a big part of modeling. 4. Anonymous7:46 PM I dont know how noah smith can think he is an economist... its so embarrasing, please bury yourself or put glue in your mouth 1. OK, will do... 2. walt3:22 AM No, dude, don't do that! That might kill you! 5. I think you've missed an important point about Bayesian statistics--essentially, choosing a prior lets the statistician to formally incorporate information we already know into the analysis. This prior knowledge could come from other research papers on the topic, prior stages in the same experiment (very useful in clinical trials) or maybe just intuitive logic. Frequentists do exactly the same thing, but the difference is that they aren't supposed to--it technically invalidates their results. Consider for example a placebo-controlled clinical trial. The treatment and placebo groups are never truly random--ethics dictate that we balance the two groups to look as similar as possible, because this will increase the statistical power and help us identify potential risks of treatment much sooner, potentially saving lives. At the end of the trial we get a frequentist p-value of, say, 0.05, but in reality this is wrong--we are pretty sure that because of balancing the two groups the real p-value is less than 0.05, but a frequentist has no way of knowing what it actually is. My understanding is that in Bayesian statistics this is no longer a concern--we know that our results are accurate given all the available information, including both the prior and the data. Also, I mentioned the need for a clinical trial to do ongoing statistics to identify risks to the treatment group as soon as possible. Strictly speaking, this ongoing analysis also invalidates the frequentist results--continuation of the trial should be completely independent of the results of the trial, and not shut down when we have evidence of harms to the patients. Bayesian statistics, by contrast, allows us to do ongoing analysis without in anyway invalidating the results. And since Bayesian inferences incorporate both the prior information and the data, it can statistically identify risks to patients in the trial much sooner than can frequentist methods. In that respect, it could actually be considered unethical to rely on frequentist methods for human-subjects research that involves more than minimal risk. More generally, I think the point needs to be made that frequentist probability theory is really just a subset of Bayesian theory but with lots of implicit assumptions about the prior that aren't necessarily justifiable. That said, you are right--Bayesian statistics won't be able to tell us a whole lot that we didn't already know. For the most part it tells us the same things but with a purer internal logic. 1. "formally incorporate information we already know" That's nice and everything, but what's unclear to me is how our prior knowledge, usually vague and diffuse (otherwise there would be no point in further analysis), is supposed to translate into precise distributions with precise parameters. Is it just a coincidence that Bayesians always use same few well-known textbook distributions (gaussian, beta, gamma...) for their priors? Of course, those particular priors are approximations used for analytic convenience, and that's fine, but it still kind of subtracts from advantages of "purer internal logic". Bayesian paradigm is a nice model of how people update beliefs, in the same sense as utility maximization is a nice model of how people decide what to buy - but it's not obvious to me whether we really should take it literally (just as we don't literally solve constrained optimization problems while shopping). As for balancing control and treatment groups, it's not like people are unaware of it. Searching for "stratified randomization" yields quite a few results. 6. If you weaken your priors due to lack of evidence, I posit you are a Bayesian. The Frequentist takes his hypothesis and data as fixed. If he chooses to alter them, he is doing another experiment. In this, a Bayesian is just a Frequentist doing multiple experiments in succession, often on the same data, whereas the Frequentist would be concerned the change in hypothesis might invalidate the data collection. 7. Minor point: I've heard that one reason Bayesian statistics hasn't been used a lot more in economics is simply because, until the last 20(?) years or so, it was very hard to implement computationally. The falling cost of computing power has really opened things up now, because (I think) you can implement a lot of analytically intractable stuff numerically or by simulation. Now there's a bit of slow adjustment going on, as people trained in frequentist methodology update their skill sets. Anyway, that's what they're telling me in some of my econometrics classes. So, maybe Bayesian statistics hasn't had really had several decades to try and prove its superiority. That said, it has seen a lot of use in the past decade, and I agree with you that it doesn't seem to be a game-changer. 1. I'm sure that's true for sophisticated statistical models but not all Bayesian statistics require difficult computation. My undergrad probability theory professor forced us to crank out some basic Bayesian statistics by hand on exams. I'd say one reason Bayesian statistics isn't popular in economics could also be that it is already a profession plagued by too many subjective biases. Since the field has political ramifications, economists want to shield their results from charges that they chose politically motivated priors. And invariably there will be some econ papers that actually do choose politically motivated priors... 2. I'd say one reason Bayesian statistics isn't popular in economics could also be that it is already a profession plagued by too many subjective biases. Since the field has political ramifications, economists want to shield their results from charges that they chose politically motivated priors. And invariably there will be some econ papers that actually do choose politically motivated priors... This rings true to me...very perceptive... 8. Noah, There are at least two kinds of debate that look the same but are not. The philosophical Bayesian x Frequentist and the practical silly "Null Hypothesis Significance Testing (NHST)" x "Please, think about what the hell you're doing". Nate Silver points most to the second debate. When he says frequentism, he is really saying silly NHST. Of course, some people get mad with that, because they claim the name "frequentist" to themselves and do not like when bad practice is associated with that name. Now, why it is important to state the difference between this two debates? Take your statement for example: "When you have a bit of data, but not much, Frequentist - at least, the classical type of hypothesis testing - basically just throws up its hands and says "We don't know."" That is not true in practice. When you have a bit of data, you usually do not reject the Null Hypothesis. And what do people do? They don't say "we don't know", they say that there is evidence in favor of the null, whithout ever checking the sensitivity (power or severity) of the test (needed in a coherent frequentist approach), nor, in Bayesian terms, the prior probability of the hypothesis... So both would make an statement about reality with crappy data. 1. When you have a bit of data, you usually do not reject the Null Hypothesis. And what do people do? They don't say "we don't know", they say that there is evidence in favor of the null, whithout ever checking the sensitivity (power or severity) of the test (needed in a coherent frequentist approach) Yes. This is a very bad way to do things, just as it is bad in Bayesian to start with an extremely strong prior and conclude in favor of that prior. In fact it's a very similar mistake. 9. Posting the comment that I have posted on Brad DeLong's: When you see people doing significance testing in applied work, how often do you see they stating the sensitivity (power or severity) of the test against meaninful alternatives? (I'll answer that... from 80 to 90% do not even mention it, see: http://repositorio.bce.unb.br/handle/10482/11230 (portuguese) or McCloskey "The Standard Error of Regressions") This is not because there are not papers teaching how to do it (at least approximately): e.g. see Andrews (http://ideas.repec.org/a/ecm/emetrp/v57y1989i5p1059-90.html) And there are plenty of papers with meaningless debates going on with underpowered test, for example, the growth debate arround institutions x geography x culture... 10. Anonymous9:21 PM I would appreciate your moving from the abstract to the specific. Slackwire has up a very interesting graph, on the decline in bank lending for tangible capital/investment? You have written about the economy breaking in the early 1970s, which the graph confirms to my eye. Is this graph Bayesian/Frequentist or Delong's third way or something else entirely 11. If the probability of a hypothesis being true given the data is what you want to know, then presenting anything else (e.g. the probability that your data would be different if an alternative hypothesis were true) isn't being frequentist so much as it is being a bad statistician. However in a lot of cases neither of these questions are precisely what matters and you are really using statistics to get at something a little fuzzier -- have I collected enough data, am I taking reasonable steps to prevent myself from seeing patterns in noise, etc. Often when testing scientific hypotheses the precise probability is uninteresting but a significance test is important, and here either approach can help. In fact you would not want to publish a result that passed a bayesian test but failed a frequentist test, not because the conclusion is particularly likely to be wrong but saying "my experiment doesn't add much certainty but given what we already know the conclusion is quite certain" is not an interesting result. I don't agree with this, though: It seems to me that the big difference between Bayesian and Frequentist generally comes when the data is kind of crappy. The difference also comes when the data on priors is good, and especially when the prior is lopsided. The latter may sound like a corner case, but it is the normal case in medicine and there are plenty of other cases in the real world. I (well, not me exactly) had a health scare recently, and it would have seemed much worse had we been presented with inappropriate frequentist statistics (the false positive rate of this test is low, so null hypothesis rejected with high confidence!) rather than the prior and posterior probabilities we were presented with (In the absence of this test you would be very unlikely to have this; given the result there is a 1 in 30 chance you have this.) Of course, DeLong's "value of being right" considerations applied here and we opted to do further testing to make sure we were fine, but it goes to show the difference can be big even with good data. 12. That seems right to me. Although I'm not doing those sorts of analysis much anymore so I'm not sure my view's worth much. Seems to me in practice most scientists practice a staggering array of inconsistency when it comes to either epistemology or metaphysics. (And of course one needn't engage in the philosophical debate here) So just as say a typical physicist engages in an incoherent mix of instrumentalist, empirical and realist approaches to physics I suspect the average science (I'll even throw economists in there) engages in a mix of Bayesian and Frequentist approaches. As you say, often those more sympathetic to Frequentist approaches simply weaken their priors. At least that's what I see in practice although not always what is argued for. 13. My old adviser, Chris Lamereoux, was a big Bayesian, with some well known Bayesian papers. I talked with him about this years ago, about the obviousness of including important prior information, and he said the smart sensible thing; good frequentest statisticians and econometricians of course consider apriori information, but do so in an informal and less rigid way. 14. Let Pr(H) be our degree of belief in hypothesis H; Pr(E) our degree of belief in evidence E. Suppose we perform experiments that repeatedly demonstrate E. We may model this as Pr(E) -> 1. Recall that Pr(H) = Pr(H|E)*Pr(E) + Pr(H|~E)*Pr(~E) [the law of total probability, restated in terms of conditional probabilities]. As Pr(E) -> 1, Pr(~E) -> 0, and Pr(H|~E) -> 0. Thus, Pr(H) -> Pr(H|E). In short, as we become very nearly certain of E, our degree of belief in H ought to condition on E. Frequentists don't really have grounds for disagreeing with the above. Most frequentist procedures can be defended on Bayesian grounds, provided the appropriate loss function and prior, so you're correct that this is not a major practical issue for statisticians. The problem occurs when you're trying to teach a computer to learn from its observations. The only way to do frequentist inference sensibly is to implicitly be a reasonable Bayesian. Without making this explicit, though, a computer is not going to do frequentist inferences sensibly without a human going through its SAS output or the like. 15. That's nice and everything, but what's unclear to me is how our prior knowledge, usually vague and diffuse (otherwise there would be no point in further analysis), is supposed to translate into precise distributions with precise parameters. BINGO. The practical differences are more political than anything else, possibly because Bayesians suffer from what I'll call 'Frequentist envy'. We've all seen this one a million times: some study that claims statistical significance with p set at - surprise! - 0.05. Queue much gnashing of teeth from the stat folks about how this runs against the grain of good statistical practices. Then the Bayesians jump in and start sneering about this thing called 'priors'. 'Priors'? Do you mean the application of domain-specific knowledge that could just as easily been done with the old way, and should have been? That's some mighty thin gruel there, yet that's what the debate really seems to come down to so often in practice. Frequentists are well aware of Bayes theorem, btw, and use it quite routinely. Something that Bayesians like to pretend doesn't happen all that often. And most people I know use Bayesian methods when the situation calls for it and Frequentist methods likewise. They're just tools in the toolbox after all. 16. Ack, I tire of this old debate too. And I agree that there is far more heat than light to be found in it. But you do kind of walk all over the underlying philosophies, as well as some practical issues. A few points: 1.) Frequentists and Bayesians may have nearly identical uncertainty measures in most cases but they interpret them differently. This would see to have little practical difference in how the measures are applied. 2.) philosophical frequentists hate it when anyone uses their uncertainty measures as if they are Bayesian measures (which is typical); the converse is not true. 3.) Bayesian tools, like Makov Chain Monte Carlo, can be useful even if you're a frequentist at heart. 17. When I don't know my audience I go for the low-brow approach (not wrong, not confusing, and hopefully, not insulting), which is the case here. So looking back over the comments, I think Carlos nails it at comments 8:18 and 8:45. Yes, there are significant differences between the two approaches, and yes, there are times when one would clearly prefer one over the other. But 95% of the time what you see is meaningless bickering ;-) The Nate Silver thing falls into that 95%. Imho. 18. Alan Martin3:21 AM For me the reason for the Bayesian surge has been computers. In my job as a Bioinformatician, I built a Bayesian generative model, the results of which are sold for 70 dollars a pop wholesale. In 2007 when I finished the work it took 15 minutes to run. This wouldn't have been possible with Bayes's own calculating methods (paper and pen). The main reason for the extensive computer involvement is the calculation of likelihoods based upon multiple non-independent data sources that can't be dealt with linearly. So hidden-markov-models mixed with generative elements produce a far superior result. None of this would be possible with just frequentist stats. So I say that the reason for the rise of the Bayesians isn't some shift in opinion about how science or statistics should be done but that it's the only proper way to use the incidental data we have and it's only recently that we can actually succeed. 19. I think a key missing element was discovered by Mandelbrot with his research on fractals, which I will get to. But first what if the events cannot be precisely measured? In this case a frequentist interpretation of “proof” is in principle impossible, and we then become Bayesian using subjective data and whatever additional data which we 'deem' relevant to elements of the analysis to form a “prior.” In a review of Mandelbrot’s The Misbehavior of Markets, Taleb offers an interesting formula that he says: “seems to have fooled behavioral finance researchers.” He writes: “A simple implication of the confusion about risk measurement applies to the research-papers-and-tenure-generating equity premium puzzle. It seems to have fooled economists on both sides of the fence (both neoclassical and behavioral finance researchers). They wonder why stocks, adjusted for risks, yield so much more than bonds and come up with theories to “explain” such anomalies. Yet the risk-adjustment can be faulty: take away the Gaussian assumption and the puzzle disappears. Ironically, this simple idea makes a greater contribution to your financial welfare than volumes of self-canceling trading advice.” The pdf of the review is here (http://www.fooledbyrandomness.com/mandelbrotandhudson.pdf) So my question is should we move beyond Bayesian and Frequentist when looking at probabilities and look at fractals otherwise we omit what Mandelbrot called “roughness.” In other words research focuses too much on smoothness, bell curves and the golden mean and if we look at roughness in far more detail will we will be able to provide greater insight to the matter at hand? 20. I really think people should take a look at this Chris Sims' text, found at http://sims.princeton.edu/yftp/EmetSoc607/AppliedBayes.pdf Also, the open letter by Kruschke is worth your while: 21. Min6:14 AM "Basically, because Bayesian inference has been around for a while - several decades, in fact" How about centuries? ;) The frequentist view was a reaction against the Bayesian view, which came to be perceived as subjective. What we are seeing now is a Bayesian revival. Since this is an economics blog, let me highly recommend Keynes's book, "A Treatise on Probability". Keynes was not a mainstream Bayesian, but he grappled with the problems of Bayesianism. Because the frequentist view was so dominant for much of the 20th century, there is a disconnect between modern Bayesianism and earlier writers, such as Keynes. From what I have seen in recent discussions, it seems that modern Bayesians have gone back to simple prior distributions, something that both Keynes and I. J. Good rejected, in different ways. Perhaps we will see some Hegelian synthesis. (Moi, I think that we will come to realize that neither Bayesian nor Fisherian statistics can deliver what they promise.) 22. Min is right. Bayesian probability has been around formally for at least 350 years, and the philosophical idea since before the days of Aristotle. I can tell you from experience that Bayesian probability is way more important in the areas of quality control and environmental impact measurements. Noah touches on the reason. You can't assume your process is unchanging or that new pollutants haven't entered the environment. You have to assume they can and eventually will. Essentially, every sample out of spec has to be treated as evidence of a changed process. 23. Lulz4l1f38:49 AM I liked Nate Silver's book. An unexpectedly good read. I received the book as a gift and didn't buy it myself, and I expected it to be a dry read, but it wasn't at all. I think the book has the potential for a broad appeal, and when you consider the fact that books that revolve around topics like statistical analysis and Bayesian infeference are usually pompous, inaccessible, and dull beyond belief, I think Nate should be commended for writing something that makes these concepts accessible to a wide audience. 24. Anonymous9:54 AM Noah, statistics is not science, they are lies, the worst kinds of lies (Mark Twain: lies, damn lies, and statistics). Someone above, praising the Bayesian view, forgot to read the 2007 prospectus, which actually reads: "2 Recent Successes Macro policy modeling • The ECB and the New York Fed have research groups working on models using a Bayesian approach and meant to integrate into the regular policy cycle. Other central banks and academic researchers are also working on models using these methods http://sims.princeton.edu/yftp/EmetSoc607/AppliedBayes.pdf, page 6 Thank you very much but that track record says it all about statistics. No thanks. From the entire Western experience, statistics show only one thing about statistics---that statisticians lie, always selecting what they count, blah, blah, blah and how they analyze such to come up with the conclusion they were determined to reach 1. Stupidity-catcher10:46 AM Statistics are not science yet scientists use this unscientific technique to test their theories as part of the scientific method. That makes no sense but, hey, if Mark Twain said so, it must be true! 25. Noah, probably repeating some other comments, but here goes. First, Bayesian methods have been around a long time but only recently have emerged because computing costs have come down. It would be interesting to see whether there are now proportionally more papers using Bayesian methods since computing cost went down and if that trend is increasing or stabilizing. Second, what Matthew Martin said above. Frequentist effectively do the same thing as Bayesians, but pretend otherwise. they build empirical models and throw variables in and out based on some implicit prior which never get reported in their write ups. Third, my understanding is that Bayesian models generally provide better forecast than frequentist's models. (Not that either are spectacular). 1. CA11:34 AM David, my understanding is that the variables frequentists throw in come, or at least should come, from a consistent theoretical model. Do Bayesians provide a justification for their priors? Also, in your opinion, are the forecasts of the Bayesiand models better enough to justify the computational costs? 2. "Frequentist effectively do the same thing as Bayesians, but pretend otherwise. they build empirical models and throw variables in and out based on some implicit prior which never get reported in their write ups." This is what I understood to be one of Silver's main points: be up front about your "priors," assumptions and context. Kind of like when bloggers and/or journalists say "full disclosure." The history is that the Frequentist tradtion wanted to be more mathematical and objective, whereas the Bayesians argued that one always has biases and it's better if you are conscious of what they are. 3. CA, you are correct that they should come from a theory, but in practice--I speak from experience--it is never that clean. Here is another way of seeing this. There are some empirical studies whose results fit a nice story, but whose results upon closer inspection are not robust (e.g. throw in an extra lag or drop one variable and results fundamentally change). How long did these authors work to make their empirical model turn out just right so they could publish a great finding? These authors are effectively just finding through trial and error empirical results to fit their own "priors". At least Bayesians are being upfront about what they are doing. This is what appeals to me about the Bayesian approach. I don't know enough to answer your second question, but my sense is that computational costs are now very low for doing Bayesian forecasts. For example, though I generally work with frequentist time series methods I can easily set up a Bayesian vector autoregression using RATS software 4. CA12:48 PM "How long did these authors work to make their empirical model turn out just right so they could publish a great finding?" David, yes, I agree. I too am guilty of trying to find if any specification would work. BUT, when I write the paper I present all reasonable specifications based on theory. And if the results change when, say, I drop a variable, I try to explain why (e.g. that I only have few observations and the variable I dropped is correlated with another). In other words, I present the information and let the reader decide. Yet despite trying to be thorough, I still usually get suggestions from referees asking about this or that. If I add a lag, the referee wants to know why. If I detrend, the referee wants to know why I used this filter and not a different one. It is near impossible to publish in a decent journal if you have only tried a couple of specifications (even though you may present only a couple if your results are robust). 5. CA1:10 PM Forgot to mention that, however, referees do "demand" more robustness tests when the results go against their prior (e.g. against what theory predicts). So I am somewhat sympathetic to what you are saying. 6. CA, What you said, about referees challenging your assumptions, asking "what about this?" and "what about that?" rings true to me. You have experienced it as part of what sounds like responsible peer review of scholarly articles. I have experienced exactly what you describe when presenting my statistical analysis and recommendations for business and manufacturing work. If my results are contrary to precedent, I need to be prepared with examples of different specifications that I used, to establish robustness. Or be willing to run this or that to convince others that my findings are valid. Subject matter knowledge helps a lot, to be able to respond effectively. Yes, I realize that what I just said seems to justify suspicion of frequentists, and validate a Bayesian approach! But that will remain a problem, whether Bayesian or frequentist, when applying quantitative analysis to human behavior, especially complex dynamic systems. There seems to be an underlying mistrust of statistical analysis expressed in some of these comments e.g. the Mark Twain quote, or even the supposedly greater honesty of Bayesians and their priors. Statisticians are not inherently deceitful, nor are we practitioners of pseudo-science. I've used probability models for estimating hazard rates, and time to failure, for mobile phone manufacturing (Weibull distribution, I think). It works. Or rather, it is sufficiently effective to be useful in a very practical way. Economic models are a different matter than cell phones or disk drive reliability. Exogenous influences are important, and human behavior is difficult to quantify. 26. Anonymous12:58 PM Scanned quickly to see if some one commented on your picture at the beginning. Ha Ha, In The Beginning! My reptile brain rang up Einstein immediately and his comment that God does not roll dice. I don't know jack about statistics. Is God a Bayesian or a frequentist? 27. Anonymous3:02 PM Whenever I've done Bayesian estimation of macro models (using Dynare/IRIS or whatever), the estimates hug the priors pretty tight and so it's really not that different from calibration. 28. Minor technical point on discussion of Shalizi. Infinity can actually make things worse for Bayesians, particularly infinite dimensional space. So, it is an old and well known result by Diaconiis and Freedman that if the support is discontinuous and one is using an infinite-sided die, one may not get convergence. The true answer may be 0.5, but one might converge on an oscillation between 0.25 and 0.75, for example using Bayesian metnods. However, it is true that this depends on the matter of having a prior that it is "too far off." If one starts out within the continuous portion of the support that contains 0.5, one will 1. Min1:14 AM To deal with infinities Jaynes recommends starting with finite models and taking them to the limit. There is wisdom there. :) 29. Anonymous6:19 PM In practice, everyone's a statistical pragmatist nowadays anyway. See this paper by Robert Kass: http://arxiv.org/abs/1106.2895 30. Thanks for the post. As a statistician, I think it's nice to see these issues being discussed. However, I think a lot of what has been written both in the post and in the comments is based on a few misconceptions. I think Andrew Gelman's comment did a nice job (as usual) of addressing some of them. To me, his most important point, and the one that I would have raised had he not done so, is this: "...non-Bayeisan work in areas such as wavelets, lasso, etc., are full of regularization ideas that are central to Bayes as well. Or, consider work in multiple comparisons, a problem that Bayesians attack using hierarchical models. And non-Bayesians use the false discovery rate, which has many similarities to the Bayesian approach (as has been noted by Efron and others)." The idea of "shrinkage" or "borrowing strength" is so pervasive in statistics (at least among those who know what they are doing) that it frequently blurs practical distinctions between Bayesian and non-Bayesian analyses. A key compromise is empirical Bayes procedures, which is a favorite strategy of some of our most famous luminaries. Commenter Min mentioned a "Hegelian synthesis." Empirical Bayes is one such synthesis. Reference priors is another. Which brings me to another important point. In the post and in the comments, it is assumed that priors are necessarily somehow invented by the analyst and implied that rigor in this regard is impossible. This is completely wrong. This is a long literature on "reference" priors, which are meant to be default choices when the analyst is unwilling to use subjective priors. An overlapping idea is "non-informative" priors, which are non-informative in a precise and mathematically well-defined sense (actually several different senses, depending on the details). Also, I want to note that it can be proven that Bayes procedures are provably superior to standard frequentist procedures, even when evaluated using frequentist criteria. This is related to shrinkage, empirical Bayes, and all the rest. Wikipedia "admissibility" or "James-Stein" to get a sense for why. Finally, the statement, "If Bayesian inference was clearly and obviously better, Frequentist inference would be a thing of the past," misses a lot of historical context. Nobody knew how to fit non-trivial Bayesian models until 1990 brought is the Gibbs sampler. This is not a matter of computing power, as some have suggested -- the issue was more fundamental. The great Brad Efron wrote a piece called "Why isn't everyone a Bayesian" back in 1986. Despite not being a Bayesian, he doesn't come up with a particularly compelling answer to his own question (http://www.stat.duke.edu/courses/Spring08/sta122/Handouts/EfronWhyEveryone.pdf). One last bit of recommended reading is a piece by Bayarri and Berger (http://www.isds.duke.edu/~berger/papers/ interplay.pdf), who take another stab at this question. 31. One area where the "crappy data" issue becomes extremely important is in pharmaceutical clinical trials. People tend to think that there are two possible outcomes of trials: a) the medication was shown to work or b) the medication was shown to be ineffective. In fact, there is a third possible outcome: c) The trial neither proves nor disproves the hypothesis that the drug works. In practice, outcome (c) is very common. For some indications, it is the most common outcome. This leads to charges that pharma companies intentionally hide trials with negative results. They don't publish all their trials! But it turns out to be really hard to get a journal to accept a paper that basically says, "we ran this trial but didn't learn anything." I forget the exact numbers, but for the trials used to get approval of Prozac, it was something like 2-3 trials with positive outcomes, and 8-10 "failed trials," ie trials which couldn't draw a conclusion one way or the other. This is common in psychiatric medicine. Its hard to consistently identify who has a condition, its hard to determine if the condition improved during the trial, and many patients get better on their own, with no treatment at all (at least temporarily).
{"url":"http://noahpinionblog.blogspot.com/2013/01/bayesian-vs-frequentist-is-there-any.html?showComment=1359380947505","timestamp":"2014-04-19T14:37:07Z","content_type":null,"content_length":"260888","record_id":"<urn:uuid:4d9ff7d4-c928-4877-89be-435d9cdb9771>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Process Validation: Using Tolerance Intervals for Setting Process Validation Acceptance Criteria - - BioPharm International Equation (1) can be used to compute tolerance intervals for the combined large-scale and bench data sets. There are several values that can be considered for.Y mean One approach is to center at the predicted value of the PP when all OPs are at setpoint values. If it is known that there is an "offset" between the bench-scale data such as Edge of Range (EOR) and robustness (ROB); and large-scale (GMP and non-GMP) data, it might be better to center the interval at the large-scale mean. Figure 2 presents such a situation for one PP. The unweighted average of the four groups is 11.4. It is noted that all values of the large-scale GMP runs are less than 11.4, so that one may wish to center the interval at a lesser value. The p-value for the test of equal means among the four groups is less than 0.03 for this example. Alternative centering rules may also be considered when different lots of a key raw material were used for each of the large-scale runs, but the same material (but from a different lot) was used for all of the bench scale runs. Here it might be best to center the interval on a linear combination of the large-scale and bench-scale means. Scenario 3: In this scenario, tolerance intervals are calculated accounting for OPs that vary across the OR. Typically, OPs will vary around the setpoint value due to instrument and equipment tolerances and other factors. Thus, a tolerance interval that describes behavior of the PP must adequately account for this variation in the OP. The formula in Equation (1) will not adequately account for the propagation of error that results from movement in the OPs. To compute the tolerance interval in this situation, a simulation-based approach is necessary. Briefly, one simulates a set of values for the OPs consistent with the expected movement of the OPs within the OR. A regression model based on characterization data is then used to predict the value of the PP for the simulated OP values. This process is repeated many times to construct an empirical distribution of the PP values. From this simulated distribution, one selects the range that covers the desired proportion of the population. A more detailed algorithm for this process is presented in the example at the end of the paper. One issue of interest in any computation of a tolerance interval is the proportion of area contained in the interval and the level of confidence that the reported interval is correct. We have found that two-sided intervals containing 99% (p = 99) of the population with an individual confidence level of 95% (α = 0.05) provide reasonable VAC limits. The decision to include 99% of the population is based on the desire to have limits similar conceptually to those used in process control, but not so wide as to be uninformative. In process control, limits are established to include approximately 99.7% of the data. However, tolerance intervals that cover the middle 99.7% are extremely wide for data sets of the size typically available from process characterization. The 99% coverage used in the tolerance interval represents a good compromise that provides meaningful intervals. If there are many critical and key PPs, one may choose to adjust the individual confidence levels in order to obtain a desired overall confidence level on the entire set of PPs. A simple method for handling this "multiplicity" problem is to use the Bonferroni inequality.^8 For example, assume it is required to have VAC for 10 key and critical PPs. In order to achieve an overall confidence of at least 95% on the set of 10 PPs, individual tolerance intervals must be calculated with a confidence coefficient of: 100(1 – (0.05/10)) = 99.5%.
{"url":"http://www.biopharminternational.com/biopharm/article/articleDetail.jsp?id=432390&sk=&date=&pageID=3","timestamp":"2014-04-20T16:56:41Z","content_type":null,"content_length":"144858","record_id":"<urn:uuid:6d964900-0813-45c0-8519-87e546cc0ab9>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
Concave polygons Concave polygons are polygons for which a line segment joining any two points in the interior does not lies completely within the figure The word interior is important. You cannot choose one point inside and one point outside the figure The following figure is concave: Segment AB does not entirely lie within the polygon. That is why the polygon is concave Notice that it is quite possible to find other segments that will lie inside the figure, such as segment FE. However, if you can find at least one segment that does not lie within the figure, the figure is concave The following figure is also concave It is easy to construct a concave figure if the figure has at least 4 sides Just make sure that one interior angle is bigger than 180 degrees. In other words, an interior angle should be a reflex angle Why am I saying at least 4 sides? It is possible to make a concave triangle? The answer is no! Since the sum of the interior angles in any triangle must add up to 180 degrees, no interior angles can be more than 180. It is impossible! Fun math game: Destroy numbered balls by adding to 10
{"url":"http://www.basic-mathematics.com/concave-polygons.html","timestamp":"2014-04-20T05:54:41Z","content_type":null,"content_length":"34354","record_id":"<urn:uuid:667c7d25-9b19-4790-87bd-aeecc12a2dc7>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
calculate difference between two dates Displaying 1 - 50 of about 8509 Related Tutorials. calculate difference between two dates hi, I was actually working on to calculate the number of days between two dates of dd/mm/yyyy format... the difference between the two dates to calculate the difference between two dates in java to write a function which calculates the difference between 2 different dates 1.The function...; // Calculate difference in seconds long diffSeconds = day / 1000 calculate difference between two time in jsp How to calculate difference between two dates Javascript calculate number of days between two dates In this tutorial, you will learn how to calculate number of days between two dates.For this, you need... using Date.getTime() function. Once both Dates have been converted, subtracting Hi .Difference between two Dates Hi Friend.... Thanks for ur Very good response.. Can u plz guide me the following Program.... difference between two dates.. I need to display the number of days by Each Month on that. The thing is, that I need to find the difference between the two dates in JAVA... the difference between two dates. public class DateDiffDemo { public static...Difference in two dates Hello there once again........ Dear Sir mysql difference between two numbers How to get total bate difference between two dates for example 1/01/2012 and 1/02/2012 in MYSQL? &nbsp... between two date. The syntax of DATEDIFF is .. SELECT DATEDIFF('2012-01-31 23:59 calculate difference of two times and express it in terms of hours i need jsp code to find out the difference of two times in format hh:mm:ss...(); } } } The above code parse two times and ouputs the difference between two times in terms calculate difference of two times and express it in terms of hours i need jsp code to find out the difference of two times in format hh:mm:ss...(); } } } The above code parse two times and ouputs the difference between two times in terms calculate difference of two times and express it in terms of hours i need jsp code to find out the difference of two times in format hh:mm:ss...(); } } } The above code parse two times and ouputs the difference between two times in terms display dates between two dates hi can any one help me to writing this code, i need to store the dates between to dates so that i can retrive the data from the db using theses dates. its urgent pls tell me as possible Java Swing Date Difference In this tutorial, you will learn how to find the difference between two dates. Here is an example that accepts two dates from... the following method that calculates the date difference in terms of days days between two given dates using PHP How can we know the number of days between two given dates using PHP? Hi friends, Example: <html> <head> <title>Number of days between two difference between any two dates. Understand with Example The Tutorial describe... the time difference between any two consecutive dates. TimeDiff('Year1 Time1', 'Year2 Time2') : The Query performs the time difference between any consecutive JavaScript check if date is between two dates. Check if date is between two dates? <html> <head> <title>Date before...;/head> <body> <h2>Check if date is between two dates</h2&gt calculate difference of two times and express it in terms of hours i need jsp code to find out the difference of two times in format hh:mm:ss and want to round off the result to a hour value.it's very urgent.plz someone help me calculate difference of two times and express it in terms of hours i need jsp code to find out the difference of two times in format hh:mm:ss and want to round off the result to a hour value.it's very urgent.plz someone help me validation on dates how to find difference in days between two dates in java. Hi Friend, Please visit the following link: http://www.roseindia.net/java/java-get-example/number-days-between.shtml How to select Data Between Two dates in Java, MySQL How to select Data Between Two dates in Java, MySQL? Thanks in advance. http://www.v7n.com/forums/coding-forum/294668-how-select-data-between-two- mysql between dates not between dates I am trying to list the data between dates and not between dates using join method ..but that not working for me Calculate Month Difference in mysql hi , i am rahul.Can anybody plz help me.i am calculating month difference in mysql5.0 procedure . i have done coding for it. it's counting difference between months. but what i want ; In this section, we are going to determine the difference between two dates...;"); document.write("Difference between two dates is :"+ Math.ceil... this method in order to get the number of days between two dates. getFullYear Jdbc connectivity in java to oracle for retrieving data between two dates Dear Sir, I Need a program in which i want to retrieve the data b/w two dates from the database table. I am using combo box to get the date. Problem display diffrence between two dates in a text box i have these set of codes // String di=request.getParameter("timestamp2"); String d2=request.getParameter("timestamp3"); SimpleDateFormat formater= new SimpleDateFormat("dd-mm Date Difference This example learns how to make the difference between two dates and how... Object class and makes a difference between a Date object and a set of integer Difference between forward and sendRedirect What's the difference between forward and sendRedirect? RequestDispatcher.forward() and HttpServletResponse.sendRedirect() are the two methods available for URL redirecting How to calculating time difference bewteen two countries using country name? How to calculate the time difference between two countries. Example if i pass India and America, then the program should return the time difference How can we know the number of days between two given dates using PHP? How can we know the number of days between two given dates using PHP the number of days between the two described dates. In order to get the number of days between two dates, the given example has set two dates by using... of days between two dates: return (int)( (),jtextfield2.getText()); whats the difference between these two and which is more...difference between prepared statement and statement i mean in prepared statement we write insert command as INSERT INTO tablename VALUES javascript date difference in years I want to find the difference between two dates with respect to years. I am using javascript. Please help. &nbsp... year difference by subtracting the two RequestDispatcher object, difference between include( ) and forward( ) method. The RequestDispatcher object has two methods, include( ) and forward( ). What is the difference javascript date difference in months How can I find the difference between two dates in terms of months using javascript? <html&gt... difference in month : "+monthDiff); </script> </head> </html&gt javascript date difference in days. I want to find the difference between two dates in terms of days. I am using javascript. Please help. &nbsp...); document.write("Number of day difference : "+dayDiff); </script> &lt Difference between equals() and == ? In this section you will find the difference between equals() and ==. When we create a object using new keyword.... But one more difference is that equals() is a method and "= =&quot class. Below two program are shown which help you to understand the difference between the class and local variable. Showing Local Variable what is difference between one-way data binding and two-way data binding? what is difference between one-way data binding and two-way data binding? Thanks What is the difference between UNION and UNION ALL in SQL? What is the difference between UNION and UNION ALL in SQL? Hi, UNION is an SQL keyword used to merge the results of two or more tables using a Select difference difference between hashtable and hashtree difference what's the difference between mysql and sql What is the difference between an if statement and a switch statement? Hi, What is the difference between an if statement and a switch statement... from two choice of code to execute based on boolean value (only two possible Difference What Is the difference between JDK & SDK " +curyear + " Thn " ); } Here is an example that accepts two dates from the user and calculates the date difference in terms of days. import...Ask date difference Hello, I have a problem about how to calculate was derived from C++ but still there is big difference between these two...Difference between C++ and Java Basic difference between C... am going to give you few differences between these two...1. Java does not support Difference between jQuery UI widget and plugin JQuery UI widgets... as seems to me .Why their names are different ? What are differences between them ? Please explain clearly. Two main differences are given below Difference between Java IO Class What is the difference in function between Two set of Stream class as mention below- 1)FileInputStream & FileOutPutStream vs. 2)InputStreamReader & OutputStreamWriter What is the difference between EJB 3.0 and JPA What is the difference between EJB 3.0 and JPA. How can they work together. Does EJB 3.0 need... which can be used for creating a mapping between plain java bean objects (POJO difference between SessionState and ViewState What is the difference between SessionState and ViewState Difference between DispatchAction and LookupDispatchAction What is the Difference between DispatchAction and LookupDispatchAction difference between ForwardAction and IncludeAction What is the difference between ForwardAction and IncludeAction
{"url":"http://www.roseindia.net/software-tutorials/detail/42653","timestamp":"2014-04-24T06:41:58Z","content_type":null,"content_length":"59843","record_id":"<urn:uuid:18fe165b-08dc-4ce0-a6b1-307cfe0cca68>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help with a maximum revenue problem March 3rd 2010, 06:50 PM #1 Mar 2010 Palatka, FL Need help with a maximum revenue problem Find the number of units x that produces a maximum revenue R if R = 30x^2/3-2x. IF I'm right, the first derivative is 2/3(30x)^(-1/3)-2, and the second derivative is -2/9(30x)^-4/3. This is where I run into the issues because I'm not good at all at figuring out which of the numbers in the exponent is supposed to go to the denominator and also what root it's supposed to be. I haven't taken any math classes for nearly 8 years and then I jumped right into Survey of Calculus, which is essentially calculus for business, and I'm having a great deal of difficulty. I need assistance with this question as it's on a quiz that's due tomorrow so if any of you could offer some assistance I would greatly appreciate it! What does that even mean?! 1) Stop thinking like this. Really, just stop it. 2) Figure it out. It is likely that you have been told to find where the first derivative is zero. This is an excellent place to start finding minima and maxima. 3) No one should care where you put exponents. If you realy, REALLY hate negative exponents, move them. Perhaps it would be convenient to move negative exponents from the numberator to their corresponding positive exponents in the denominator. On the other hand, it may not be particularly convenient. 4) Did I mention that you should stop beating yourself up? Last edited by TKHunny; March 4th 2010 at 01:42 PM. Reason: Typox2 I'm not trying to beat myself up. I just don't get it, hence the reason for my post. You seem to have responded to #1 and #4. You did not respond WELL. Denial is a hard thing. You did not respond at all to #2 or #3. Let's see what you get. March 3rd 2010, 08:22 PM #2 MHF Contributor Aug 2007 March 4th 2010, 05:12 AM #3 Mar 2010 Palatka, FL March 4th 2010, 01:42 PM #4 MHF Contributor Aug 2007
{"url":"http://mathhelpforum.com/calculus/131945-need-help-maximum-revenue-problem.html","timestamp":"2014-04-17T13:56:41Z","content_type":null,"content_length":"39517","record_id":"<urn:uuid:f8c72983-42b0-4496-86cf-4c4387c8d8ee>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
what is Deligne's cohomological descent (and what are some examples) up vote 4 down vote favorite As far as I understand Deligne's far reaching generalisation of Čech cohomology is called cohomological descent and is used to endow any variety with a (mixed) Hodge structure. Again, AFAIU, the idea is to resolve the variety, then take intersections of the exceptional pieces, resolve those and so on. As you can see I don't understand it well, so can someone please help? Also, what is the most ridiculously easy example one can have? (I guess the thing I'm interested in most is very simple examples illustrating the possible behaviours) The only places I know that discuss this are an SGA, Brian Conrad's notes and Peters-Steenbrink. ag.algebraic-geometry etale-cohomology descent hodge-theory Luc Illusie "La descente galoisienne", Moscow Math. Journal 9-1 (2009), 47-55. math.u-psud.fr/~illusie/Deligne_I3.pdf – Niels Mar 10 '13 at 8:44 thanks for that Niels, it seems a bit terse but I'll have a closer look. – Jacob Bell Mar 10 '13 at 20:40 add comment 1 Answer active oldest votes As you said, it is a generalization of Cech theory. The standard example to understand first is a divisor $D=\bigcup D_i$ with simple normal crossings. A resolution of singularities is obtained by simply taking a disjoint union of components $X_0= \coprod D_i$. Let $\pi_0:X_0\to D$ be the obvious map. The cohomology of $X_0$ with your favourite coefficients will not (usually) be the same as $D$. So you want to correct this by adding in "higher simplicies". Let $X_1$ be the disjoint union $\coprod D_i\cap D_j$. This has a pair of "face" maps to $X_1\to X_0$. We can continue this process to get a (strict) simplicial object $$ \ldots X_1\rightrightarrows X_0\to D$$ with an augmentation to $D$. Given a sheaf $F$ on $D$, we can pull it back to a get a collection of sheaves $F_i$ on $X_i$ with various structure maps. We can define the cohomolgy $H^i(X_\bullet, F_\bullet)$ by taking (for example) by compatible injective resolutions, applying $\Gamma$, and taking cohomology of the total complex of the resulting double complex. Now the point is that the machinery of descent tells you that $$H^i(D, F)\cong H^ up vote i(X_\bullet, F_\bullet)$$ or if you prefer, there is a spectral sequence relating $H^*(X_p, F_p)$ and $H^*(D,F)$. This reduces down to a Mayer-Vietoris type sequence $$\ldots H^i(D,F)\to H^ 11 down i(D_1, F)\oplus H^i(D_2,F)\to H^i(D_1\cap D_2, F)\ldots$$ when $D$ has two components. Why is this good? Because for certain things, e.g.constructing mixed Hodge structures, it's better to vote replace the singular space by a bunch of smooth spaces. As per request, I'm expanding my comment, although this may be a bit too concise. If $X_\bullet\to X$ and $Y_\bullet\to X$ are two simplicial resolutions, take the fibre product to get simplicial scheme $Z_n =\coprod_{i+j=n}X_i\times_X Y_j$ dominating both. However, it will be singular. You can build simplicial resolution $\tilde Z_\bullet$ of this inductively. First resolve singularites of $Z_0$ to get $\tilde Z_0$. For the next step, choose a resolution of $Z_1$ which maps to $\tilde Z_0\times_{X}\tilde Z_0$ etc. thanks for the answer! may I ask what happens if I change the simplicial resolution? (I assume that the end-result of the MHS does not depend on a choice, no?) I always heard that the proper base change theorem plays a key role, but I don't explicitly see why. – Jacob Bell Mar 9 '13 at 22:30 Right, the MHS is independent of the choice. The idea is that any two simplicial resolutions are dominated by a third, so the resulting MHS's are comparable... – Donu Arapura Mar 9 '13 at I see, thanks. If you have time to edit your answer and give an example of two resolutions being dominated by a third (and how base change fits in) that would be great. Anyway, before accepting I'll wait in case someone else wants to chip in. – Jacob Bell Mar 9 '13 at 22:54 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry etale-cohomology descent hodge-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/124103/what-is-delignes-cohomological-descent-and-what-are-some-examples","timestamp":"2014-04-16T07:56:58Z","content_type":null,"content_length":"59739","record_id":"<urn:uuid:0141812e-a147-402a-ae95-002ac1ceeeef>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of vigorish Vigorish, or simply the vig, also known as juice or the take, is the amount charged by a bookmaker, or bookie, for his services. In the United States it also means the interest on a shark's loan. The term is Yiddish slang originating from the Russian word for winnings, vyigrysh. Bookmakers use this concept to make money on their wagers regardless of the outcome. Because of the vigorish concept, bookmakers should not have an interest in either side winning in a given sporting event. They are interested, instead, in getting equal action on each side of the event. In this way, the bookmaker minimizes his risk and always collects a small commission from the vigorish. The bookmaker will normally adjust the odds, or line, to attract equal action on each side of an event. The concept is also sometimes referred to as the overround, although this is technically different, being the percentage the event book is above 100% whereas the vigorish is the bookmaker's percentage profit on the total stakes made on the event. For example, 20% overround is vigorish of %. The connecting formulae are v = and o = where o is overround. It is simplest to assume that vigorish is factored in proportionally to the true odds, although this need not be the case. Under proportional vigorish, a moneyline odds bet listed at −100 vs. −100 without vigorish (fair odds) could become −110 vs. −110 with vigorish factored in. Under disproportional vigorish, it could become −120 vs. +100. Common misconceptions about vigorish are that it is paid by only the "loser", only the "winner", or both in all circumstances. A claim on when and to what extent a gambler pays vigorish fees, however, cannot be abstracted from an individual gambler's behavior. A gambler's behavior with respect to different odds on an event must first be defined and only then can a determination be made on how the vigorish affects him when he wins and loses. A fair odds bet: Two people want to bet on opposing sides of an event with even odds. They are going to make the bet between each other without using the services of a bookmaker. Each person is willing to risk $100 to win $100. After each person pays their $100, there is a total of $200 in the pot. The person who loses receives nothing and the winner receives the full $200. By contrast, when using a sportsbook with the odds set at −110 vs. −110 with vigorish factored in, each person would have to risk or lay $110 to win $100. The $10 is, in effect, a bookmaker's commission for taking the action. This $10 is not in play and cannot be doubled by the winning bettor; it can only be lost. A losing bettor simply loses his $110. A winning bettor wins back his original $110, plus his $100 winnings, for a total of $210. In the above example, the bookmaker has taken a , or scaled commission fee, of $10 ÷ $210 = 4.55%. Since the winning bettor got his full $110 wager back, plus $100 in winnings, many observers will assert that only the losing bettor paid the vigorish. Others would attest that the winner — who had risked $110 and only received $210 in the end, instead of doubling his money to $220 — is the only bettor who paid the vigorish. To discuss how the bettors are affected by the vigorish, we must first define what they would have bet at fair odds (without the presence of vigorish) or else there is no way to compare how much tax is placed on the winner or loser due to the vigorish. There are unlimited possibilities for how the presence of vigorish could affect the amount wagered by a bettor, since a bettor is free to bet in any arbitrary way based on the odds. There are, however, several natural options to consider which give different results on how vigorish affects a bettor. 1. The gambler has a target amount he wants to win, which is independent of the presence or absence of vigorish. As an example, for an even match we would have −100 vs. +100 for fair odds and the gambler wagers 100 to win 100. Under proportional vigorish the odds would become −110 vs. −110 and so gamblers must wager 110 to win 100. In this case, losers lose 110 under the juiced odds compared to 100 under fair odds, so the loser pays 10 extra. The winner gets back his 110 plus 100 profit, compared to getting back his 100 plus 100 profit under fair odds. The winner has no net difference since he is up 100 either way. So the loser pays the full vigorish of 10 under this assumption. 2. The gambler has a given amount he is willing to risk, independent of vigorish. Under fair odds the gambler risks 100 to win 100. Under vigorish, the gambler still risks 100 to win 100 × (100 ÷ 110) = 90.9. Under this behavior, the loser loses 100 in both cases, so pays no vigorish. The winner wins 100 net under fair odds and 90.9 net under vigorish, so he pays 9.1 in vigorish. The winner pays the full vigorish under this assumption. 3. The gambler bets more when he has a greater edge (better payout for a given chance of winning). A Kelly gambler is one such gambler, who seeks to maximize his rate of bankroll growth in the limit of infinite bets placed over time. This type of gambler will bet more when the payout reflects a bigger advantage for him. The fact that he bets at all indicates that he thinks he has an advantage in the bet, so the presence of vigorish reduces this edge by reducing the payout for a given amount wagered. Therefore, these gamblers on either side of the wager will both bet less than they would have at fair odds (assuming proportional vigorish). The losers therefore lose less than they would have under fair odds, so counter-intuitively these losers do better with vigorish. The winners not only receives a lower payout factor on his bet, but they also risked less than they would have at fair odds, so they pay the full rake of the bookmaker, plus the amount saved by the losers, since (amount cost by winners) − (amount saved by the losers) = (full vigorish raked by the bookmaker). So for these gamblers, the losers pay negative vigorish, while the winners pay more than the full vigorish raked in by the bookie. These are three examples of possible gambler behaviors that all give different answers to the distribution of vigorish fees amongst winners and losers. One therefore cannot say precisely whether winners or losers or both are paying the vigorish until the gamblers' behaviors with respect to the fair odds and juiced odds is defined. Vigorish percentage Vigorish percentage can be defined in a way independent of the outcome of the event and of bettors' behaviors by defining it as the percentage raked in a risk-free wager. This definition is the rake of the bookie as a percentage of total bets received if the bookie has balanced the wagers so that he makes equal profit regardless of the outcome of the event. For a two outcome event, the vigorish percentage, v is $v = 100*\left(1 - \left\{p*q over p + q\right\}\right)$ where the p and q are the decimal payouts for each outcome. This should not be confused with the percentage a bettor pays due to vigorish. No consistent definition of the percentage a bettor pays due to vigorish can be made without first defining the bettor's behavior under juiced odds and assuming a win-percentage for the bettor. These factors are discussed under the debate section. For example, −110 side pricing of an even match is 4.55% vigorish, and −105 side pricing is 2.38% vigorish. Other kinds of vigorish • In table poker, the vigorish, more commonly called the rake, is a fraction of each bet placed into the pot. The dealer removes the rake from the pot after each bet (or betting round), making change if necessary. The winner of the hand gets the money that remains in the pot after the rake has been removed. Most casinos take 5-10% of the pot, capping the total rake at $3 or $4. • In the house-banked version of baccarat (also mini-baccarat) commonly played in North American casinos, vigorish refers to the 5% commission (called the cagnotte) charged to players who win a bet on the banker hand. The rules of the game are structured so that the banker hand wins slightly more often than the player hand; the 5% vigorish restores the house advantage to the casino for both bets. In most casinos, a winning banker bet is paid at even money, with a running count of the commission owed kept by special markers in a commission box in front of the dealer. This commission must be paid when all the cards are dealt from the shoe or when the player leaves the game. Some casinos don't keep a running commission amount, and instead withdraw the commission directly from the winnings; a few require the commission to be posted along with the bet, in a separate space on the table. • In pai gow poker, a 5% commission charged on all winning bets is referred to as vigorish. Unlike baccarat, the commission is paid after each winning bet, either by the player handing in the amount from his stack of chips, or by having the vig deducted from the winnings. Pai gow poker is an even game, without any built-in advantage for the house; the commission restores the • In craps, vigorish refers to the 5% commission charged on a buy bet, where a player wishes to bet that one of the numbers — 4, 5, 6, 8, 9 or 10 — will be rolled before a 7 is rolled. The commission is charged at the rate of $1 for every $20 bet. The bet is paid off at the true mathematical odds, but the 5% commission is paid as well, restoring the house advantage. For many years, this commission was paid whether the bet won or not. In recent years, many casinos have changed to charging the commission only when the bet wins, which greatly reduces the house advantage; for instance, the house advantage on a buy bet on the 4 or 10 is reduced from 5% to 1.67%, since the bet wins one-third of the time (2:1 odds against). In this case, the vig may be deducted from the winnings (for instance, a $20 bet on the 4 would be paid $39 — $40 at 2:1 odds, less the $1 commission), or the player may simply hand the commission in and receive the full payout. This rule is commonplace in Mississippi casinos, and becoming more widely available in Nevada. • Vig may generically refer to the built-in house advantage on most bets on any game in a casino. • Vig is sometimes used by investment bankers to describe profits from advisory and other activities. • In backgammon, the recube vig is the value of having possession of the doubling cube to the player being offered a double. • The payouts and winning combinations available on most slot machines and in other electronic gambling systems are often designed such that an average of between 0.1% to 10% (varying by machine and facility) of funds taken in are not used to pay out winnings, and thus becomes the house's share. Machines or facilities with a particularly low percentage are often said to be loose. See also
{"url":"http://www.reference.com/browse/vigorish","timestamp":"2014-04-19T06:36:09Z","content_type":null,"content_length":"90076","record_id":"<urn:uuid:80ba5bb6-be22-4195-9f39-d50b5f18f7da>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the work done by nonconservative forces in stopping the plane. 1. The problem statement, all variables and given/known data Find the work done by nonconservative forces in stopping the plane. A 18000 {\rm kg} airplane lands with a speed of 92 {\rm m/s} on a stationary aircraft carrier deck that is 115 {\rm m} long. 2. Relevant equations W = FD 3. The attempt at a solution 18000 * 92 * 115 = 190440000 J Unless it's wanting 1.9 * 10^8 = 190000000 J (which seems wrong since detail is lost) (I don't think that's 2 significant digits.) The best way to solve this type of problem is with energy considerations. What was the KE of the plane just before it traps on the carrier deck? What is the KE right after it is stopped by the wire? Wher did that energy go?
{"url":"http://www.physicsforums.com/showthread.php?p=2904955","timestamp":"2014-04-21T12:08:19Z","content_type":null,"content_length":"33211","record_id":"<urn:uuid:993ca00f-47e4-4087-bacb-a12209c19b89>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - 2d damped wave equation 2d damped wave equation Hi to all! I need to solve following equation: \frac{\partial^2 u}{\partial t^2} + 2 \beta \frac{\partial u}{\partial t} -c^2\nabla^2u=0 It describes a damped wave on a x-y plane. [tex]2\beta[/tex] is damping factor and c is wave speed. I haven't had any luck finding a PDE class that looks like this. Closest match is Helmholtz equation but it doesn't have [tex]\frac{\partial}{\partial t}[/tex] element. Tried to solve it using Mathematica but didn't have any luck (but that is maybe because of the fact that I don't really know how to use Mathematica). Any hints on how to proceed would be appreciated either on manual solving or by using Mathematica (or Matlab, for that matter). jrosen13 Apr15-10 08:04 PM Re: 2d damped wave equation Seperation of variables to turn it into ordinary differential equations. It looks like __ equation for spatial part, and __ for time part, but I wont fill in the blanks, thats cheating :) All times are GMT -5. The time now is 01:33 PM. Powered by vBulletin Copyright ©2000 - 2014, Jelsoft Enterprises Ltd. © 2014 Physics Forums
{"url":"http://www.physicsforums.com/printthread.php?t=395618","timestamp":"2014-04-17T18:33:14Z","content_type":null,"content_length":"4903","record_id":"<urn:uuid:9e51330a-7556-4008-8a61-cf96563f21c4>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
"Garden sprinkler effect" in wave model 6.1 "Garden sprinkler effect" in wave model Stéphane Popinet sh garden.sh Required files garden.gfs (view) (download) garden.sh end.gfv mesh.gfv Running time 41 minutes The wave model is used to reproduce the classical "Garden Sprinkler Effect" (GSE), a numerical artifact of the discrete directions of wave propagation (see Tolman, 2002). A spatially-Gaussian wave spectrum is initialised in a 5000 km-squared domain. The other parameters are those of Tolman, 2002. The final (t = 5 days) significant wave height for different model runs is illustrated in Figure 48. The interval between the isolines is 0.1 metres as in Figure 1 of Tolman, 2002. For a small number of discrete directions (24), the GSE is evident and the results closely match those of Tolman both for the constant resolution and the adaptive version of the code. For larger number of directions (60 and 120), the results do not show any obvious GSE and match the corresponding results of Tolman (Figure 1.b of Tolman, 2002 but note that the spatial resolution of Tolman is finer, 25 km rather than 78 km here). The evolution in time of the significant wave height together with the corresponding adaptive discretisation is illustrated in Figure 49 for 120 directions. The mesh is adapted according to the spatial gradient in the significant wave height. This results in substantial savings in computational cost as illustrated by the timings given in Table 1. The computational cost with 120 directions is comparable to the cost with 24 directions on a regular (i.e. non-adaptive) mesh. This demonstrates that the GSE can be alleviated – at comparable computational cost – by combining adaptive refinement with a refined discretisation in direction space. │Adaptivity│# directions │CPU time (seconds) │ │ No │ 24 │ 582 │ │ Yes │ 24 │ 179 │ │ Yes │ 60 │ 312 │ │ Yes │ 120 │ 699 │
{"url":"http://gfs.sourceforge.net/examples/examples/garden.html","timestamp":"2014-04-17T07:11:16Z","content_type":null,"content_length":"5482","record_id":"<urn:uuid:bca8422e-da37-46b6-93e7-71c93a46fb83>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Lyapunov Fractals The algorithm for computing the fractal is summarized as follows: 1. Choose a string of A’s and B’s of any nontrivial length (e.g., AABAB). 2. Construct the sequence formed by successive terms in the string, repeated as many times as necessary. 3. Choose a point . 4. Define the function if and if . 5. Let , and compute the iterates . 6. Compute the Lyapunov exponent: . In practice, is approximated by choosing a suitably large (in the Manipulate code, the variable “iterations” corresponds to ). 7. Color the point according to the value of obtained. 8. Repeat steps 3–7 for each point in the image plane.
{"url":"http://demonstrations.wolfram.com/LyapunovFractals/","timestamp":"2014-04-17T06:45:09Z","content_type":null,"content_length":"45728","record_id":"<urn:uuid:25ab939a-99d8-42d7-a1ca-c2875860eb0b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Semantical characterizations and complexity of equivalences in answer set programming Results 1 - 10 of 25 - THEORETICAL COMPUTER SCIENCE , 1994 "... In this paper we introduce revision programming -- a logic-based framework for describing constraints on databases and providing a computational mechanism to enforce them. Revision programming captures those constraints that can be stated in terms of the membership (presence or absence) of items (re ..." Cited by 36 (1 self) Add to MetaCart In this paper we introduce revision programming -- a logic-based framework for describing constraints on databases and providing a computational mechanism to enforce them. Revision programming captures those constraints that can be stated in terms of the membership (presence or absence) of items (records) in a database. Each such constraint is represented by a revision rule ff / ff 1 ; : : : ; ff k , where ff and all ff i are of the form in(a) and out(b). Collections of revision rules form revision programs. Similarly as logic programs, revision programs admit both declarative and imperative (procedural) interpretations. In our paper, we introduce a semantics that reflects both interpretations. Given a revision program, this semantics assigns to any database B a collection (possibly empty) of P-justified revisions of B. The paper contains a thorough study of revision programming. We exhibit several fundamental properties of revision programming. We study the relationship of revision programming to logic programming. We investigate complexity of reasoning with revision programs as well as algorithms to compute P -justified revisions. Most importantly from the practical database perspective, we identify two classes of revision programs, safe and stratified, with a desirable property that they determine for each initial database a unique revision. , 2008 "... The notion of forgetting, also known as variable elimination, has been investigated extensively in the context of classical logic, but less so in (nonmonotonic) logic programming and nonmonotonic reasoning. The few approaches that exist are based on syntactic modifications of a program at hand. In t ..." Cited by 17 (5 self) Add to MetaCart The notion of forgetting, also known as variable elimination, has been investigated extensively in the context of classical logic, but less so in (nonmonotonic) logic programming and nonmonotonic reasoning. The few approaches that exist are based on syntactic modifications of a program at hand. In this paper, we establish a declarative theory of forgetting for disjunctive logic programs under answer set semantics that is fully based on semantic grounds. The suitability of this theory is justified by a number of desirable properties. In particular, one of our results shows that our notion of forgetting can be entirely captured by classical forgetting. We present several algorithms for computing a representation of the result of forgetting, and provide a characterization of the computational complexity of reasoning from a logic program under forgetting. As applications of our approach, we present a fairly general framework for resolving conflicts in inconsistent knowledge bases that are represented by disjunctive logic programs, and we show how the semantics of inheritance logic programs and update logic programs from the literature can be characterized through forgetting. The basic idea of the conflict resolution framework is to weaken the preferences of each agent by forgetting certain knowledge that causes inconsistency. In particular, we show how to use the notion of forgetting to provide an elegant solution for preference elicitation in disjunctive logic programming. - ARTIFICIAL INTELLIGENCE , 2010 "... We develop a formal framework for comparing different versions of DL-Lite ontologies. The main feature of our approach is that we take into account the vocabulary ( = signature) with respect to which one wants to compare ontologies. Five variants of difference and inseparability relations between on ..." Cited by 13 (6 self) Add to MetaCart We develop a formal framework for comparing different versions of DL-Lite ontologies. The main feature of our approach is that we take into account the vocabulary ( = signature) with respect to which one wants to compare ontologies. Five variants of difference and inseparability relations between ontologies are introduced and their respective applications for ontology development and maintenance discussed. These variants are obtained by generalising the notion of conservative extension from mathematical logic and by distinguishing between differences that can be observed among concept inclusions, answers to queries over ABoxes, by taking into account additional context ontologies, and by considering a model-theoretic, language-independent notion of difference. We compare these variants, study their meta-properties, determine the computational complexity of the corresponding reasoning tasks, and present decision algorithms. Moreover, we show that checking inseparability can be automated by means of encoding into QBF satisfiability and using off-the-shelf general purpose QBF solvers. Inseparability relations between ontologies are then used to develop a formal framework for (minimal) module extraction. We demonstrate that different types of minimal modules induced by these inseparability relations can be automatically extracted from real-world medium-size DL-Lite ontologies by composing the tractable syntactic locality-based module extraction algorithm with non-tractable extraction algorithms using the multi-engine QBF solver aqme. Finally, we explore the relationship between uniform interpolation (or forgetting) and inseparability between ontologies. - ICLP 2005. LNCS , 2005 "... Abstract. In recent work, a general framework for specifying program correspondences under the answer-set semantics has been defined. The framework allows to define different notions of equivalence, including the well-known notions of strong and uniform equivalence, as well as refined equivalence no ..." Cited by 12 (9 self) Add to MetaCart Abstract. In recent work, a general framework for specifying program correspondences under the answer-set semantics has been defined. The framework allows to define different notions of equivalence, including the well-known notions of strong and uniform equivalence, as well as refined equivalence notions based on the projection of answer sets, where not all parts of an answer set are of relevance (like, e.g., removal of auxiliary letters). In the general case, deciding the correspondence of two programs lies on the fourth level of the polynomial hierarchy and therefore this task can (presumably) not be efficiently reduced to answerset programming. In this paper, we describe an approach to compute program correspondences in this general framework by means of linear-time constructible reductions to quantified propositional logic. We can thus use extant solvers for the latter language as back-end inference engines for computing program correspondence problems. We also describe how our translations provide a method to construct counterexamples in case a program correspondence does not hold. 1 "... Summary. Modularity of ontologies is currently an active research field, and many different notions of a module have been proposed. In this paper, we review the fundamental principles of modularity and identify formal properties that a robust notion of modularity should satisfy. We explore these pro ..." Cited by 11 (4 self) Add to MetaCart Summary. Modularity of ontologies is currently an active research field, and many different notions of a module have been proposed. In this paper, we review the fundamental principles of modularity and identify formal properties that a robust notion of modularity should satisfy. We explore these properties in detail in the contexts of description logic and classical predicate logic and put them into the perspective of well-known concepts from logic and modular software specification such as interpolation, forgetting and uniform interpolation. We also discuss reasoning problems related to modularity. 1 - Principles of Knowledge Representation and Reasoning, Proceedings of the Tenth International Conference (KR2006 , 2006 "... We show that the concepts of strong and uniform equivalence of logic programs can be generalized to an abstract algebraic setting of operators on complete lattices. Our results imply characterizations of strong and uniform equivalence for several nonmonotonic logics including logic programming with ..." Cited by 8 (4 self) Add to MetaCart We show that the concepts of strong and uniform equivalence of logic programs can be generalized to an abstract algebraic setting of operators on complete lattices. Our results imply characterizations of strong and uniform equivalence for several nonmonotonic logics including logic programming with aggregates, default logic and a version of autoepistemic logic. 1 - PROCEEDINGS OF AAAI 2008 , 2008 "... Recent research in nonmonotonic logic programming has focused on certain types of program equivalence, which we refer to here as hyperequivalence, that are relevant for program optimization and modular programming. So far, most results concern hyperequivalence relative to the stable-model semantics. ..." Cited by 7 (6 self) Add to MetaCart Recent research in nonmonotonic logic programming has focused on certain types of program equivalence, which we refer to here as hyperequivalence, that are relevant for program optimization and modular programming. So far, most results concern hyperequivalence relative to the stable-model semantics. However, other semantics for logic programs are also of interest, especially the semantics of supported models which, when properly generalized, is closely related to the autoepistemic logic of Moore. In this paper, we consider a family of hyperequivalence relations for programs based on the semantics of supported and supported minimal models. We characterize these relations in model-theoretic terms. We use the characterizations to derive complexity results concerning testing whether two programs are hyperequivalent relative to supported and supported minimal models. , 2007 "... This note provides background information and references to the tutorial on recent research developments in logic programming inspired by need of knowledge representation. ..." Cited by 7 (0 self) Add to MetaCart This note provides background information and references to the tutorial on recent research developments in logic programming inspired by need of knowledge representation. - In Proc. LPNMR’05, number 3662 in LNCS , 2005 "... Abstract. The rapid expansion of the Internet and World Wide Web led to growing interest in data and information integration, which should be capable to deal with inconsistent and incomplete data. Answer Set solvers have been considered as a tool for data integration systems by different authors. We ..." Cited by 4 (0 self) Add to MetaCart Abstract. The rapid expansion of the Internet and World Wide Web led to growing interest in data and information integration, which should be capable to deal with inconsistent and incomplete data. Answer Set solvers have been considered as a tool for data integration systems by different authors. We discuss why data integration can be an interesting model application of Answer Set programming, reviewing valuable features of non-monotonic logic programs in this respect, and emphasizing the role of the application for driving research. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1103505","timestamp":"2014-04-21T13:03:59Z","content_type":null,"content_length":"38483","record_id":"<urn:uuid:28d1993e-e415-4944-8d59-1c976f750169>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Simple random number generator? Replies: 6 Last Post: Dec 23, 2012 10:25 AM Messages: [ Previous | Next ] JohnF Re: Simple random number generator? Posted: Nov 28, 2012 3:46 AM Posts: 145 Registered: 5/27/08 Clark Smith <noaddress@nowhere.net> wrote: >> Would be the digits of e, pi, et al? >> If that's the case, no need for fancy pyooter algorithms? >> Inneresting article on pi, randomness, chaos. >> http://www.lbl.gov/Science-Articles/Archive/pi-random.html > Is it not the case that the digits of e, pi et al. can't strictly > be random, if it is only because they are highly compressible? I.e. > because there small, compact formulas that spit out as many digits as you > want in a completely deterministic way? That's exactly the viewpoint of Kolmogorov complexity theory (also called algorithmic complexity), already highly formalized, very easily google-able, primarily developed by (Kolmogorov and) G.J.Chaitin. Sounds like you read about it at some point, and subsequently forgot the source. John Forkosh ( mailto: j@f.com where j=john and f=forkosh ) Date Subject Author 11/26/12 Simple random number generator? Existential Angst 11/26/12 Re: Simple random number generator? Clark Smith 11/27/12 Re: Simple random number generator? Existential Angst 11/27/12 Re: Simple random number generator? Existential Angst 11/28/12 Re: Simple random number generator? JohnF 11/28/12 Re: Simple random number generator? Frederick Williams 12/23/12 Re: Simple random number generator? DBatchelo1
{"url":"http://mathforum.org/kb/message.jspa?messageID=7929221","timestamp":"2014-04-18T05:47:48Z","content_type":null,"content_length":"24088","record_id":"<urn:uuid:0ffbc0cc-31fb-43af-8103-19461ed2e319>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Students learn most effectively if they are able to apply inquiry and problem-solving skills to problems that emphasize practical applications. Many experts stress connecting science to other disciplines, such as mathematics, and modeling word problems to real-world situations. Making a connection between mathematics and chemistry (determining the optimal angle between the atoms of covalent bonds) should help answer the trigonometry student's question, "Why do we need to learn these identities and when will we ever use them?" Several trigonometric identities are necessary when doing the proof of the optimal angle for a molecule with four identical atoms bonded to a central atom that has a complete valence shell. This lesson could be taught collaboratively by a chemistry and a mathematics teacher. Manipulatives should be used to aid students' understanding of the lesson. Atoms form covalent bonds with other atoms to create molecules. A covalent bond is formed when two atoms share a pair of electrons. The number of covalent bonds that an atom can form depends on the number of available electrons found in its outermost (valence) shell. In a single covalent bond, the sharing of a pair of electrons forms the bond that holds two atoms together. However, when considering a polyatomic molecule (a molecule in which there are two or more atoms bonded to a central atom) it is important to realize that there are interactions that occur between the covalent bonds that determine the three-dimensional shape of the molecule. What are these interactions that occur between covalent bonds? An electron is by definition a negatively charged atomic particle. In a polyatomic molecule, there are two or more covalent bonds. Because each bond is composed of negatively charged electrons. the negative charges found on the electrons that compose the bonds repel each other. Ultimately, the molecule will be arranged in three dimensions such that the repulsion between the electron pairs of different bonds is at a minimum. The repulsive forces between the electron pairs of different covalent bonds causes the bonds to remain as far apart as possible. The valence-shell electron-pair repulsion (VSEPR) model is used by scientists to account for the geometric arrangements of covalent bonds around a central atom that minimize the repulsion between the electron pairs of the covalent bonds. The simplest molecular shape that can be explained by the VSEPR model is that of a molecule in which two atoms are bonded covalently to a central atom to complete its valence shell. Carbon dioxide (with the molecular formula CO[2]) is an example of a molecule in which two atoms are bonded covalently to the central atom (C), leaving no nonbonding pairs of electrons. A Lewis structure is a two-dimensional representation of a molecule's structure. The Lewis structure for CO[2] appears in Figure 1. The electron pairs that create the covalent bonds between the carbon atom and the oxygen atoms repel each other. In order to minimize the repulsion between the covalent bonds, the bonds must be separated from each other by 180°. In this case, the Lewis structure accurately describes both the two-dimensional and three-dimensional shape of the molecule. A polyatomic molecule that is composed of two atoms covalently bonded to a central atom (leaving no non-bonding pairs of electrons) takes on a linear conformation and a characteristic bond angle of 180°. Using the VSEPR model, students can examine the geometry of a molecule that is composed of three identical atoms covalently bonded to a central atom, leaving no nonbonding electrons. Boron trifluoride (BF[3] ) is a molecule that fits this description. The Lewis structure of BF[3] that appears in Figure 1 accounts for molecular shape in only two dimensions. In reality, the molecule exists in three dimensions. The two-dimensional molecular model in this figure suggests that the optimal bond angle for BF[3] is 120°, and that all four atoms of the molecule are in the same plane. Is there a three-dimensional conformation that would result in a greater bond angle and thus a greater distance between the bonds? Intuitively, the answer is no. If boron were moved out of the plane, the angle in question (FBF) would become smaller, less than 120°. When this angle is reduced, the distance between the two fluorine atoms of the angle is reduced as well. If the two fluorine atoms move closer to each other, the electrons that form the bonds are also brought closer together. If these electrons are brought closer together, they will experience more repulsion. The three-dimensional conformation that BF[3] must take on in order to minimize the repulsion between the covalent bonds is a trigonal planar conformation with an optimal bond angle of 120°. A more interesting problem of molecular geometry is encountered when dealing with a molecule comprised of four atoms covalently bonded to a central atom leaving no nonbonding electron pairs. A common example of such a molecule is methane (CH[4]). The Lewis structure for CH[4] also appears in Figure 1. The Lewis structure suggests that the optimal bond angle for methane is 90°. Does a three-dimensional conformation exist for methane that would allow bond angles greater than 90°? If such a conformation exists, the hydrogen atoms would be farther apart from each other. How does one go about finding the optimal bond angle that places these four hydrogen atoms at points in space that are the greatest distance from each other? Students should be encouraged to experiment with manipulatives, such as gum drops and toothpicks or straws and marshmallows, to build the three-dimensional models of carbon dioxide, boron trifluoride, and methane (Figure 2). The three-dimensional model of methane is a tetrahedron, with the carbon atom at the center of the tetrahedron and the four hydrogen atoms at the vertices (Figure 3). Students should be encouraged to construct cardboard or paper triangles for Figures 4, 5, and 6 and to use a protractor to measure the bond angles of each of their models. This will help students to visualize the methane model and understand the following discussion. To determine the optimal bond angle, draw a perpendicular line from the carbon atom (C) to the plane containing three of the hydrogen atoms. Let Q represent the foot of this perpendicular line (Figures 3 and 4), and let y represent the distance between the carbon atom and any of the hydrogen atoms. Let a represent the distance from Q to one of the hydrogen atoms, and let x represent the measure of the required bond angle. In Figure 3, note that Q is the circumcenter of the equilateral triangle formed by the three hydrogen atoms that lie in the bottom plane. Because the triangle HHH is equilateral, each of the angles HQH measures 120°. In Figure 4, the measure of angle HCQ = (180-x)° and the measure of angle CHQ = (x-90)°. It follows that cos(-A) = cos(A) and cos(90-A)° = sin(A), it follows that a = y cos(90 -x)° and that a = y sin(x). Now, examine the triangle formed by two hydrogen atoms and point Q (Figure 5). The altitude from point Q divides the triangle HQH into two congruent triangles HQT and HQT (hypotenuse - leg theorem). So, the vertex angle HQH is divided into two angles whose measures are each 60°. Figure 6 represents the triangle formed by the carbon atom and two hydrogen atoms. Using the definition of sine, it can be shown that Squaring both sides, one obtains the following result: Using the identity ^2(x) - 2cos(x) - 1 = 0 from which cos(x) = -.33 or cos(x) = 1. Finally, the value of x is 109.5°, which is the measure of the required bond angle. It is important for students to make connections between mathematics and other disciplines. Knowledge of mathematics means much more than just memorizing information or facts; it requires the ability to use information to reason, think, and solve problems. By themselves, trigonometric identities are just facts, but applying them to a real-world problem will give students a deeper appreciation of those identities and of mathematics. This manipulation of several trigonometric identities allows students to discover for themselves that the optimal bond angle for methane is 109.5°, not 90° as suggested by the 2-dimensional representation. Hopefully, students will begin to value and use the connections between mathematics and other disciplines. This article was published in the February 1998 issue of The Science Teacher. At the time, Dr. Pleacher was a medical student at the Medical College of Virginia, Richmond, Virginia.
{"url":"http://www.pleacher.com/mp/mlessons/trig/bonds.html","timestamp":"2014-04-19T04:20:10Z","content_type":null,"content_length":"11682","record_id":"<urn:uuid:f337a530-218f-40ef-97cd-5881d9fe376e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
A Monadic Multi-stage Metalanguage Results 1 - 10 of 12 , 2003 "... We define a calculus for investigating the interactions between mixin modules and computational effects, by combining the purely functional mixin calculus CMS with a monadic metalanguage supporting the two separate notions of simplification (local rewrite rules) and computation (global evaluation ab ..." Cited by 18 (6 self) Add to MetaCart We define a calculus for investigating the interactions between mixin modules and computational effects, by combining the purely functional mixin calculus CMS with a monadic metalanguage supporting the two separate notions of simplification (local rewrite rules) and computation (global evaluation able to modify the store). This distinction is important for smoothly integrating the CMS rules (which are all local) with the rules dealing with the imperative features. In our calculus mixins... - In Proceeding of the 2003 Workshop on Fixed Points in Computer Science (FICS , 2003 "... This paper proposes an operational semantics for value recursion in the context of monadic metalanguages. Our technique for combining value recursion with computational effects works uniformly for all monads. The operational nature of our approach is related to the implementation of recursion in Sch ..." Cited by 18 (7 self) Add to MetaCart This paper proposes an operational semantics for value recursion in the context of monadic metalanguages. Our technique for combining value recursion with computational effects works uniformly for all monads. The operational nature of our approach is related to the implementation of recursion in Scheme and its monadic version proposed by Friedman and Sabry, but it defines a different semantics and does not rely on assignments. When contrasted to the axiomatic approach proposed by Erkök and Launchbury, our semantics for the continuation monad invalidates one of the axioms, adding to the evidence that this axiom is problematic in the presence of continuations. 1 - GENERATIVE PROGRAMMING AND COMPONENT ENGINEERING (GPCE), LECTURE NOTES IN COMPUTER SCIENCE , 2003 "... Recent work proposed defining type-safe macros via interpretation into a multi-stage language. The utility of this approach was illustrated with a language called MacroML, in which all type checking is carried out before macro expansion. Building on this work, the goal of this paper is to develo ..." Cited by 7 (3 self) Add to MetaCart Recent work proposed defining type-safe macros via interpretation into a multi-stage language. The utility of this approach was illustrated with a language called MacroML, in which all type checking is carried out before macro expansion. Building on this work, the goal of this paper is to develop a macro language that makes it easy for programmers to reason about terms locally. We show that defining the semantics of macros in this manner helps in developing and verifying not only type systems for macro languages but also equational reasoning principles. Because the MacroML calculus is sensetive to renaming of (what appear locally to be) bound variables, we present a calculus of staged notational definitions (SND) that eliminates the renaming problem but retains MacroML's phase distinction. Additionally, SND incorporates the generality of Griffin's account of notational definitions. We exhibit a formal equational theory for SND and prove its soundness. - In Karsai and Visser [KV04 "... We define a basic calculus for name management, which is obtained by an appropriate combination of three ingredients: extensible records (in a simplified form), names (as in FreshML), computational types (to allow computational e#ects, including generation of fresh names). The calculus supports the ..." Cited by 3 (1 self) Add to MetaCart We define a basic calculus for name management, which is obtained by an appropriate combination of three ingredients: extensible records (in a simplified form), names (as in FreshML), computational types (to allow computational e#ects, including generation of fresh names). The calculus supports the use of symbolic names for programming in-the-large, e.g. it subsumes Ancona and Zucca's calculus for module systems, and for meta-programming (but not the intensional analysis of object level terms supported by FreshML), e.g. it subsumes (and improves) Nanevski and Pfenning's calculus for meta-programming with names and necessity. Moreover, it models some aspects of Java's class loaders. 1 , 2006 "... Building program generators that do not duplicate generated code can be challenging. At the same time, code duplication can easily increase both generation time and runtime of generated programs by an exponential factor. We identify an instance of this problem that can arise when memoized functions ..." Cited by 3 (1 self) Add to MetaCart Building program generators that do not duplicate generated code can be challenging. At the same time, code duplication can easily increase both generation time and runtime of generated programs by an exponential factor. We identify an instance of this problem that can arise when memoized functions are staged. Without addressing this problem, it would be impossible to effectively stage dynamic programming algorithms. Intuitively, direct staging undoes the effect of memoization. To solve this problem once and for all, and for any function that uses memoization, we propose a staged monadic combinator library. Experimental results confirm that the library works as expected. Preliminary results also indicate that the library is useful even when memoization is not used. , 2003 "... We define a basic calculus ML for manipulating symbolic names inspired by #- calculi with extensible records. The resulting calculus supports the use of symbolic names for meta-programming and programming in-the-large, it subsumes Ancona and Zucca's CMS, and partly Nanevski and Pfenning's # , ..." Cited by 1 (1 self) Add to MetaCart We define a basic calculus ML for manipulating symbolic names inspired by #- calculi with extensible records. The resulting calculus supports the use of symbolic names for meta-programming and programming in-the-large, it subsumes Ancona and Zucca's CMS, and partly Nanevski and Pfenning's # , and seems able to model some aspects of the mechanism of Java class loaders. We present two di# erent extensions of the basic calculus, the first consider the interaction with computational e#ects (in the form of imperative computations), the second shows how CMS can be naturally encoded into ML . 1 , 2004 "... We introduce a monadic metalanguage which combines two previously proposed monadic metalanguages: one for staging and the other for value recursion. The metalanguage includes also extensible records as a basic name management facility. 1 ..." Add to MetaCart We introduce a monadic metalanguage which combines two previously proposed monadic metalanguages: one for staging and the other for value recursion. The metalanguage includes also extensible records as a basic name management facility. 1 "... The paper describes a language consisting of two layers, terms and computation rules, whose operational semantics is given in terms of two relations: simplification and computation. Simplification is induced by confluent rewriting on terms. Computation is induced by chemical reactions, like those in ..." Add to MetaCart The paper describes a language consisting of two layers, terms and computation rules, whose operational semantics is given in terms of two relations: simplification and computation. Simplification is induced by confluent rewriting on terms. Computation is induced by chemical reactions, like those in the Join-calculus. The language can serve as metalanguage for defining the operational semantics of other languages. This is demonstrated by defining encodings of several calculi (representing idealized programming languages). Keywords: Operational Semantics, Confluent Rewriting, Multiset "... Staging is a powerful language construct that allows a program at one stage to manipulate and specialize a program at the next. We propose 〈ML 〉 as a new staged calculus designed with novel features for staged programming in modern computing platforms such as embedded systems. A distinguishing featu ..." Add to MetaCart Staging is a powerful language construct that allows a program at one stage to manipulate and specialize a program at the next. We propose 〈ML 〉 as a new staged calculus designed with novel features for staged programming in modern computing platforms such as embedded systems. A distinguishing feature of 〈ML 〉 is a model of process separation, whereby different stages of computation are executed in different process spaces. Our language also supports dynamic type specialization via type abstraction, dynamic type construction, and a limited form of type dependence. 〈ML 〉 is endowed with a largely standard metatheory, including type preservation and type safety results. We discuss the utility of our language via code examples from the domain of wireless sensor network "... Abstract. We define a calculus for investigating the interactions between mixin modules and computational effects, by combining the purely functional mixin calculus CMS with a monadic metalanguage supporting the two separate notions of simplification (local rewrite rules) and computation (global eva ..." Add to MetaCart Abstract. We define a calculus for investigating the interactions between mixin modules and computational effects, by combining the purely functional mixin calculus CMS with a monadic metalanguage supporting the two separate notions of simplification (local rewrite rules) and computation (global evaluation able to modify the store). This distinction is important for smoothly integrating the CMS rules (which are all local) with the rules dealing with the imperative features. In our calculus mixins can contain mutually recursive computational components which are explicitly computed by means of a new mixin operator whose semantics is defined in terms of a Haskell-like recursive monadic binding. Since we mainly focus on the operational aspects, we adopt a simple type system like that for Haskell, that does not detect dynamic errors related to bad recursive declarations involving effects. The calculus serves as a formal basis for defining the semantics of imperative programming languages supporting first class mixins while preserving the CMS equational reasoning. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=186502","timestamp":"2014-04-19T00:26:47Z","content_type":null,"content_length":"35723","record_id":"<urn:uuid:9d3ea327-e5c3-4a11-bf1b-37146cb0ce53>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole. Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages. Do not use for reproduction, copying, pasting, or reading; exclusively for search engines. OCR for page 367 Assessment of the Scientific Information for the Radiation Exposure Screening and Education Program Appendix D The Optimal Criterion for Positivity in Screening Consider a population of individuals composed of two subpopulations: with disease, D, and with no disease, ND. Assume that the prevalence of disease is pdis and that in a representative subset of the population we know, by some gold standard, which individuals have disease and which have no disease. Consider that a diagnostic test T is applied to the representative subset and yields two distributions of results: one among patients with disease and the other among patients with no disease. Denote the two probability density distributions as Distdis and Distnodis, respectively. Distdis(x) denotes the probability of test result x in patients with disease; Distnodis(x) denotes the probability of test result x in patients without disease. The distribution of test results in the population as a whole, Distpop, is the weighted average of Distdis and Distnodis, with weights pdis and 1 − pdis, respectively. The distributions can be seen in Figure 9.2. The task in establishing a cutoff criterion (threshold T) for the test—that is, deciding how we classify patients—is in some sense to minimize the burden of misclassification. Among patients with disease, the probability of a positive result (i.e., a result that is > T) is the sensitivity or true-positive rate (TPR) and its complement is the false-negative rate (FNR). Among patients with no disease, the probability of a positive result (i.e., a result that is > T) is the false-positive rate (FPR), and its complement is the specificity. Define the burden of a false positive as Bfp and the burden of a false negative as Bfn. We achieve the goal of minimizing the overall burden by minimizing the expression OCR for page 367 Assessment of the Scientific Information for the Radiation Exposure Screening and Education Program or Changing T will change both FNR and FPR in a fashion determined by the shape and the overlap of Distdis and Distnodis. To minimize the overall burden of false-positive and false-negative results combined (with respect to changing T), one can differentiate the expression with respect to T and set the result to zero. That yields Rearranging, we get: or This can be shown to equal: Another way of thinking about the cutoff criterion is to understand that it is the point t at which Distdis(t) × (p) × Bfn = Distnodis(t) × (1 − p) × Bfp. That is an equivalent formulation of the same equation because dTPR/dT is simply probability density distribution Distdis, and dFPR/dT is simply probability density distribution Distnodis. Now, if we plot TPR (vertical axis) against FPR (horizontal axis), we have the receiver operating characteristic (ROC) curve of the test. The slope of that curve at any point is simply dTPR/dFPR. Hence, the optimal operating point is the value of T where the slope of the curve (or its tangent) is numerically equivalent to Some authors use the term cost of false positive (C) in place of burden of false positive and the term benefit of true positive (B) in place of burden of false negative, all being greater than zero. In that case, the optimal operating point is the value of T where the slope of the ROC curve (or its tangent) is numerically equivalent to The true and false-positive rates (from which one constructs an ROC curve) are the areas under the tails of the corresponding probability density distributions OCR for page 367 Assessment of the Scientific Information for the Radiation Exposure Screening and Education Program FIGURE D.1 True-positive and false-positive rates as a function of test result. Also shown is ratio of the probability of that test result in patients with disease to probability of that test result in patients without disease (plotted on logarithmic scale). (or segments of the cumulative probability distributions). The slope of the ROC curve is the ratio of the height of the probability density distribution for patients with disease to the height of the probability density distribution for patients with no disease. If one plots that ratio on the vertical axis against the test result on the horizontal axis, one can determine the cutoff criterion that corresponds to any given slope; if one also plots the corresponding cumulative distributions against the test result, one can also find the corresponding optimal point on the ROC curve (Figure D.1). AN EXAMPLE Now consider an example of finding the best operating point (the best criterion of positivity) in a population to be screened. Assume that we are screening for a disease (perhaps a slow growing cancer) for which early detection provides a benefit of 0.5 years of survival; thus, Bfn is 0.5. Further assume that a false-positive result is associated with a risk of 0.05 years (perhaps because the population to be screened has a high prevalence of severe chronic pulmonary disease, which substantially increases the risk posed by surgery), making Bfp 0.05. If one were considering screening a population in which the prevalence of disease is [subjunctive case is were] 10%, the best criterion for positivity would OCR for page 367 Assessment of the Scientific Information for the Radiation Exposure Screening and Education Program be the point on the ROC curve where its slope (or tangent) is [(1 − 0.1) × 0.05]/[0.1 × 0.5] or 0.90, a point near the middle of most ROC curves, with modest true-positive and false-positive rates. If we use the distributions displayed in Figure 9.2 and the ROC curve displayed in Figure 9.3, the optimal criterion of positivity would correspond to a true-positive rate (sensitivity) of 80% and a false positive rate of 17% (a specificity of 83%). However, if one were considering screening a population in which the prevalence of disease is [subjunctive case is were] only 1%, then the best criterion for positivity would be the point on the ROC curve where its slope (or tangent) is [(1 − 0.01) × 0.05]/[0.01 × 0.5] or 9.9, a point nearer to the origin for most ROC curves, with both true-positive and false-positive rates low. Again, if we use the distributions displayed in Figure 9.2 and the ROC curve displayed in Figure 9.3, the optimal criterion of positivity would correspond to a true-positive rate (sensitivity) of 42% and a false-positive rate of 1% (a specificity of 99%). In the special case when both probability density distributions (patients with and without disease) are normal or Gaussian in shape, the slope of the corresponding ROC at any point (the ratio of the heights of the corresponding density distributions) can be solved algebraically, although the equation is fairly complex. Because the normal distribution is where x is the test result, is the mean, and σ is the standard deviation, the optimal cutoff criterion will be the value of x where The value of x at which the equality holds can be found by successive approximations or using the “goal seek” function in a spreadsheet program.
{"url":"http://books.nap.edu/openbook.php?record_id=11279&page=367","timestamp":"2014-04-17T06:57:49Z","content_type":null,"content_length":"45520","record_id":"<urn:uuid:b2f96743-005e-4576-b17f-9603b8e3135c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Noether-type theorems for the generalized variational principle of Herglotz The generalized variational principle of Herglotz defines the functional whose extrema are sought by a differential equation rather than an integral. It reduces to the classical variational principle under classical conditions. The Noether theorems are not applicable to functionals defined by differential equations. For a system of differential equations derivable from the generalized variational principle of Herglotz, a first Noether-type theorem is proven, which gives explicit conserved quantities corresponding to the symmetries of the functional defined by the generalized variational principle of Herglotz. This theorem reduces to the classical first Noether theorem in the case when the generalized variational principle of Herglotz reduces to the classical variational principle. Applications of the first Noether-type theorem are shown and specific examples are provided. A second Noether-type theorem is proven, providing a non-trivial identity corresponding to each infinite-dimensional symmetry group of the functional defined by the generalized variational principle of Herglotz. This theorem reduces to the classical second Noether theorem when the generalized variational principle of Herglotz reduces to the classical variational principle. A new variational principle with several independent variables is defined. It reduces to Herglotz's generalized variational principle in the case of one independent variable, time. It also reduces to the classical variational principle with several independent variables, when only the spatial independent variables are present. Thus, it generalizes both. This new variational principle can give a variational description of processes involving physical fields. One valuable characteristic is that, unlike the classical variational principle with several independent variables, this variational principle gives a variational description of nonconservative processes even when the Lagrangian function is independent of time. This is not possible with the classical variational principle. The equations providing the extrema of the functional defined by this generalized variational principle are derived. They reduce to the classical Euler-Lagrange equations (in the case of several independent variables), when this new variational principle reduces to the classical variational principle with several independent variables. A first Noether-type theorem is proven for the generalized variational principle with several independent variables. One of its corollaries provides an explicit procedure for finding the conserved quantities corresponding to symmetries of the functional defined by this variational principle. This theorem reduces to the classical first Noether theorem in the case when the generalized variational principle with several independent variables reduces to the classical variational principle with several independent variables. It reduces to the first Noether-type theorem for Herglotz generalized variational principle when this generalized variational principle reduces to Herglotz's variational principle. A criterion for a transformation to be a symmetry of the functional defined by the generalized variational principle with several independent variables is proven. Applications of the first Noether-type theorem in the several independent variables case are shown and specific examples are provided.
{"url":"http://ir.library.oregonstate.edu/xmlui/handle/1957/7421","timestamp":"2014-04-18T03:55:05Z","content_type":null,"content_length":"27937","record_id":"<urn:uuid:3ba41020-491c-404c-9768-c241be5ae225>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
Scott A. Mitchell A Characterization of the Quadrilateral Meshes of a Surface Which Admit a Compatible Hexahedral Mesh of the Enclosed Volume A popular three-dimensional mesh generation scheme is to start with a quadrilateral mesh of the surface of a volume, and then attempt to fill the interior of the volume with hexahedra, so that the hexahedra touch the surface in exactly the given quadrilaterals. Folklore has maintained that there are many quadrilateral meshes for which no such compatible hexahedral mesh exists. In this paper we give an existence proof which contradicts this folklore: A quadrilateral mesh need only satisfy some very weak conditions for there to exist a compatible hexahedral mesh. For a volume that is topologically a ball, any quadrilateral mesh composed of an even number of quadrilaterals admits a compatible hexahedral mesh. We extend this to certain non-ball volumes: there is a construction to reduce to the ball case, and we give a necessary condition as well. Keywords: Computational Geometry, hexahedral mesh generation, existence. Bill Thurston also wrote a newsgroup article outlining a proof that is similar to my paper: "Hexahedral decomposition of polyhedra, sci.math, 25 Oct 1993." David Eppstein has a copy of this online. Scott A. Mitchell, "A Characterization of the Quadrilateral Meshes of a Surface Which Admit a Compatible Hexahedral Mesh of the Enclosed Volume." In proc. 13th Annual Symposium on Theoretical Aspects of Computer Science (STACS `96), Lecture Notes in Computer Science 1046, Springer, pages 465-476, 1996.
{"url":"http://www.sandia.gov/~samitch/exist-abstract.html","timestamp":"2014-04-18T23:17:21Z","content_type":null,"content_length":"2770","record_id":"<urn:uuid:fd7f1e43-9866-4b93-b8ec-aa8af1815861>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Non-Linear Simultaneous Equations I need some help with Non-Linear Simultaneous Equations. Does anyone know of any good tutorial sites? Also, will Cramer's Rule still work with NLSE? I don't see how it would with the addition of Greater Powers and Divisions. "When subtracted from 180, the sum of the square-root of the two equal angles of an isocoles triangle squared will give the square-root of the remaining angle squared."
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=27341","timestamp":"2014-04-20T18:32:00Z","content_type":null,"content_length":"9675","record_id":"<urn:uuid:9ebbca16-3e61-467c-9e83-56239826def3>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Open set From Encyclopedia of Mathematics in a topological space An element of the topology (cf. Topological structure (topology)) of this space. More specifically, let the topology $\tau$ of a topological space $(X, \tau)$ be defined as a system $\tau$ of subsets of the set $X$ such that: 1. $X\in\tau$, $\emptyset\in\tau$; 2. if $O_i\in\tau$, where $i=1,2$, then $O_1\cap O_2\in\tau$; 3. if $O_{\alpha}\in\tau$, where $\alpha\in\mathfrak{A}$, then $\bigcup\{O_{\alpha} : \alpha\in\mathfrak{A} \}$. The open sets in the space $(X, \tau)$ are then the elements of the topology $\tau$ and only them. [a1] R. Engelking, "General topology" , Heldermann (1989) How to Cite This Entry: Open set. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Open_set&oldid=29690 This article was adapted from an original article by B.A. Pasynkov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
{"url":"http://www.encyclopediaofmath.org/index.php/Open_set","timestamp":"2014-04-21T12:09:21Z","content_type":null,"content_length":"19271","record_id":"<urn:uuid:8a3304f7-d194-4e43-a0a7-f94886e6eaa0>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
There are close to zero comments on this script, and much of it is quite obscure. 1. There are still places with T instead of TRUE and unlabeled parameters relying on being in the right place: mxMatrix("Full", 1, 2, T, 20, "mean", dimnames=list(NULL, selVars), name="expMeanMZ"), 2. Most matrices need a comment about what it is they are holding, i.e.: mxMatrix("Full", nrow=1, ncol=1, free=TRUE, values=.6, label="a", name="X") # what is this for? mxAlgebra(X %*% t(X), name="A"), # what does this do? Proposing an algebra for the the groups is critical and needs a comment: mxAlgebra(rbind (cbind(A+C+E , A+C), cbind(A+C , A+C+E)), dimnames = list(selVars, selVars), name="expCovMZ"), # Algebra for expected variance/covariance matrix in MZs The actual MZ and DZ groups are quite obscure, especially because, unlike the pathic example, all the expectations are done outside these groups - some comment on this would be helpful. Something along the lines of "We next build a model for the MZ data. This needs to read in the columns of manifest variables we are modeling (selVars), which will provide the observed covariance matrix and means for this group. We need then to link these observations to an objective - our expected MZ covariance and means, so that the likelihood of the observed data can be calculated from any departure from the expectations our model generates." mxData(mzfData, type="raw"), mxFIMLObjective("twinACE.expCovMZ", "twinACE.expMeanMZ")), Can someone have a look and add these or further clarifications?
{"url":"http://openmx.psyc.virginia.edu/print/95","timestamp":"2014-04-17T10:04:14Z","content_type":null,"content_length":"9030","record_id":"<urn:uuid:46c8b5a0-0fe1-4997-afc9-73fba5498f8b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Characterizations of complex Abelian varieties (especially 3-folds) among projective nonsingular varieties? up vote 11 down vote favorite If $X$ is a complex Abelian variety of dimension $g$, then • The canonical sheaf is trivial • $\dim {\rm H}^i(X; \mathcal{O}_X) = \binom{g}{i}$. When $g =1,2$, then any connected, projective nonsingular $X$ satisfying the above two must be an Abelian variety. Is this true for higher $g$? If not, what other conditions can I add? Or is such a request unreasonable? Disclaimer: I don't know very much about Abelian varieties, so apologies if this material is standard. A search in the literature turned up some papers about characterizations of Abelian varieties up to birational equivalence, but under weaker assumptions. I really want to know if I'm given a variety $X$, how to tell if it is Abelian or not via some sort of reasonably accessible sheaf-related conditions. I'm most interested in the case $g=3$, but results for other $g$ are welcome also. abelian-varieties ag.algebraic-geometry add comment 1 Answer active oldest votes A result of Kawamata (Kawamata, Yujiro, Characterization of abelian varieties. Compositio Mathematica, 43 no. 2 (1981), p. 253-276) implies that, under your assumptions, $X$ is birational to an abelian variety (in fact you just need the Kodaira dimension of $X$ to be zero and the irregularity to be equal to the dimension of $X$). Once you know that $X$ is birational to an abelian variety $A$, a Lemma of Deligne implies that if the canonical divisor on $X$ is trivial, then $X$ is in fact an abelian variety. This is not a particularly deep result. First, the rational map $f \colon X \to A$ is actually a morphism (essentially because $A$ cannot contain rational curves). Second, the morphism $f$ up vote 18 induces a morphism $df$ between the cotangent bundles. The determinant of $df$ is a morphism between the canonical bundles of $A$ and $X$, that are both trivial by assumption. Thus the down vote determinant is either identically zero, or it is an isomorphism. Since the morphism $f$ is generically etale, the determinant is not identically zero. But then it is an isomorphism, so accepted that the morphism $f$ is always etale, and we conclude that $X$ is an abelian variety. EDIT: Ah, as Pete remarked below, I did not answer the question! The answer is "Yes"! Even under weaker assumption: namely it suffices to know that the canonical bundle is trivial and that $\dim (X) = h^1(X,\mathcal{O}_X)$. So, just to be sure...**yes**, right? – Pete L. Clark Aug 8 '10 at 9:56 add comment Not the answer you're looking for? Browse other questions tagged abelian-varieties ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/34891/characterizations-of-complex-abelian-varieties-especially-3-folds-among-projec/34897","timestamp":"2014-04-18T21:16:53Z","content_type":null,"content_length":"53298","record_id":"<urn:uuid:b8511973-36d2-414d-8c26-6a5492ebff2e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
From Encyclopedia of Mathematics A concept reflecting the algebraic properties of surjective mappings of sets. A morphism category Every isomorphism is an epimorphism. The product of two epimorphisms is an epimorphism. Therefore, all epimorphisms of a category In the categories of sets, vector spaces, groups, and Abelian groups, the epimorphisms are precisely the surjective mappings, i.e. the linear mappings and the homomorphisms of one set, vector space or group onto another set, vector space or group. However, in the categories of topological spaces or associative rings there are non-surjective epimorphisms (that is, mappings that are not "onto" ). The concept of an epimorphism is dual to that of a monomorphism. In the article above, The inclusion [a1] B. Mitchell, "Theory of categories" , Acad. Press (1965) How to Cite This Entry: Epimorphism. M.Sh. Tsalenko (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Epimorphism&oldid=12481 This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
{"url":"http://www.encyclopediaofmath.org/index.php/Epimorphism","timestamp":"2014-04-19T07:02:54Z","content_type":null,"content_length":"17661","record_id":"<urn:uuid:ac70e50f-fed1-4074-918b-28667d415cb0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving the limit... June 17th 2009, 05:11 PM #1 Jun 2009 Proving the limit... I get how to get the answer, but I need to use formal definition of limits... something I am very unfamiliar with. lim x->a g(x) = infinity and g(x) < or equal f(x) for x -> a I know the answer is lim as x->a for f(x) = infinity... but what definitions do I need to do this? I am not sure how to show work for this question. I get how to get the answer, but I need to use formal definition of limits... something I am very unfamiliar with. lim x->a g(x) = infinity and g(x) < or equal f(x) for x -> a I know the answer is lim as x->a for f(x) = infinity... but what definitions do I need to do this? I am not sure how to show work for this question. Well, share the question with us, and maybe we can help. Last edited by VonNemo19; June 17th 2009 at 05:20 PM. Reason: Edited my edit reason Oops... my bad. If lim x->a g(x) = infinity and g(x) < or equal to f(x) for x -> a, then limx-> a f(x) = infinity Prove, using the formal definition of limits, each of the statement is true. So... that's the question. Honestly, I never had to answer this kind of questions. It's usually applications format or equation format. Another one is limx->infinity f(x) = -infinuty and c > 0, then limx ->infinity c f(x) = - infinity Not sure which rules to use, but I can reason... that f(x) is a negative infinity... times bu any constant positive number is still negative infinity. I am not sure if there are any "specific Oops... my bad. If lim x->a g(x) = infinity and g(x) < or equal to f(x) for x -> a, then limx-> a f(x) = infinity Prove, using the formal definition of limits, each of the statement is true. So... that's the question. Honestly, I never had to answer this kind of questions. It's usually applications format or equation format. Another one is limx->infinity f(x) = -infinuty and c > 0, then limx ->infinity c f(x) = - infinity Not sure which rules to use, but I can reason... that f(x) is a negative infinity... times bu any constant positive number is still negative infinity. I am not sure if there are any "specific So we wish to show that if $\lim_{x\to{a}}g(x)=\infty$ and $g(x)\leq{f(x)}$as ? $x\to{a}$ ?, then $\lim_{x\to{a}}f(x)=\infty$ Well, for infinite limits the definition states: Let g(x) be a function that is defined on an interval containing a, except possibly at a. Then we say that if for every number M>0, there is some number $\delta>0$ such that $g(x)>M$ whenever $0<\mid{x-a}\mid<\delta$ So then the proof is straight forward if $g(x)>M$ for every x arbitrarily close to a, then $f(x)>g(x)$ for every value of x arbitrarily close to a $\Rightarrow{f(x)>M}$ $f(x)>M$ whenever $0<\mid{x-a}\mid<\delta$ So we wish to show that if $\lim_{x\to{a}}g(x)=\infty$ and $g(x)\leq{f(x)}$as ? $x\to{a}$ ?, then $\lim_{x\to{a}}f(x)=\infty$ Well, for infinite limits the definition states: Let g(x) be a function that is defined on an interval containing a, except possibly at a. Then we say that if for every number M>0, there is some number $\delta>0$ such that $g(x)>M$ whenever $0<\mid{x-a}\mid<\delta$ So then the proof is straight forward if $g(x)>M$ for every x arbitrarily close to a, then $f(x)>g(x)$ for every value of x arbitrarily close to a $\Rightarrow{f(x)>M}$ $f(x)>M$ whenever $0<\mid{x-a}\mid<\delta$ Umm... I am very good with math. What is M and delta mean? And I've never seen $0<\mid{x-a}\mid<\delta$ expression before, though I kinda understand it. The problem you posted requires knowlege of the epsilon-delta (or formal) defintion of limit. M is some arbitrarily large number, loosely speaking, delta is the distance x is away from a, and the inequality is a way of narrowowing down the distance between a and x to any desired length. June 17th 2009, 05:18 PM #2 June 17th 2009, 05:31 PM #3 Jun 2009 June 17th 2009, 06:02 PM #4 June 17th 2009, 06:11 PM #5 Jun 2009 June 17th 2009, 06:18 PM #6
{"url":"http://mathhelpforum.com/calculus/93145-proving-limit.html","timestamp":"2014-04-21T02:52:40Z","content_type":null,"content_length":"53914","record_id":"<urn:uuid:7e542f71-6483-4cb1-ac6d-ef04a78153aa>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Intermediate Algebra: Functions & Authentic Applications Unique in its approach, the Lehmann Algebra Series uses curve fitting to model compelling, authentic situations, while answering the perennial question “But what is this good for?” Lehmann begins with interesting data sets, and then uses the data to find models and derive equations that fit the scenario. This interactive approach to the data helps readers connect concepts and motivates them to learn. The curve-fitting approach encourages readers to understand functions graphically, numerically, and symbolically. Because of the multi-faceted understanding that they gain, readers are able to verbally describe the concepts related to functions. Read More Show Less Product Details • ISBN-13: 9780321620958 • Publisher: Pearson • Publication date: 11/9/2009 • Edition description: Older Edition • Edition number: 4 • Pages: 692 • Sales rank: 152,190 • Product dimensions: 8.70 (w) x 11.00 (h) x 1.10 (d) Table of Contents 1. Linear Equations and Linear Functions 1.1 Using Qualitative Graphs to Describe Situations 1.2 Graphing Linear Equations 1.3 Slope of a Line 1.4 Meaning of Slope for Equations, Graphs, and Tables 1.5 Finding Linear Equations 1.6 Functions Chapter Summary Key Points of Chapter 1 Chapter 1 Review Exercises Chapter 1 Test 2. Modeling with Linear Functions 2.1 Using Lines to Model Data 2.2 Finding Equations of Linear Models 2.3 Function Notation and Making Predictions 2.4 Slope Is a Rate of Change Chapter Summary Key Points of Chapter 2 Chapter 2 Review Exercises Chapter 2 Test 3. Systems of Linear Equations 3.1 Using Graphs and Tables to Solve Systems 3.2 Using Substitution and Elimination to Solve Systems 3.3 Using Systems to Model Data 3.4 Value, Interest, and Mixture Problems 3.5 Using Linear Inequalities in One Variable to Make Predictions Chapter Summary Key Points of Chapter 3 Chapter 3 Review Exercises Chapter 3 Test Cumulative Review of Chapters 1—3 4. Exponential Functions 4.1 Properties of Exponents 4.2 Rational Exponents 4.3 Graphing Exponential Functions 4.4 Finding Equations of Exponential Functions 4.5 Using Exponential Functions to Model Data Chapter Summary Key Points of Chapter 4 Chapter 4 Review Exercises Chapter 4 Test 5. Logarithmic Functions 5.1 Inverse Functions 5.2 Logarithmic Functions 5.3 Properties of Logarithms 5.4 Using the Power Property with Exponential Models to Make Predictions 5.5 More Properties of Logarithms 5.6 Natural Logarithms Chapter Summary Key Points of Chapter 5 Chapter 5 Review Exercises Chapter 5 Test Cumulative Review of Chapters 1—5 6. Polynomial Functions 6.1 Adding and Subtracting Polynomial Expressions and Functions 6.2 Multiplying Polynomial Expressions and Functions 6.3 Factoring Trinomials of the Form x2 + bx + c; Factoring out the GCF 6.4 Factoring Polynomials 6.5 Factoring Special Binomials; A Factoring Strategy 6.6 Using Factoring to Solve Polynomial Equations Chapter Summary Key Points of Chapter 6 Chapter 6 Review Exercises Chapter 6 Test 7. Quadratic Functions 7.1 Graphing Quadratic Functions in Vertex Form 7.2 Graphing Quadratic Functions in Standard Form 7.3 Using the Square Root Property to Solve Quadratic Equations 7.4 Solving Quadratic Equations by Completing the Square 7.5 Using the Quadratic Formula to Solve Quadratic Equations 7.6 Solving Systems of Linear Equations in Three Variables; Finding Quadratic Functions 7.7 Finding Quadratic Models 7.8 Modeling with Quadratic Functions Chapter Summary Key Points of Chapter 7 Chapter 7 Review Exercises Chapter 7 Test Cumulative Review of Chapters 1—7 8. Rational Functions 8.1 Finding the Domains of Rational Functions and Simplifying Rational Expressions 8.2 Multiplying and Dividing Rational Expressions 8.3 Adding and Subtracting Rational Expressions 8.4 Simplifying Complex Rational Expressions 8.5 Solving Rational Equations 8.6 Modeling with Rational Functions 8.7 Variation Chapter Summary Key Points of Chapter 8 Chapter 8 Review Exercises Chapter 8 Test 9. Radical Functions 9.1 Simplifying Radical Expressions 9.2 Adding, Subtracting, and Multiplying Radical Expressions 9.3 Rationalizing Denominators and Simplifying Quotients of Radical Expressions 9.4 Graphing and Combining Square Root Functions 9.5 Solving Radical Equations 9.6 Modeling with Square Root Functions Chapter Summary Key Points of Chapter 9 Chapter 9 Review Exercises Chapter 9 Test 10. Sequences and Series 10.1 Arithmetic Sequences 10.2 Geometric Sequences 10.3 Arithmetic Series 10.4 Geometric Series Chapter Summary Key Points of Chapter 10 Chapter 10 Review Exercises Chapter 10 Test Cumulative Review of Chapters 1—10 11. Additional Topics 11.1 Absolute Value: Equations and Inequalities Key Points of Section 11.1 11.2 Linear Inequalities in Two Variables; Systems of Linear Inequalities Key Points of Section 11.2 11.3 Performing Operations with Complex Numbers Key Points of Section 11.3 11.4 Pythagorean Theorem, Distance Formula, and Circles Key Points of Section 11.4 11.5 Ellipses and Hyperbolas Key Points of Section 11.5 11.6 Solving Nonlinear Systems of Equations Key Points of Section 11.6 A. Reviewing Prerequisite Material A.1 Plotting Points A.2 Identifying Types of Numbers A.3 Absolute Value A.4 Performing Operations with Real Numbers A.5 Exponents A.6 Order of Operations A.7 Constants, Variables, Expressions, and Equations A.8 Distributive Law A.9 Combining Like Terms A.10 Solving Linear Equations in One Variable A.11 Solving Equations in Two or More Variables A.12 Equivalent Expressions and Equivalent Equations B Using a TI-83 or TI-84 Graphing Calculator B.1 Turning a Graphing Calculator On or Off B.2 Making the Screen Lighter or Darker B.3 Entering an Equation B.4 Graphing an Equation B.5 Tracing a Curve without a Scattergram B.6 Zooming B.7 Setting the Window Format B.8 Plotting Points in a Scattergram B.9 Tracing a Scattergram B.10 Graphing Equations with a Scattergram B.11 Tracing a Curve with a Scattergram B.12 Turning a Plotter On or Off B.13 Creating a Table B.14 Creating a Table for Two Equations B.15 Using “Ask” in a Table B.16 Finding the Regression Curve for Some Data B.17 Plotting Points in Two Scattergrams B.18 Finding the Intersection Point(s) of Two Curves B.19 Finding the Minimum Point(s) or Maximum Point(s) of a Curve B.20 Storing a Value B.21 Finding Any x-Intercepts of a Curve B.22 Turning an Equation On or Off B.23 Finding Coordinates of Points B.24 Graphing Equations with Axes “Turned Off” B.25 Entering an Equation by Using Yn References B.26 Responding to Error Messages Read More Show Less
{"url":"http://www.barnesandnoble.com/w/intermediate-algebra-jay-lehmann/1100835914?ean=9780321620958&itm=1&usri=9780321620958","timestamp":"2014-04-21T06:31:25Z","content_type":null,"content_length":"123748","record_id":"<urn:uuid:8dca580a-1db2-452f-a5ab-99a15e8df6e5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Syllabus Math 410: Numerical Methods for Scientists and Engineers Fall 2011 Catalog description: Prerequisites: CS 111 or ES 111; MATH 231 passed with grade C- or better. Co-requisite: MATH 335, 3 credit hours, 3 class hours. Floating point arithmetic, solution of linear and nonlinear systems of equations, in- terpolation, numerical differentiation and integration, numerical solution of ordinary differential equations, approximation. Class: TR 9:30­10:45 A.M., Jones Annex 102 Instructor: Dr. Rakhim Aitbayev, Weir 236, x5463, aitbayev@nmt.edu Office hours: MWF 1:00­3:00 P. M. or by appointment. Course webpage: http://www.nmt.edu/aitbayev/math410/ Textbook: A First Course in Numerical Methods, 1st edition, by U. M. Ascher and C. Greif, SIAM, 2011, ISBN 978-0-898719-97-0 (required). Recommended Matlab reference: "Matlab Guide" by D.J. Higham and N.J. Higham, SIAM (available on reserve at the Skeen Library). Course outline: Chapters 1­3 and 9­16 of the textbook. The covered topics are as follows: Numerical algorithms, roundoff errors, numerical solution of nonlinear equations and systems, polynomial and spline interpolation, function approximation, discrete Fourier transform, numerical differentiation and integration, numerical solution of problems
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/358/4233220.html","timestamp":"2014-04-20T21:59:08Z","content_type":null,"content_length":"8443","record_id":"<urn:uuid:bd70b794-a7a6-49e6-b33b-d5d956f2e5ef>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Histogram of the percentiles of rainfall for running 3-month periods in chronological order from 1955 through 1996. Numerical symbols that form the bars of the histogram reflect the climatological rainfall amount for the season in question; these repeat identically from one year to the next. The climatological rainfall amount categories represented by each numerical symbol used to form the histogram bars are shown in the legend along the top of the figure. Histogram bars are plotted with respect to the median, extending upward for above-median percentiles and downward for sub-median percentiles. Percentiles are rounded to the nearest 5 %ile, such that 50 %ile indicates 47.5-52.5 %ile, 35 indicates 32.5-37.5 %ile, etc. The 0 and 100 %iles denote the 0.0-2.5 and 97.5 to 100.0 %iles intervals, respectively. The numerical symbols enable a user to determine whether a period of rainfall deficit (shown as a downward-directed bar) coincides with a rainy part of the year or a dry part of the year. To further facilitate the recognition of the normal wetness of the time of year, the shading indicates the six driest 3-month periods of the year. The shading repeats identically for each year as a fixed cycle, in similar fashion to the numerical symbols. The ENSO status of each boreal winter is shown underneath the main panels of the histogram.
{"url":"http://www.cpc.ncep.noaa.gov/pacdir/html-figdir/html1dir/will1.html","timestamp":"2014-04-17T12:33:22Z","content_type":null,"content_length":"2002","record_id":"<urn:uuid:2a61a068-b913-4deb-b895-2f87ae94a045>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
equivalence principle It is said the Einstein Equivalence Principle is the heart of GR , and for this principle, the gravitational field can be convincingly described by the structure of the spacetime. But what puzzles me is how can this principle do it. It just includes WEP, LLI and LPI. How can these imply the gravitational field should be described by the metric thoery?
{"url":"http://www.physicsforums.com/showthread.php?t=199434","timestamp":"2014-04-16T16:01:33Z","content_type":null,"content_length":"19236","record_id":"<urn:uuid:67e2537e-9f78-4a7e-8654-1374efbdef88>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50d1f73ce4b069abbb7117fe","timestamp":"2014-04-16T22:42:09Z","content_type":null,"content_length":"37006","record_id":"<urn:uuid:21c24ec0-9c7b-4245-b0d2-18d37d97c0bb>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Miscellaneous Projects Here are some of the other areas I've worked in: The size of ordered binary decision diagrams (BDDs) is sensitive to the variable ordering that is chosen. Finding an optimal variable ordering is NP-complete. I investigated two approaches to tackling the BDD variable ordering problem: (1) Finding approximation algorithms, and (2) Using machine learning. Unfortunately, neither approach was very successful. Approximation hardness: The following report gives some approximation hardness results for problems in Boolean function complexity. S. A. Seshia and R. E. Bryant. Computer Science Department technical report CMU-CS-00-156, August 2000. Machine learning for BDD variable reordering: The main idea was to use a decision tree-based algorithm to learn position and grouping information that could be used in sifting. Minor improvements in the stability of the sifting algorithm were obtained. This work, done in 1999, remains unpublished. Robust implementation of geometric algorithms requires the use of exact arithmetic. Since exact arithmetic is expensive, one must use it only when finite-precision arithmetic proves to be inadequate. Two techniques for uncovering this inadequacy are interval arithmetic and error analysis. We did a performance study of these two alternatives with respect to the line-side and in-circle geometric predicates, and reported the results in the following technical report. S. A. Seshia, G. E. Blelloch, and R. W. Harper. Computer Science Department Technical report CMU-CS-00-172, December 2000. In a term paper, I explored a hierarchical technique for verifying microelectromechanical system (MEMS) designs via simulation. The basic idea was to translate a full-scale schematic into a reduced-order model, and then simulate this model rather than the original, detailed schematic. The simulation results of the reduced-order model were comparable to those of the original schematic while being faster by a factor of about 100. Hierarchical Verification for Microelectromechanical Systems. S. A. Seshia. Unpublished manuscript, December 1998. My B.Tech. (senior undergraduate) thesis was in the area of multisensor data fusion. The thesis reviewed literature on sensor input synchronization, image registration (alignment) and image fusion. A new algorithm for image registration was proposed and implemented, and formal evaluation criteria for image fusion algorithms were given. Multisensor Image Alignment and Fusion. S. A. Seshia. B. Tech. thesis, April 1998. We compared two approaches to formal verification of cryptographic protocols: theory generation and model checking. Based on this comparison, we proposed a combination of the two methods. A Comparison and Combination of Theory Generation and Model Checking for Security Protocol Analysis. Nicholas J. Hopper, Sanjit A. Seshia, Jeannette M. Wing. In Workshop on Formal Methods in Computer Security (FMCS), July 2000, Chicago. Associated with Intl. Conf. on Computer-Aided Verification (CAV'00). An earlier version appeared as a CMU CS technical report. Nicholas J. Hopper, Sanjit A. Seshia, Jeannette M. Wing. Computer Science Department technical report CMU-CS-00-107, January 2000. Esterel is a synchronous, textual language for programming reactive systems. Statecharts is a graphical language for specifying reactive systems. The paper below gives a translation of Statecharts to Esterel so as to avail of both the visual power of Statecharts and the verification tools available with Esterel. S. A. Seshia, R. K. Shyamasundar, A. K. Bhattacharjee and S. D. Dhodapkar. In Proceedings of the World Congress on Formal Methods, FM'99, Toulouse, France, LNCS vol. 1709, Sept. 1999, pp.983-1007. My co-authors implemented and deployed a tool called PERTS based on the above translation. Here is a paper that describes this tool. PERTS: A Graphical Environment for the Specification and Verification of Reactive Systems. A. K. Bhattcharjee, S. D. Dhodapkar, S. A. Seshia and R. K. Shyamasundar. Journal of Reliability Engineering and System Safety, 71 (3), 2001, pp. 299 -310 , Elsevier Science.(corrigendum in vol. 72) An earlier conference version is available here: A.K. Bhattcharjee, S.D. Dhodapkar, S.A. Seshia and R.K. Shyamasundar. In Proceedings of SAFECOMP'99, Toulouse, France. LNCS vol. 1698, Sept. 1999, pp. 431-444.
{"url":"http://www.eecs.berkeley.edu/~sseshia/research/misc.html","timestamp":"2014-04-20T00:41:43Z","content_type":null,"content_length":"7555","record_id":"<urn:uuid:c9da3a21-f753-41cd-8399-5e2c926e314b>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
integral of a square root October 29th 2009, 07:20 PM integral of a square root how do you find the integral of a square root of a polynomial? the integral of the sqaure root of (t^2+t^4) October 29th 2009, 07:23 PM October 29th 2009, 07:46 PM in the problem I have, its a definite integral from 0 to 1. but I'm not sure how to find the integral of anything under the square root October 29th 2009, 08:21 PM let $u=1+t^2 \implies du=2tdt \iff \frac{1}{2}du=tdt$ $\int_{1}^{2}\sqrt{u}\frac{1}{2}du=\frac{1}{2}\int_ {1}^{2}u^{1/2}du...$
{"url":"http://mathhelpforum.com/calculus/111290-integral-square-root-print.html","timestamp":"2014-04-19T06:18:10Z","content_type":null,"content_length":"6637","record_id":"<urn:uuid:92d14c85-dd90-454a-a1c2-7a0a0f13d965>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Social Choice Theory First published Wed Dec 18, 2013 Social choice theory is the study of collective decision processes and procedures. It is not a single theory, but a cluster of models and results concerning the aggregation of individual inputs (e.g., votes, preferences, judgments, welfare) into collective outputs (e.g., collective decisions, preferences, judgments, welfare). Central questions are: How can a group of individuals choose a winning outcome (e.g., policy, electoral candidate) from a given set of options? What are the properties of different voting systems? When is a voting system democratic? How can a collective (e.g., electorate, legislature, collegial court, expert panel, or committee) arrive at coherent collective preferences or judgments on some issues, on the basis of its members' individual preferences or judgments? How can we rank different social alternatives in an order of social welfare? Social choice theorists study these questions not just by looking at examples, but by developing general models and proving theorems. Pioneered in the 18th century by Nicolas de Condorcet and Jean-Charles de Borda and in the 19th century by Charles Dodgson (also known as Lewis Carroll), social choice theory took off in the 20th century with the works of Kenneth Arrow, Amartya Sen, and Duncan Black. Its influence extends across economics, political science, philosophy, mathematics, and recently computer science and biology. Apart from contributing to our understanding of collective decision procedures, social choice theory has applications in the areas of institutional design, welfare economics, and social epistemology. The two scholars most often associated with the development of social choice theory are the Frenchman Nicolas de Condorcet (1743–1794) and the American Kenneth Arrow (born 1921). Condorcet was a liberal thinker in the era of the French Revolution who was pursued by the revolutionary authorities for criticizing them. After a period of hiding, he was eventually arrested, though apparently not immediately identified, and he died in prison (for more details on Condorcet, see McLean and Hewitt 1994). In his Essay on the Application of Analysis to the Probability of Majority Decisions (1785), he advocated a particular voting system, pairwise majority voting, and presented his two most prominent insights. The first, known as Condorcet's jury theorem, is that if each member of a jury has an equal and independent chance better than random, but worse than perfect, of making a correct judgment on whether a defendant is guilty (or on some other factual proposition), the majority of jurors is more likely to be correct than each individual juror, and the probability of a correct majority judgment approaches 1 as the jury size increases. Thus, under certain conditions, majority rule is good at ‘tracking the truth’ (e.g., Grofman, Owen, and Feld 1983; List and Goodin 2001). Condorcet's second insight, often called Condorcet's paradox, is the observation that majority preferences can be ‘irrational’ (specifically, intransitive) even when individual preferences are ‘rational’ (specifically, transitive). Suppose, for example, that one third of a group prefers alternative x to y to z, a second third prefers y to z to x, and a final third prefers z to x to y. Then there are majorities (of two thirds) for x against y, for y against z, and for z against x: a ‘cycle’, which violates transitivity. Furthermore, no alternative is a Condorcet winner, an alternative that beats, or at least ties with, every other alternative in pairwise majority contests. Condorcet anticipated a key theme of modern social choice theory: majority rule is at once a plausible method of collective decision making and yet subject to some surprising problems. Resolving or bypassing these problems remains one of social choice theory's core concerns. While Condorcet had investigated a particular voting method (majority voting), Arrow, who won the Nobel Memorial Prize in Economics in 1972, introduced a general approach to the study of preference aggregation, partly inspired by his teacher of logic, Alfred Tarski (1901–1983), from whom he had learnt relation theory as an undergraduate at the City College of New York (Suppes 2005). Arrow considered a class of possible aggregation methods, which he called social welfare functions, and asked which of them satisfy certain axioms or desiderata. He proved that, surprisingly, there exists no method for aggregating the preferences of two or more individuals over three or more alternatives into collective preferences, where this method satisfies five seemingly plausible axioms, discussed below. This result, known as Arrow's impossibility theorem, prompted much work and many debates in social choice theory and welfare economics. William Riker (1920–1993), who inspired the Rochester school in political science, interpreted it as a mathematical proof of the impossibility of populist democracy (e.g., Riker 1982). Others, most prominently Amartya Sen (born 1933), who won the 1998 Nobel Memorial Prize, took it to show that ordinal preferences are insufficient for making satisfactory social choices. Commentators also questioned whether Arrow's desiderata on an aggregation method are as innocuous as claimed or whether they should be relaxed. The lessons from Arrow's theorem depend, in part, on how we interpret an Arrovian social welfare function. The use of ordinal preferences as the ‘aggregenda’ may be easier to justify if we interpret the aggregation rule as a voting method than if we interpret it as a welfare evaluation method. Sen argued that when a social planner seeks to rank different social alternatives in an order of social welfare (thereby employing some aggregation rule as a welfare evaluation method), it may be justifiable to use additional information over and above ordinal preferences, such as interpersonally comparable welfare measurements (e.g., Sen 1982). Arrow himself held the view that interpersonal comparison of utilities has no meaning and … that there is no meaning relevant to welfare comparisons in the measurability of individual utility. (1951/1963: 9) This view was influenced by neoclassical economics, associated with scholars such as Vilfredo Pareto (1848–1923), Lionel Robbins (1898–1984), John Hicks (1904–1989), co-winner of the Economics Nobel Prize with Arrow, and Paul Samuelson (1915–2009), another Nobel Laureate. Arrow's theorem demonstrates the stark implications of the ‘ordinalist’ assumptions of neoclassical thought. Nowadays most social choice theorists have moved beyond the early negative interpretations of Arrow's theorem and are interested in the trade-offs involved in finding satisfactory decision procedures. Sen has promoted this ‘possibilist’ interpretation of social choice theory (e.g., in his 1998 Nobel lecture). Within this approach, Arrow's axiomatic method is perhaps even more influential than his impossibility theorem (on the axiomatic method, see Thomson 2000). The paradigmatic kind of result in contemporary axiomatic work is the ‘characterization theorem’. Here the aim is to identify a set of plausible necessary and sufficient conditions that uniquely characterize a particular solution (or class of solutions) to a given type of collective decision problem. An early example is Kenneth May's (1952) characterization of majority rule, discussed below. Condorcet and Arrow are not the only founding figures of social choice theory. Condorcet's contemporary and co-national Jean-Charles de Borda (1733–1799) defended a voting system that is often seen as a prominent alternative to majority voting. The Borda count, formally defined later, avoids Condorcet's paradox but violates one of Arrow's conditions, the independence of irrelevant alternatives. Thus the debate between Condorcet and Borda is a precursor to some modern debates on how to respond to Arrow's theorem. The origins of this debate precede Condorcet and Borda. In the Middle Ages, Ramon Llull (c1235–1315) proposed the aggregation method of pairwise majority voting, while Nicolas Cusanus (1401–1464) proposed a variant of the Borda count (McLean 1990). In 1672, the German statesman and scholar Samuel von Pufendorf (1632–1694) compared simple majority, qualified majority, and unanimity rules and offered an analysis of the structure of preferences that can be seen as a precursor to later discoveries (e.g., on single-peakedness, discussed below) (Gaertner 2005). In the 19^th century, the British mathematician and clergyman Charles Dodgson (1832–1898), better known as Lewis Carroll, independently rediscovered many of Condorcet's and Borda's insights and also developed a theory of proportional representation. It was largely thanks to the Scottish economist Duncan Black (1908–1991) that Condorcet's, Borda's, and Dodgson's social-choice-theoretic ideas were drawn to the attention of the modern research community (McLean, McMillan, and Monroe 1995). Black also made several discoveries related to majority voting, some of which are discussed below. In France, George-Théodule Guilbaud ([1952] 1966) wrote an important but often overlooked paper, revisiting Condorcet's theory of voting from a logical perspective and sparking a French literature on the Condorcet effect, the logical problem underlying Condorcet's paradox, which has only recently received more attention in Anglophone social choice theory (Monjardet 2005). For further contributions on the history of social choice theory, see McLean, McMillan, and Monroe (1996), McLean and Urken (1995), McLean and Hewitt (1994), and a special issue of Social Choice and Welfare, edited by Salles (2005). To introduce social choice theory formally, it helps to consider a simple decision problem: a collective choice between two alternatives. Let N = {1, 2, …, n} be a set of individuals, where n ≥ 2. The individuals have to choose between two alternatives (candidates, policies etc.). Each individual i ∈ N casts a vote, denoted v[i], where • v[i] = 1 represents a vote for the first alternative, • v[i] = −1 represents a vote for the second alternative, and optionally • v[i] = 0 represents an abstention (for simplicity, we set this possibility aside). A combination of votes across the individuals, <v[1], v[2], …, v[n]>, is called a profile. For any profile, the group seeks to arrive at a social decision v, where • v= 1 represents a decision for the first alternative, • v = −1 represents a decision for the second alternative, and • v = 0 represents a tie. An aggregation rule is a function f that assigns to each profile <v[1], v[2], …, v[n]> (in some domain of admissible profiles) a social decision v = f(v[1], v[2], …, v[n]). Examples are: Majority rule: For each profile <v[1], v[2], …, v[n]>, Dictatorship: For each profile <v[1], v[2], …, v[n]>, f(v[1], v[2], …, v[n]) = v[i], where i ∈ N is an antecedently fixed individual (the ‘dictator’). Weighted majority rule: For each profile <v[1], v[2], …, v[n]>, where w[1], w[2], …, w[n] are real numbers, interpreted as the ‘voting weights’ of the n individuals. Two points about the concept of an aggregation rule are worth noting. First, under the standard definition, an aggregation rule is defined extensionally, not intensionally: it is a mapping ( functional relationship) between individual inputs and collective outputs, not a set of explicit instructions (a rule in the ordinary-language sense) that could be extended to inputs outside the function's formal domain. Secondly, an aggregation rule is defined for a fixed set of individuals N and a fixed decision problem, so that majority rule in a group of two individuals is a different mathematical object from majority rule in a group of three. To illustrate, Tables 1 and 2 show majority rule for these two group sizes as extensional objects. The rows of each table correspond to the different possible profiles of votes; the final column displays the resulting social decisions. Table 1: Majority rule among two individuals Individual 1's vote Individual 2's vote Collective decision 1 −1 0 −1 1 0 −1 −1 −1 Table 2: Majority rule among three individuals Individual 1's vote Individual 2's vote Individual 3's vote Collective decision 1 1 −1 1 1 −1 1 1 1 −1 −1 −1 −1 1 1 1 −1 1 −1 −1 −1 −1 1 −1 −1 −1 −1 −1 The present way of representing an aggregation rule helps us see how many possible aggregation rules there are (see also List 2011). Suppose there are k profiles in the domain of admissible inputs (in the present example, k = 2^n, since each of the n individuals has two choices, with abstention disallowed). Suppose, further, there are l possible social decisions for each profile (in the example, l = 3, allowing ties). Then there are l^k possible aggregation rules: the relevant table has k rows, and in each row, there are l possible ways of specifying the final entry (the collective decision). Thus the number of possible aggregation rules grows exponentially with the number of admissible profiles and the number of possible decision outcomes. To select an aggregation rule non-arbitrarily from this large class of possible ones, some constraints are needed. I now consider three formal arguments for majority rule. The first involves imposing some ‘procedural’ requirements on the relationship between individual votes and social decisions and showing that majority rule is the only aggregation rule satisfying them. May (1952) introduced four such requirements: Universal domain: The domain of admissible inputs of the aggregation rule consists of all logically possible profiles of votes <v[1], v[2], …, v[n]>, where each v[i] ∈ {−1,1}. Anonymity: For any admissible profiles <v[1], v[2], …, v[n]> and <w[1], w[2], …, w[n]> that are permutations of each other (i.e., one can be obtained from the other by reordering the entries), the social decision is the same, i.e., f(v[1], v[2], …, v[n]) = f(w[1], w[2], …, w[n]). Neutrality: For any admissible profile <v[1], v[2], …, v[n]>, if the votes for the two alternatives are reversed, the social decision is reversed too, i.e., f(−v[1], −v[2], …, −v[n]) = −f(v[1], v [2], …, v[n]). Positive responsiveness: For any admissible profile <v[1], v[2], …, v[n]>, if some voters change their votes in favour of one alternative (say the first) and all other votes remain the same, the social decision does not change in the opposite direction; if the social decision was a tie prior to the change, the tie is broken in the direction of the change, i.e., if [w[i] > v[i] for some i and w[j] = v[j] for all other j] and f(v[1], v[2], …, v[n]) = 0 or 1, then f(w[1], w[2], …, w[n]) = 1. Universal domain requires the aggregation rule to cope with any level of ‘pluralism’ in its inputs; anonymity requires it to treat all voters equally; neutrality requires it to treat all alternatives equally; and positive responsiveness requires the social decision to be a positive function of the way people vote. May proved the following: Theorem (May 1952): An aggregation rule satisfies universal domain, anonymity, neutrality, and positive responsiveness if and only if it is majority rule. Apart from providing an argument for majority rule based on four plausible procedural desiderata, the theorem helps us characterize other aggregation rules in terms of which desiderata they violate. Dictatorships and weighted majority rules with unequal individual weights violate anonymity. Asymmetrical supermajority rules (under which a supermajority of the votes, such as two thirds or three quarters, is required for a decision in favour of one of the alternatives, while the other alternative is the default choice) violate neutrality. This may sometimes be justifiable, for instance when there is a presumption in favour of one alternative, such as a presumption of innocence in a jury decision. Symmetrical supermajority rules (under which neither alternative is chosen unless it is supported by a sufficiently large supermajority) violate positive responsiveness. A more far-fetched example of an aggregation rule violating positive responsiveness is the inverse majority rule (here the alternative rejected by a majority wins). Condorcet's jury theorem provides a consequentialist argument for majority rule. The argument is ‘epistemic’, insofar as the aggregation rule is interpreted as a truth-tracking device (e.g., Grofman, Owen and Feld 1983; List and Goodin 2001). Suppose the aim is to make a judgment on some procedure-independent fact or state of the world, denoted X. In a jury decision, the defendant is either guilty (X = 1) or innocent (X = −1). In an expert-panel decision on the safety of some technology, the technology may be either safe (X = 1) or not (X = −1). Each individual's vote expresses a judgment on that fact or state, and the social decision represents the collective judgment. The goal is to reach a factually correct collective judgment. Which aggregation rule performs best at ‘tracking the truth’ depends on the relationship between the individual votes and the relevant fact or state of the world. Condorcet assumed that each individual is better than random at making a correct judgment (the competence assumption) and that different individuals' judgments are stochastically independent, given the state of the world (the independence assumption). Formally, let V[1], V[2], …, V[n] (capital letters) denote the random variables generating the specific individual votes v[1], v[2], …, v[n] (small letters), and let V = f(V[1], V[2], …, V[n]) denote the resulting random variable representing the social decision v = f(v[1], v[2], …, v[n]) under a given aggregation rule f, such as majority rule. Condorcet's assumptions can be stated as follows: Competence: For each individual i ∈ N and each state of the world x ∈ {−1,1}, Pr(V[i] = x | X = x) = p > 1/2, where p is the same across individuals and states. Independence: The votes of different individuals V[1], V[2], …, V[n] are independent of each other, conditional on each value x ∈ {−1,1} of X. Under these assumptions, majority voting is a good truth-tracker: Theorem (Condorcet's jury theorem): For each state of the world x ∈ {−1,1}, the probability of a correct majority decision, Pr(V = x | X = x), is greater than each individual's probability of a correct vote, Pr(V[i] = x | X = x), and converges to 1, as the number of individuals n increases.^[1] The first conjunct (‘is greater than each individual's probability’) is the non-asymptotic conclusion, the second (‘converges to 1’) the asymptotic conclusion. One can further show that, if the two states of the world have an equal prior probability (i.e., Pr(X = 1) = Pr(X = −1) = 1/2), majority rule is the most reliable of all aggregation rules, maximizing Pr(V = X) (e.g., Ben-Yashar and Nitzan 1997). Although the jury theorem is often invoked to establish the epistemic merits of democracy, its assumptions are highly idealistic. The competence assumption is not a conceptual claim but an empirical one and depends on any given decision problem. Although an average (not necessarily equal) individual competence above 1/2 may be sufficient for Condorcet's conclusion (e.g., Grofman, Owen, and Feld 1983; Boland 1989; Kanazawa 1998),^[2] the theorem ceases to hold if individuals are randomizers (no better and no worse than a coin toss) or if they are worse than random (p < 1/2). In the latter case, the probability of a correct majority decision is less than each individual's probability of a correct vote and converges to 0, as the jury size increases. The theorem's conclusion can also be undermined in less extreme cases (Berend and Paroush 1998), for instance when each individual's reliability, though above 1/2, is an exponentially decreasing function approaching 1/2 with increasing jury size (List 2003a). Similarly, whether the independence assumption is true depends on the decision problem in question. Although Condorcet's conclusion is robust to the presence of some interdependencies between individual votes, the structure of these interdependencies matters (e.g., Boland 1989; Ladha 1992; Estlund 1994; Dietrich and List 2004; Berend and Sapir 2007; Dietrich and Spiekermann 2013). If all individuals' votes are perfectly correlated with one another or mimic a small number of opinion leaders, the collective judgment is no more reliable than the judgment among a small number of independent individuals. Bayesian networks, as employed in Pearl's work on causation (2000), have been used to model the effects of voter dependencies on the jury theorem and to distinguish between stronger and weaker variants of conditional independence (Dietrich and List 2004; Dietrich and Spiekermann 2013). Dietrich (2008) has argued that Condorcet's two assumptions are never simultaneously justified, in the sense that, even when they are both true, one cannot obtain evidence to support both at once. Finally, game-theoretic work challenges an implicit assumption of the jury theorem, namely that voters will always reveal their judgments truthfully. Even if all voters prefer a correct to an incorrect collective judgment, they may still have incentives to misrepresent their individual judgments. This can happen when, conditional on the event of being pivotal for the outcome, a voter expects a higher chance of bringing about a correct collective judgment by voting against his or her own private judgment than in line with it (Austin-Smith and Banks 1996; Feddersen and Pesendorfer Another consequentialist argument for majority rule is utilitarian rather than epistemic. It does not require the existence of an independent fact or state of the world that the collective decision is supposed to track. Suppose each voter gets some utility from the collective decision, which depends on whether the decision matches his or her vote (preference): specifically, each voter gets a utility of 1 from a match between his or her vote and the collective outcome and a utility of 0 from a mismatch.^[3] The Rae-Taylor theorem then states that if each individual has an equal prior probability of preferring each of the two alternatives, majority rule maximizes each individual's expected utility (see, e.g., Mueller 2003). Relatedly, majority rule minimizes the number of frustrated voters (defined as voters on the losing side) and maximizes total utility across voters. Brighouse and Fleurbaey (2010) generalize this result. Define voter i's stake in the decision, d[i], as the utility difference between his or her preferred outcome and his or her dispreferred outcome. The Rae-Taylor theorem rests on an implicit equal-stakes assumption, i.e., d[i] = 1 for every i ∈ N. Brighouse and Fleurbaey show that when stakes are allowed to vary across voters, total utility is maximized not by majority rule, but by a weighted majority rule, where each individual i's voting weight w[i] is proportional to his or her stake d[i]. At the heart of social choice theory is the analysis of preference aggregation, understood as the aggregation of several individuals' preference rankings of two or more social alternatives into a single, collective preference ranking (or choice) over these alternatives. The basic model is as follows. Again, consider a set N = {1, 2, …, n} of individuals (n ≥ 2). Let X = {x, y, z, …} be a set of social alternatives, for example possible worlds, policy platforms, election candidates, or allocations of goods. Each individual i ∈ N has a preference ordering R[i] over these alternatives: a complete and transitive binary relation on X.^[4] For any x, y ∈ X, xR[i] y means that individual i weakly prefers x to y. We write xP[i]y if xR[i]y and not yR[i]x (‘individual i strictly prefers x to y’), and xI[i]y if xR[i]y and yR[i]x (‘individual i is indifferent between x and y’). A combination of preference orderings across the individuals, <R[1], R[2], …, R[n]>, is called a profile. A preference aggregation rule, F, is a function that assigns to each profile <R[1], R[2], …, R[n]> (in some domain of admissible profiles) a social preference relation R = F(R[1], R[2], …, R[n]) on X. When F is clear from the context, we simply write R for the social preference relation corresponding to <R[1], R[2], …, R[n]>. For any x, y ∈ X, xRy means that x is socially weakly preferred to y. We also write xPy if xRy and not yRx (‘x is strictly socially preferred to y’), and xIy if xRy and yRx (‘x and y are socially tied’). For generality, the requirement that R be complete and transitive is not built into the definition of a preference aggregation rule. The paradigmatic example of a preference aggregation rule is pairwise majority voting, as discussed by Condorcet. Here, for any profile <R[1], R[2], …, R[n]> and any x, y ∈ X, xRy if and only if at least as many individuals have xR[i]y as have yR[i]x, formally |{i ∈ N : xR[i]y}| ≥ |{i ∈ N : yR[i]x}|. As we have seen, this does not guarantee transitive social preferences.^[5] How frequent are intransitive majority preferences? It can be shown that the proportion of preference profiles (among all possible ones) that lead to cyclical majority preferences increases with the number of individuals (n) and the number of alternatives (|X|). If all possible preference profiles are equally likely to occur (the so-called ‘impartial culture’ scenario), majority cycles should therefore be probable in large electorates (Gehrlein 1983). (Technical work further distinguishes between ‘top-cycles’ and cycles below a possible Condorcet-winning alternative.) However, the probability of cycles can be significantly lower under certain systematic, even small, deviations from an impartial culture (List and Goodin 2001: Appendix 3; Tsetlin, Regenwetter, and Grofman 2003; Regenwetter et al. 2006). Abstracting from pairwise majority voting, Arrow introduced the following conditions on a preference aggregation rule, F. Universal domain: The domain of F is the set of all logically possible profiles of complete and transitive individual preference orderings. Ordering: For any profile <R[1], R[2], …, R[n]> in the domain of F, the social preference relation R is complete and transitive. Weak Pareto principle: For any profile <R[1], R[2], …, R[n]> in the domain of F, if for all i ∈ N xP[i]y, then xPy. Independence of irrelevant alternatives: For any two profiles <R[1], R[2], …, R[n]> and <R*[1], R*[2], …, R*[n]> in the domain of F and any x, y ∈ X, if for all i ∈ N R[i]'s ranking between x and y coincides with R*[i]'s ranking between x and y, then xRy if and only if xR*y. Non-dictatorship: There does not exist an individual i ∈ N such that, for all <R[1], R[2], …, R[n]> in the domain of F and all x, y ∈ X, xP[i]y implies xPy. Universal domain requires the aggregation rule to cope with any level of ‘pluralism’ in its inputs. Ordering requires it to produce ‘rational’ social preferences, avoiding Condorcet cycles. The weak Pareto principle requires that when all individuals strictly prefer alternative x to alternative y, so does society. Independence of irrelevant alternatives requires that the social preference between any two alternatives x and y depend only on the individual preferences between x and y, not on individuals' preferences over other alternatives. Non-dictatorship requires that there be no ‘dictator’, who always determines the social preference, regardless of other individuals' preferences. (Note that pairwise majority voting satisfies all of these conditions except ordering.) Theorem (Arrow 1951/1963): If |X| > 2, there exists no preference aggregation rule satisfying universal domain, ordering, the weak Pareto principle, independence of irrelevant alternatives, and It is evident that this result carries over to the aggregation of other kinds of orderings, as distinct from preference orderings, such as (i) belief orderings over several hypotheses (ordinal credences), (ii) multiple criteria that a single decision maker may use to generate an all-things-considered ordering of several decision options, and (iii) conflicting value rankings to be Examples of other such aggregation problems to which Arrow's theorem has been applied include: intrapersonal aggregation problems (e.g., May 1954; Hurley 1985), constraint aggregation in optimality theory in linguistics (e.g., Harbour and List 2000), theory choice (e.g., Okasha 2011; cf. Morreau forthcoming), evidence amalgamation (e.g., Stegenga 2013), and the aggregation of multiple similarity orderings into an all-things-considered similarity ordering (e.g., Morreau 2010; Kroedel and Huber 2013). In each case, the plausibility of Arrow's theorem depends on the case-specific plausibility of Arrow's ordinalist framework and the theorem's conditions. Generally, if we consider Arrow's framework appropriate and his conditions indispensable, Arrow's theorem raises a serious challenge. To avoid it, we must relax at least one of the five conditions or give up the restriction of the aggregation rule's inputs to orderings and defend the use of richer inputs, as discussed in Section 4. 3.2.1 Relaxing universal domain One way to avoid Arrow's theorem is to relax universal domain. If the aggregation rule is required to accept as input only preference profiles that satisfy certain ‘cohesion’ conditions, then aggregation rules such as pairwise majority voting will produce complete and transitive social preferences. The best-known cohesion condition is single-peakedness (Black 1948). A profile <R[1], R[2], …, R[n]> is single-peaked if the alternatives can be aligned from ‘left’ to ‘right’ (e.g., on some cognitive or ideological dimension) such that each individual has a most preferred position on that alignment with decreasing preference as alternatives get more distant (in either direction) from the most preferred position. Formally, this requires the existence of a linear ordering Ω on X such that, for every triple of alternatives x, y, z ∈ X, if y lies between x and z with respect to Ω, it is not the case that xR[i]y and zR[i]x (this rules out a ‘cave’ between x and z, at y). Single-peakedness is plausible in some democratic contexts. If the alternatives in X are different tax rates, for example, each individual may have a most preferred tax rate (which will be lower for a libertarian individual than for a socialist) and prefer other tax rates less as they get more distant from the ideal. Black (1948) proved that if the domain of the aggregation rule is restricted to the set of all profiles of individual preference orderings satisfying single-peakedness, majority cycles cannot occur, and the most preferred alternative of the median individual relative to the relevant left-right alignment is a Condorcet winner (assuming n is odd). Pairwise majority voting then satisfies the rest of Arrow's conditions. Other domain-restriction conditions with similar implications include single-cavedness, a geometrical mirror image of single-peakedness (Inada 1964), separability into two groups (ibid.), and latin-squarelessness (Ward 1965), the latter two more complicated combinatorial conditions (for a review, see Gaertner 2001). Sen (1966) showed that all these conditions imply a weaker condition, triple-wise value-restriction. It requires that, for every triple of alternatives x, y, z ∈ X, there exists one alternative in {x, y, z} and one rank r ∈ {1, 2, 3} such that no individual ranks that alternative in r^th place among x, y, and z. For instance, all individuals may agree that y is not bottom (r = 3) among x, y, and z. Triple-wise value-restriction suffices for transitive majority preferences (for a simple proof of Sen's theorem, see Elsholtz and List 2005). There has been much discussion on whether, and under what conditions, real-world preferences fall into such a restricted domain. It has been suggested, for example, that group deliberation can induce single-peaked preferences, by leading participants to focus on a shared cognitive or ideological dimension (Miller 1992; Knight and Johnson 1994; Dryzek and List 2003). Experimental evidence from deliberative opinion polls is consistent with this hypothesis (List, Luskin, Fishkin, and McLean 2013), though further empirical work is needed. 3.2.2 Relaxing ordering Preference aggregation rules are normally expected to produce orderings as their outputs, but sometimes we may only require partial orderings or not fully transitive binary relations. An aggregation rule that produces transitive but often incomplete social preferences is the Pareto dominance procedure: here, for any profile <R[1], R[2], …, R[n]> and any x, y ∈ X, xRy if and only if, for all i ∈ N, xP[i]y. An aggregation rule that produces complete but often intransitive social preferences is the Pareto extension procedure: here, for any profile <R[1], R[2], …, R[n]> and any x, y ∈ X, xRy if and only if it is not the case that, for all i ∈ N, yP[i]x. Both rules have a unanimitarian spirit, giving each individual veto power either against the presence of a weak social preference for x over y or against its absence. Gibbard (1969) proved that even if we replace the requirement of transitivity with what he called quasi-transitivity, the resulting possibilities of aggregation are still very limited. Call a preference relation R quasi-transitive if the induced strict relation P is transitive (while the indifference relation I need not be transitive). Call an aggregation rule oligarchic if there is a subset M ⊆ N (the ‘oligarchs’) such that (i) if, for all i ∈ M, xP[i]y, then xPy, and (ii) if, for some i ∈ M, xP[i]y, then xRy. The Pareto extension procedure is an example of an oligarchic aggregation rule with M = N. In an oligarchy, the oligarchs are jointly decisive and have individual veto power. Gibbard proved the following: Theorem (Gibbard 1969): If |X| > 2, there exists no preference aggregation rule satisfying universal domain, quasi-transitivity and completeness of social preferences, the weak Pareto principle, independence of irrelevant alternatives, and non-oligarchy. 3.2.3 Relaxing the weak Pareto principle The weak Pareto principle is arguably hard to give up. One case in which we may lift it is that of spurious unanimity, where a unanimous preference for x over y is based on mutually inconsistent reasons (e.g., Mongin 1997; Gilboa, Samet, and Schmeidler 2004). Two men may each prefer to fight a duel (alternative x) to not fighting it (alternative y) because each over-estimates his chances of winning. There may exist no mutually agreeable probability assignment over possible outcomes of the duel (i.e., who would win) that would ‘rationalize’ the unanimous preference for x over y. In this case, the unanimous preference is a bad indicator of social preferability. This example, however, depends on the fact that the alternatives of fighting and not fighting are not fully specified outcomes but uncertain prospects. Arguably, the weak Pareto principle is more plausible in cases without uncertainty. An aggregation rule that becomes possible when the weak Pareto principle is dropped is an imposed procedure, where, for any profile <R[1], R[2], …, R[n]>, the social preference relation R is an antecedently fixed (‘imposed’) ordering R[imposed] of the alternatives. Though completely unresponsive to individual preferences, this aggregation rule satisfies the rest of Arrow's conditions. Sen (1970a) offered another critique of the weak Pareto principle, showing that it conflicts with a ‘liberal’ principle. Here we interpret the aggregation rule as a method a social planner can use to rank social alternatives in an order of social welfare. Suppose each individual in society is given some basic rights, to the effect that his or her preference is sometimes socially decisive (i.e., cannot be overridden by others' preferences). Each of Lewd and Prude, for example, should be decisive over whether he himself reads a particular book, Lady Chatterley's Lover. Minimal liberalism: There are at least two distinct individuals i, j ∈ N who are each decisive on at least one pair of alternatives; i.e., there is at least one pair of alternatives x, y ∈ X such that, for every profile <R[1], R[2], …, R[n]>, xP[i]y implies xPy, and yP[i]x implies yPx, and at least one pair of alternatives x*, y* ∈ X such that, for every profile <R[1], R[2], …, R[n]>, x*P[j] y* implies x*Py*, and y*P[j]x* implies y*Px*. Sen asked us to imagine that Lewd most prefers that Prude read the book (alternative x), second-most prefers that he read the book himself (alternative y), and least prefers that neither read the book (z). Prude most prefers that neither read the book (z), second-most prefers that he read the book himself (x), and least prefers that Lewd read the book (y). Assuming Lewd is decisive over the pair y and z, society should prefer y to z. Assuming Prude is decisive over the pair x and z, society should prefer z to x. But since Lewd and Prude both prefer x to y, the weak Pareto principle (applied to N = {Lewd, Prude}) implies that society should prefer x to y. So, we are faced with a social preference cycle. Sen called this problem the ‘liberal paradox’ and generalized it as follows. Theorem (Sen 1970a): There exists no preference aggregation rule satisfying universal domain, acyclicity of social preferences, the weak Pareto principle, and minimal liberalism. The result suggests that if we wish to respect individual rights, we may sometimes have to sacrifice Paretian efficiency. An alternative conclusion is that the weak Pareto principle can be rendered compatible with minimal liberalism only when the domain of admissible preference profiles is suitably restricted, for instance to preferences that are ‘tolerant’ or not ‘meddlesome’ (Blau 1975; Craven 1982; Gigliotti 1986; Sen 1983). Lewd's and Prude's preferences in Sen's example are ‘meddlesome’. Several authors have challenged the relevance of Sen's result, however, by criticizing his formalization of rights (e.g., Gaertner, Pattanaik, and Suzumura 1992; Dowding and van Hees 2003). 3.2.4 Relaxing independence of irrelevant alternatives A common way to obtain possible preference aggregation rules is to give up independence of irrelevant alternatives. Almost all familiar voting methods over three or more alternatives that involve some form of preferential voting (with voters being asked to express full or partial preference orderings) violate this condition. A standard example is plurality rule: here, for any profile <R[1], R[2], …, R[n]> and any x, y ∈ X, xRy if and only if |{i ∈ N : for all z ≠ x, xP[i]z}| ≥ |{i ∈ N : for all z ≠ y, yP[i]z}|. Informally, alternatives are socially ranked in the order of how many individuals most prefer each of them. Plurality rule avoids Condorcet's paradox, but runs into other problems. Most notably, an alternative that is majority-dispreferred to every other alternative may win under plurality rule: if 34% of the voters rank x above y above z, 33% rank y above z above x, and 33% rank z above y above x, plurality rule ranks x above each of y and z, while pairwise majority voting would rank y above z above x (y is the Condorcet winner). By disregarding individuals' lower-ranked alternatives, plurality rule also violates the weak Pareto principle. However, plurality rule may be plausible in ‘restricted informational environments’, where the balloting procedure collects information only about voters' top preferences, not about their full preference rankings. Here plurality rule satisfies generalized variants of May's four conditions introduced above (Goodin and List 2006). A second example of a preference aggregation rule that violates independence of irrelevant alternatives is the Borda count (e.g., Saari 1990). Here, for any profile <R[1], R[2], …, R[n]> and any x, y ∈ X, xRy if and only if Σ[i][∈][N]|{z ∈ X : xR[i]z}| ≥ Σ[i][∈][N]|{z ∈ X : yR[i]z}|. Informally, each voter assigns a score to each alternative, which depends on its rank in his or her preference ranking. The most-preferred alternative gets a score of k (where k = |X|), the second-most-preferred alternative a score of k − 1, the third-most-preferred alternative a score of k − 2, and so on. Alternatives are then socially ordered in terms of the sums of their scores across voters: the alternative with the largest sum-total is top, the alternative with the second-largest sum-total next, and so on. To see how this violates independence of irrelevant alternatives, consider the two profiles of individual preference orderings over four alternatives (x, y, z, w) in Tables 3 and 4. Table 3: A profile of individual preference orderings Individual 1 Individuals 2 to 7 Individuals 8 to 15 1^st preference y x z 2^nd preference x z x 3^rd preference z w y 4^th preference w y w Table 4: A slightly modified profile of individual preference orderings Individual 1 Individuals 2 to 7 Individuals 8 to 15 1^st preference x x z 2^nd preference y z x 3^rd preference w w y 4^th preference z y w In Table 3, the Borda scores of the four alternatives are: • x: 9*3 + 6*4 = 51, • y: 1*4 + 6*1 + 8*2 = 26, • z: 1*2 + 6*3 + 8*4 = 52, • w: 1*1 + 6*2 + 8*1 = 21, leading to a social preference for z over x over y over w. In Table 4 the Borda scores are: • x: 7*4 + 8*3 = 52, • y: 1*3 + 6*1 + 8*2 = 25, • z: 1*1 + 6*3 + 8*4 = 51, • w: 7*2 + 8*1 = 22, leading to a social preference for x over z over y over w. The only difference between the two profiles lies in Individual 1's preference ordering, and even here there is no change in the relative ranking of x and z. Despite identical individual preferences between x and z in Tables 3 and 4, the social preference between x and z is reversed, a violation of independence of irrelevant Such violations are common in real-world voting rules, and they make preference aggregation potentially vulnerable to strategic voting and/or strategic agenda setting. I illustrate this in the case of strategic voting. So far we have discussed preference aggregation rules, which map profiles of individual preference orderings to social preference relations. We now consider social choice rules, whose output, instead, is one or several winning alternatives. Formally, a social choice rule, f, is a function that assigns to each profile <R[1], R[2], …, R[n]> (in some domain of admissible profiles) a social choice set f(R[1], R[2], …, R[n]) ⊆ X. A social choice rule f can be derived from a preference aggregation rule F, by defining f(R[1], R[2], …, R[n]) = {x ∈ X : for all y ∈ X, xRy} where R = F(R[1], R[2], …, R[n]); the reverse does not generally hold. We call the set of sometimes-chosen alternatives the range of f.^[6] The Condorcet winner criterion defines a social choice rule, where, for each profile <R[1], R[2], …, R[n]>, f(R[1], R[2], …, R[n]) contains every alternative in X that wins or at least ties with every other alternative in pairwise majority voting. As shown by Condorcet's paradox, this may produce an empty choice set. By contrast, plurality rule and the Borda count induce social choice rules that always produce non-empty choice sets. They also satisfy the following basic conditions (the last for |X| ≥ 3): Universal Domain: The domain of f is the set of all logically possible profiles of complete and transitive individual preference orderings. Non-dictatorship: There does not exist an individual i ∈ N such that, for all <R[1], R[2], …, R[n]> in the domain of f and all x in the range of f, yR[i]x where y ∈ f(R[1], R[2], …, R[n]).^[7] The range constraint: The range of f contains at least three distinct alternatives (and ideally all alternatives in X). When supplemented with an appropriate tie-breaking criterion, the plurality and Borda rules can further be made ‘resolute’: Resoluteness: The social choice rule f always produces a unique winning alternative (a singleton choice set). (We then write x = f(R[1], R[2], …, R[n]) to denote the winning alternative for the profile <R[1], R[2], …, R[n]>.) Surprisingly, this list of conditions conflicts with the following further requirement. Strategy-proofness: There does not exist a profile <R[1], R[2], …, R[n]> in the domain of f at which f is manipulable by some individual i ∈ N, where manipulability means the following: if i submits a false preference ordering R′[i] (≠ R[i]), the winner is an alternative y′ that i strictly prefers (according to R[i]) to the alternative y that would win if i submitted the true preference ordering R[i].^[8] Theorem (Gibbard 1973; Satterthwaite 1975): There exists no social choice rule satisfying universal domain, non-dictatorship, the range constraint, resoluteness, and strategy-proofness. This result raises important questions about the trade-offs between different requirements on a social choice rule. A dictatorship, which always chooses the dictator's most preferred alternative, is trivially strategy-proof. The dictator obviously has no incentive to vote strategically, and no-one else does so either, since the outcome depends only on the dictator. To see that the Borda count violates strategy-proofness, recall the example of Tables 3 and 4 above. If Individual 1 in Table 3 truthfully submits the preference ordering yP[1]xP[1]zP[1]w, the Borda winner is z, as we have seen. If Individual 1 falsely submits the preference ordering xP[1]yP[1]wP[1]z, as in Table 4, the Borda winner is x. But Individual 1 prefers x to z according to his or her true preference ordering (in Table 3), and so he or she has an incentive to vote strategically. Moulin (1980) has shown that when the domain of the social choice rule is restricted to single-peaked preference profiles, pairwise majority voting and other so-called ‘median voting’ schemes can satisfy the rest of the conditions of the Gibbard-Satterthwaite theorem. Similarly, when collective decisions are restricted to binary choices alone, which amounts to dropping the range constraint, majority voting satisfies the rest of the conditions. Other possible escape routes from the theorem open up if resoluteness is dropped. In the limiting case in which all alternatives are always chosen, the other conditions are vacuously satisfied. The requirement of strategy-proofness has been challenged too. One line of argument is that, even when there exist strategic incentives in the technical sense of the Gibbard-Satterthwaite theorem, individuals will not necessarily act on them. They would require detailed information about others' preferences and enough computational power to figure out what the optimal strategically modified preferences would be. Neither demand is generally met. Bartholdi, Tovey, and Trick (1989) showed that, due to computational complexity, some social choice rules are resistant to strategic manipulation: it may be an NP-hard problem for a voter to determine how to vote strategically. In this vein, Harrison and McDaniel (2008) provide experimental evidence suggesting that the ‘Kemeny rule’, an extension of pairwise majority voting designed to avoid Condorcet cycles, is ‘behaviourally incentive-compatible’: i.e., strategic manipulation is computationally hard. Dowding and van Hees (2008) have argued that not all forms of strategic voting are normatively problematic. They distinguish between ‘sincere’ and ‘insincere’ forms of manipulation and argue that only the latter but not the former are normatively troublesome. Sincere manipulation occurs when a voter (i) votes for a compromise alternative whose chances of winning are thereby increased and (ii) genuinely prefers that compromise alternative to the alternative that would otherwise win. For example, in the 2000 US presidential election, supporters of Ralph Nader (a third-party candidate with little chance of winning) who voted for Al Gore to increase his chances of beating George W. Bush engaged in sincere manipulation in the sense of (i) and (ii). Plurality rule is susceptible to sincere manipulation, but not vulnerable to insincere manipulation. An implicit assumption so far has been that preferences are ordinal and not interpersonally comparable: preference orderings contain no information about each individual's strength of preference or about how to compare different individuals' preferences with one another. Statements such as ‘Individual 1 prefers alternative x more than Individual 2 prefers alternative y’ or ‘Individual l prefers a switch from x to y more than Individual 2 prefers a switch from x* to y*’ are considered meaningless. In voting contexts, this assumption may be plausible, but in welfare-evaluation contexts—when a social planner seeks to rank different social alternatives in an order of social welfare—the use of richer information may be justified. Sen (1970b) generalized Arrow's model to incorporate such richer information. As before, consider a set N = {1, 2, …, n} of individuals (n ≥ 2) and a set X = {x, y, z, …} of social alternatives. Now each individual i ∈ N has a welfare function W[i] over these alternatives, which assigns a real number W[i](x) to each alternative x ∈ X, interpreted as a measure of i's welfare under alternative x. Any welfare function on X induces an ordering on X, but the converse is not true: welfare functions encode more information. A combination of welfare functions across the individuals, <W[1], W[2], …, W[n]>, is called a profile. A social welfare functional (SWFL), also denoted F, is a function that assigns to each profile <W[1], W[2], …, W[n]> (in some domain of admissible profiles) a social preference relation R = F(W[1], W [2], …, W[n]) on X, with the familiar interpretation. Again, when F is clear from the context, we write R for the social preference relation corresponding to <W[1], W[2], …, W[n]>. The output of a SWFL is similar to that of a preference aggregation rule (again, we do not build the completeness or transitivity of R into the definition^[9]), but its input is richer. What we gain from this depends on how much of the enriched informational input we allow ourselves to use in determining society's preferences: technically, it depends on our assumption about measurability and interpersonal comparability of welfare. By assigning real numbers to alternatives, welfare profiles contain a lot of information over and above the profiles of orderings on X they induce. In particular, many different assignments of numbers to alternatives can give rise to the same orderings. But we may not consider all this information meaningful. Some of it could be an artifact of the numerical representation. For example, the difference between the profile <W[1], W[2], …, W[n]> and its scaled-up version <10*W[1], 10*W[2], …, 10*W[n]>, where everything is the same in proportional terms, could be like the difference between length measurements in centimeters and in inches. The two profiles might be seen as alternative representations of the exact same information, just on different scales. To express different assumptions about which information is truly encoded by a profile of welfare functions and which information is not (and should thus be seen, at best, as an artifact of the numerical representation), it is helpful to introduce the notion of meaningful statements. Some examples of statements about individual welfare that are candidates for meaningful statements are the following (List 2003b; see also Bossert and Weymark 1996: Section 5): A level comparison: Individual i's welfare under alternative x is at least as great as individual j's welfare under alternative y, formally W[i](x) ≥ W[j](y). (The comparison is intrapersonal if i = j, and interpersonal if i ≠ j.) A unit comparison: The ratio of [individual i's welfare gain or loss if we switch from alternative y[1] to alternative x[1]] to [individual j's welfare gain or loss if we switch from alternative y[2] to alternative x[2]] is λ, where λ is some real number, formally (x[1] − y[1]) / (x[2] − y[2]) = λ. (Again, the comparison is intrapersonal if i = j, and interpersonal if i ≠ j.) A zero comparison: Individual i's welfare under alternative x is greater than / equal to / less than zero, formally sign(W[i](x)) = λ, where λ ∈ {−1, 0, 1} and sign is a real-valued function that maps strictly negative numbers to −1, zero to 0, and strictly positive numbers to +1. Arrow's view, as noted, is that only intrapersonal level comparisons are meaningful, while all other kinds of comparisons are not. Sen (1970b) formalized various assumptions about measurability and interpersonal comparability of welfare by (i) defining an equivalence relation on welfare profiles that specifies when two profiles count as ‘containing the same information’, and (ii) requiring any profiles in the same equivalence class to generate the same social preference ordering. Of the three kinds of comparison statements introduced above, the meaningful ones are those that are invariant in each equivalence class. Arrow's ordinalist assumption can be expressed as follows: Ordinal measurability with no interpersonal comparability (ONC): Two profiles <W[1], W[2], …, W[n]> and <W*[1], W*[2], …, W*[n]> contain the same information whenever, for each i ∈ N, W*[i] = φ[i ](W[i]), where φ[i] is some positive monotonic transformation, possibly different for different individuals. Thus the individual welfare functions in any profile can be arbitrarily monotonically transformed (‘stretched or squeezed’) without informational loss, thereby ruling out any interpersonal comparisons or even intrapersonal unit comparisons. If welfare is cardinally measurable but still interpersonally non-comparable, we have: Cardinal measurability with no interpersonal comparability (CNC): Two profiles <W[1], W[2], …, W[n]> and <W*[1], W*[2], …, W*[n]> contain the same information whenever, for each i ∈ N, W*[i] = a [i]W[i] + b[i], where the a[i]s and b[i]s are real numbers (with a[i] > 0), possibly different for different individuals. Here, each individual's welfare function is unique up to positive affine transformations (‘scaling and shifting’), but there is still no common scale across individuals. This renders intrapersonal level and unit comparisons meaningful, but rules out interpersonal comparisons and zero comparisons. Interpersonal level comparability is achieved under the following enriched variant of ordinal measurability: Ordinal measurability with interpersonal level comparability (OLC): Two profiles <W[1], W[2], …, W[n]> and <W*[1], W*[2], …, W*[n]> contain the same information whenever, for each i ∈ N, W*[i] = φ(W[i]), where φ is the same positive monotonic transformation for all individuals. Here, a profile of individual welfare functions can be arbitrarily monotonically transformed (‘stretched or squeezed’) without informational loss, but the same transformation must be used for all individuals, thereby rendering interpersonal level comparisons meaningful. Interpersonal unit comparability is achieved under the following enriched variant of cardinal measurability: Cardinal measurability with interpersonal unit comparability (CUC): Two profiles <W[1], W[2], …, W[n]> and <W*[1], W*[2], …, W*[n]> contain the same information whenever, for each i ∈ N, W*[i] = aW[i] + b[i], where a is the same real number for all individuals (a > 0) and the b[i]s are real numbers. Here, the welfare functions in each profile can be re-scaled and shifted without informational loss, but the same scalar multiple (though not necessarily the same shifting constant) must be used for all individuals, thereby rendering interpersonal unit comparisons meaningful. Zero comparisons, finally, become meaningful under the following enriched variant of ordinal measurability (List 2001): Ordinal measurability with zero comparability (ONC+0): Two profiles <W[1], W[2], …, W[n]> and <W*[1], W*[2], …, W*[n]> contain the same information whenever, for each i ∈ N, W*[i] = φ[i](W[i]), where φ[i] is some positive monotonic and zero-preserving transformation, possibly different for different individuals. (Here zero-preserving means that φ[i](0) = 0.) This allows arbitrary stretching and squeezing of individual welfare functions without informational loss, provided the welfare level of zero remains fixed, thereby ensuring zero comparability. Several other measurability and interpersonal comparability assumptions have been discussed in the literature. The following ensures the meaningfulness of interpersonal comparisons of both levels and Cardinal measurability with full interpersonal comparability (CFC): Two profiles <W[1], W[2], …, W[n]> and <W*[1], W*[2], …, W*[n]> contain the same information whenever, for each i ∈ N, W*[i] = aW[i] + b, where a, b are the same real numbers for all individuals (a > 0). Lastly, intra- and interpersonal comparisons of all three kinds (level, unit, and zero) are meaningful if we accept the following: Ratio-scale measurability with full interpersonal comparability (RFC): Two profiles <W[1], W[2], …, W[n]> and <W*[1], W*[2], …, W*[n]> contain the same information whenever, for each i ∈ N, W*[i] = aW[i], where a is the same real number for all individuals (a > 0). Which assumption is warranted depends on how welfare is interpreted. If welfare is hedonic utility, which can be experienced only from a first-person perspective, interpersonal comparisons are harder to justify than if welfare is the objective satisfaction of subjective preferences or desires (the desire-satisfaction view) or an objective good or state (an objective-list view) (e.g., Hausman 1995; List 2003b). The desire-satisfaction view may render interpersonal comparisons empirically meaningful (by relating the interpersonally significant maximal and minimal levels of welfare for each individual to the attainment of his or her most and least preferred alternatives), but at the expense of running into problems of expensive tastes or adaptive preferences (Hausman 1995). Resource-based, functioning-based, or primary-goods-based currencies of welfare, by contrast, may allow empirically meaningful and less morally problematic interpersonal comparisons. Once we introduce interpersonal comparisons of welfare levels or units, or zero comparisons, there exist possible SWFLs satisfying the analogues of Arrow's conditions as well as stronger desiderata. In a welfare-aggregation context, Arrow's impossibility can therefore be traced to a lack of interpersonal comparability. As noted, a SWFL respects a given assumption about measurability and interpersonal comparability if, for any two profiles <W[1], W[2], …, W[n]> and <W*[1], W*[2], …, W*[n]> that are deemed to contain the same information, we have F(W[1], W[2], …, W[n]) = F(W*[1], W*[2], …, W*[n]). Arrow's conditions and theorem can be restated as follows: Universal domain: The domain of F is the set of all logically possible profiles of individual welfare functions. Ordering: For any profile <W[1], W[2], …, W[n]> in the domain of F, the social preference relation R is complete and transitive. Weak Pareto principle: For any profile <W[1], W[2], …, W[n]> in the domain of F, if for all i∈N W[i](x) > W[i](y), then xPy. Independence of irrelevant alternatives: For any two profiles <W[1], W[2], …, W[n]> and <W*[1], W*[2], …, W*[n]> in the domain of F and any x, y ∈ X, if for all i ∈ N W[i](x) = W*[i](x) and W[i]( y) = W*[i](y), then xRy if and only if xR*y. Non-dictatorship: There does not exist an individual i ∈ N such that, for all <W[1], W[2], …, W[n]> in the domain of F and all x, y ∈ X, W[i](x) > W[i](y) implies xPy. Theorem: Under ONC (or CNC, as Sen 1970b has shown), if |X| > 2, there exists no SWFL satisfying universal domain, ordering, the weak Pareto principle, independence of irrelevant alternatives, and non-dictatorship. Crucially, however, each of OLC, CUC, and ONC+0 is sufficient for the existence of SWFLs satisfying all other conditions: Theorem (combining several results from the literature, as illustrated below): Under each of OLC, CUC, and ONC+0, there exist SWFLs satisfying universal domain, ordering, the weak Pareto principle, independence of irrelevant alternatives, and non-dictatorship (as well as stronger conditions). Some examples of such SWFLs come from political philosophy and welfare economics. A possible SWFL under OLC is a version of Rawls's difference principle (1971). Maximin: For any profile <W[1], W[2], …, W[n]> and any x, y ∈ X, xRy if and only if min[i][∈][N](W[i](x)) ≥ min[i][∈][N](W[i](y)). While maximin rank-orders social alternatives in terms of the welfare level of the worst-off individual alone, its lexicographic extension (leximin), which was endorsed by Rawls himself, uses the welfare level of the second-worst-off individual as a tie-breaker when there is tie at the level of the worst off, the welfare level of the third-worst-off individual as a tie-breaker when there is a tie at the second stage, and so on. (Note, however, that Rawls focused on primary goods, rather than welfare, as the relevant ‘currency’.) This satisfies the strong (not just weak) Pareto principle, requiring that if for all i∈N W[i](x) ≥ W[i](y), then xRy, and if in addition for some i ∈ N W[i](x) > W[i](y), then xPy. An example of a possible SWFL under CUC is classical utilitarianism. Utilitarianism: For any profile <W[1], W[2], …, W[n]> and any x, y ∈ X, xRy if and only if W[1](x) + W[2](x) + … + W[n](x) ≥ W[1](y) + W[2](y) + … + W[n](y). Finally, an example of a possible SWFL under ONC+0 is a variant of a frequently used, though rather simplistic poverty measure. A head-count rule: For any profile <W[1], W[2], …, W[n]> and any x, y ∈ X, xRy if and only if |{i ∈ N : W[i](x) < 0}| < |{i ∈ N : W[i](y) < 0}| or [|{i ∈ N : W[i](x) < 0}| = |{i ∈ N : W[i](y) < 0}| and xR[j]y], where j ∈ N is some antecedently fixed tie-breaking individual. While substantively less compelling than maximin or utilitarian rules, head-count rules require only zero-comparability of welfare (List 2001). An important conclusion, therefore, is that Rawls's difference principle, the classical utilitarian principle, and even the head-count method of poverty measurement can all be seen as solutions to Arrow's aggregation problem that become possible once we go beyond Arrow's framework of ordinal, interpersonally non-comparable preferences. Under CFC, one can provide a simultaneous characterization of Rawlsian maximin and utilitarianism (Deschamps and Gevers 1978). It uses two additional axioms. One, minimal equity, requires (in the words of Sen 1977: 1548) ‘that a person who is going to be best off anyway does not always strictly have his way’, and another, separability, requires that two welfare profiles that coincide for some subset M ⊆ N while everyone in N\M is indifferent between all alternatives in X lead to the same social ordering. Theorem (Deschamps and Gevers 1978): Under CFC, any SWFL satisfying universal domain, ordering, the strong Pareto principle, independence of irrelevant alternatives, anonymity (as in May's theorem), minimal equity, and separability is either leximin or of a utilitarian type (meaning that, except possibly when there are ties in sum-total welfare, it coincides with the utilitarian SWFL defined above). Finally, the additional information available under RFC makes ‘prioritarian’ SWFLs possible.^[10] Like utilitarian SWFLs, they rank-order social alternatives on the basis of welfare sums across the individuals in N, but rather than summing up welfare directly, they sum up concavely transformed welfare, giving greater marginal weight to lower levels of welfare. Prioritarianism: For any profile <W[1], W[2], …, W[n]> and any x, y ∈ X, xRy if and only if W[1]^r(x) + W[2]^r(x) + … + W[n] ^r(x) ≥ W[1]^r(y) + W[2]^r(y) + … + W[n] ^r(y), where 0 < r < 1. Prioritarianism requires RFC and not merely CFC because, by design, the prioritarian social ordering for any welfare profile is not invariant under changes in welfare levels (shifting). The present welfare-aggregation framework has been applied to several further areas. It has been generalized to variable-population choice problems, so as to formalize population ethics in the tradition of Parfit (1984). Here, we must rank-order social alternatives (e.g., possible worlds) in which different individuals exist. Let N(x) denote the set of individuals existing under alternative x. For example, the set N(x) could differ from the set N(y), when x and y are distinct alternatives (this generalizes our previous assumption of a fixed set N). The variable-population case raises questions such as whether a world with a smaller number of better-off individuals is better than, equally good as, or worse than a world with a larger number of worse-off individuals. (The focus here is on axiological questions about the relative goodness of such worlds, not normative questions about the rightness or wrongness of bringing them about.) Parfit (1984) and others argued that classical utilitarianism is subject to the repugnant conclusion: a world with a very large number of individuals whose welfare levels are barely above zero could have a larger sum-total of welfare, and therefore count as better, than a world with a smaller number of very well-off individuals. Blackorby, Donaldson, and Bossert (e.g., 2005) have axiomatically characterized different variable-population welfare aggregation methods that avoid the repugnant conclusion and satisfy some other desiderata. One solution is the following: Critical-level utilitarianism: For any profile <W[1], W[2], …, W[n]> and any x, y ∈ X, xRy if and only if Σ[i∈N(x)][W[i](x) − c] ≥ Σ[i∈N(y)][W[i](y) − c], where c ≥ 0 is some ‘critical level’ of welfare above which the quality of life counts as ‘decent/good’. Critical-level utilitarianism avoids the repugnant conclusion when the parameter c is set sufficiently large. It requires stronger measurability of welfare than classical utilitarianism, since it generates a social ordering R that is not generally invariant under re-scaling of welfare units or shifts in welfare levels. Even the rich framework of RFC would force the critical level c to be zero, thereby collapsing critical-level utilitarianism into classical utilitarianism and making it vulnerable to the repugnant conclusion again. As Blackorby, Bossert, and Donaldson (1999: 420) note, [s]ome information environments that are ethically adequate in fixed-population settings have ethically unattractive consequences in variable-population environments. Thus, in the variable-population case, a more significant departure from the limited informational framework of Arrow's original model is needed to avoid impossibility results. The SWFL approach has been generalized to the case in which each individual has multiple welfare functions (e.g., a k-tuple of them), capturing (i) multiple opinions about each individual's welfare (e.g., Roberts 1995; Ooghe and Lauwers 2005) or (ii) multiple dimensions of welfare (e.g., List 2004a). In this case, we are faced not only with issues of measurability and interpersonal comparability, but also with issues of inter-opinion or inter-dimensional comparability. To obtain compelling possibility results, comparability across both individuals and dimensions/opinions is needed. A related literature addresses multidimensional inequality measurement (for an introductory review, see Weymark 2006). Finally, in the philosophy of biology, the one-dimensional and multi-dimensional SWFL frameworks have been used (by Okasha 2009 and Bossert, Qi, and Weymark 2013) to analyse the notion of group fitness, defined as a function of individual fitness indicators. A more recent branch of social choice theory is the theory of judgment aggregation. It can be motivated by observing that votes, orderings, or welfare functions over multiple alternatives are not the only objects we may wish to aggregate from an individual to a collective level. Many decision-making bodies, such as legislatures, collegial courts, expert panels, and other committees, are faced with more complex ‘aggreganda’. In particular, they may have to aggregate individual sets of judgments on multiple, logically connected propositions into collective sets of judgments. A court may have to judge whether a defendant is liable for breach of contract on the basis of whether there was a valid contract in place and whether there was a breach. An expert panel may have to judge whether atmospheric greenhouse-gas concentrations will exceed a particular threshold by 2050, whether there is a causal chain from greater greenhouse-gas concentrations to temperature increases, and whether the temperature will increase. Legislators may have to judge whether a particular end is socially desirable, whether a proposed policy is the best means for achieving that end, and whether to pursue that policy. These problems cannot be formalized in standard preference-aggregation models, since the aggreganda are not orderings but sets of judgments on multiple propositions. The theory of judgment aggregation represents these aggreganda in propositional logic (or another suitable logic). The field was inspired by the ‘doctrinal paradox’ in jurisprudence, with which we begin. Kornhauser and Sager (1986) described the following problem. (A structurally similar problem was discovered by Vacca 1921 and, as Elster 2013 points out, by Poisson 1837.) A three-judge court has to make judgments on the following propositions: • p: The defendant was contractually obliged not to do action X. • q: The defendant did action X. • r: The defendant is liable for breach of contract. According to legal doctrine, the premises p and q are jointly necessary and sufficient for the conclusion r. Suppose the individual judges hold the views shown in Table 5. Table 5: An example of the ‘doctrinal paradox’ p (obligation) q (action) r (liability) Judge 1 True True True Judge 2 False True False Judge 3 True False False Majority True True False Although each individual judge respects the relevant legal doctrine, there is a majority for p, a majority for q, and yet a majority against r—in breach of legal doctrine. The court faces a dilemma: it can either go with the majority judgments on the premises (p and q) and reach a ‘liable’ verdict by logical inference (the issue-by-issue or premise-based approach); or go with the majority judgment on the conclusion (r) and reach a ‘not liable’ verdict, ignoring the majority judgments on the premises (the case-by-case or conclusion-based approach). Kornhauser and Sager's ‘doctrinal paradox’ consists in the fact that these two approaches may lead to opposite outcomes. We can learn another lesson from this example. Relative to the legal doctrine, the majority judgments are logically inconsistent. Formally expressed, the set of majority-accepted propositions, {p, q, not r}, is inconsistent relative to the constraint r if and only if (p and q). This observation was the starting point of the more recent, formal-logic-based literature on judgment aggregation (beginning with a model and impossibility result in List and Pettit 2002). The possibility of inconsistent majority judgments is not tied to the presence of a legal doctrine or other explicit side constraint (as pointed out by Pettit 2001, who called this phenomenon the ‘discursive dilemma’). Suppose, for example, an expert panel has to make judgments on three propositions (and their negations): • p: Atmospheric CO[2] will exceed 600ppm by 2050. • if p then q: If atmospheric CO[2] exceeds this level by 2050, there will be a temperature increase of more than 3.5° by 2010. • q: There will be a temperature increase of more than 3.5° by 2010. If individual judgments are as shown in Table 6, the majority judgments are inconsistent: despite individually consistent judgments, the set of majority-accepted propositions, {p, if p then q, not q }, is logically inconsistent. Table 6: A majoritarian inconsistency p if p then q q Expert 1 True True True Expert 2 False True False Expert 3 True False False Majority True True False The patterns of judgments in Tables 5 and 6 are structurally equivalent to the pattern of preferences leading to Condorcet's paradox, when we reinterpret those preferences as judgments on propositions of the form ‘x is preferable to y’, ‘y is preferable to z’, and so on, as shown in Table 7 (List and Pettit 2004; an earlier interpretation of preferences along these lines can be found in Guilbaud [1952] 1966). Here, the set of majority-accepted propositions is inconsistent relative to the constraint of transitivity. Table 7: Condorcet's paradox, propositionally reinterpreted ‘x is preferable to y’ ‘y is preferable to z’ ‘x is preferable to z’ Individual 1 True True True (prefers x to y to z) Individual 2 False True False (prefers y to z to x) Individual 3 True False False (prefers z to x to y) Majority True True False (prefers x to y to z to x, a ‘cycle’) A general combinatorial result subsumes all these phenomena. Call a set of propositions minimally inconsistent if it is a logically inconsistent set, but all its proper subsets are consistent. Proposition (Dietrich and List 2007a; Nehring and Puppe 2007): Propositionwise majority voting may generate inconsistent collective judgments if and only if the set of propositions (and their negations) on which judgments are to be made has a minimally inconsistent subset of three or more propositions. In the examples of Tables 6, 5, and 7, the relevant minimally inconsistent sets of size (at least) three are: {p, if p then q, not q}, which is minimally inconsistent simpliciter; {p, q, not r}, which is minimally inconsistent relative to the side constraint r if and only if (p and q); and {‘x is preferable to y’, ‘y is preferable to z’, ‘z is preferable to x’}, which is minimally inconsistent relative to a transitivity constraint on preferability. The basic model of judgment aggregation can be defined as follows (List and Pettit 2002). Let N = {1, 2, …, n} be a set of individuals (n ≥ 2). The propositions on which judgments are to be made are represented by sentences from propositional logic (or some other, expressively richer logic, such as a predicate, modal, or conditional logic; see Dietrich 2007). We define the agenda, X, as a finite set of propositions, closed under single negation.^[11] For example, X could be {p, ¬p, p→q, ¬(p→q), q, ¬q}, as in the expert-panel case. Each individual i ∈ N has a judgment set J[i], defined as a subset J[i] ⊆ X and interpreted as the set of propositions that individual i accepts. A judgment set is consistent if it is a logically consistent set of propositions^[12] and complete (relative to X) if it contains a member of every proposition-negation pair p, ¬p ∈ X. A combination of judgment sets across the individuals, <J[1], J[2], …, J[n]>, is called a profile. A judgment aggregation rule, F, is a function that assigns to each profile <J[1], J[2], …, J[n]> (in some domain of admissible profiles) a collective judgment set J = F(J[1], J[2], …, J[n]) ⊆ X, interpreted as the set of propositions accepted by the group as a whole. As before, when F is clear from the context, we write J for the collective judgment set corresponding to <J[1], J[2], …, J[n]>. Again, for generality, we build no rationality requirement on J (such as consistency or completeness) into the definition of a judgment aggregation rule. The simplest example of a judgment aggregation rule is propositionwise majority voting. Here, for any profile <J[1], J[2], …, J[n]>, J = {p ∈ X : |{i ∈ N : p ∈ J[i]}| > n/2}. As we have seen, this may produce inconsistent collective judgments. Consider the following conditions on an aggregation rule: Universal domain: The domain of F is the set of all logically possible profiles of consistent and complete individual judgment sets. Collective rationality: For any profile <J[1], J[2], …, J[n]> in the domain of F, the collective judgment set J is consistent and complete. Anonymity: For any two profiles <J[1], J[2], …, J[n]> and <J*1, J*2, …, J*n> that are permutations of each other, F(J[1], J[2], …, J[n]) = F(J*1, J*1, …, J*n). Systematicity: For any two profiles <J[1], J[2], …, J[n]> and <J*1, J*2, …, J*n> in the domain of F and any p, q ∈ X, if for all i ∈ N p ∈ J[i] if and only if q ∈ J*i, then p ∈ J if and only if q ∈ J^*. The first three conditions are analogous to universal domain, ordering, and anonymity in preference aggregation. The last is the counterpart of independence of irrelevant alternatives, though stronger: it requires that (i) the collective judgment on any proposition p ∈ X (of which a binary ranking proposition such as ‘x is preferable to y’ is a special case) depend only on individual propositions on p (the independence part), and (ii) the pattern of dependence between individual and collective judgments be the same across all propositions in X (the neutrality part). Formally, independence is the special case with quantification restricted to p = q. Propositionwise majority voting satisfies all these conditions, except the consistency part of collective rationality. Theorem (List and Pettit 2002): If {p, q, p∧q} ⊆ X (where p and q are mutually independent propositions and ‘∧’ can also be replaced by ‘∨’ or ‘→’), there exists no judgment aggregation rule satisfying universal domain, collective rationality, anonymity, and systematicity. Like other impossibility theorems, this result is best interpreted as describing the trade-offs between different conditions on an aggregation rule. The result has been generalized and strengthened in various ways, beginning with Pauly and van Hees's (2006) proof that the impossibility persists if anonymity is weakened to non-dictatorship (for other generalizations, see Dietrich 2006 and Mongin As we have seen, in preference aggregation, the ‘boundary’ between possibility and impossibility results is easy to draw: when there are only two decision alternatives, all of the desiderata on a preference aggregation rule reviewed above can be satisfied (and majority rule does the job); when there are three or more alternatives, there are impossibility results. In judgment aggregation, by contrast, the picture is more complicated. What matters is not the number of propositions in X but the nature of the logical interconnections between them. Impossibility results in judgment aggregation have the following generic form: for a given class of agendas, the aggregation rules satisfying a particular set of conditions (usually, a domain condition, a rationality condition, and some responsiveness conditions) are non-existent or degenerate (e.g., dictatorial). Different kinds of agendas trigger different instances of this scheme, with stronger or weaker conditions imposed on the aggregation rule depending on the properties of those agendas (for a more detailed review, see List 2012). The significance of combinatorial properties of the agenda was first discovered by Nehring and Puppe (2002) in a mathematically related but interpretationally distinct framework (strategy-proof social choice over so-called property spaces). Three kinds of agenda stand out: A non-simple agenda: X has a minimally inconsistent subset of three or more propositions. A pair-negatable agenda: X has a minimally inconsistent subset Y that can be rendered consistent by negating a pair of propositions in it. (Equivalently, X is not isomorphic to a set of propositions whose only connectives are ¬ and ↔; see Dokow and Holzman 2010a.) A path-connected agenda (or totally blocked, in Nehring and Puppe 2002): For any p, q ∈ X, there is a sequence p[1], p[2], …, p[k] ∈ X with p[1] = p and p[k] = q such that p[1] conditionally entails p[2], p[2] conditionally entails p[3], …, and p[k][−1] conditionally entails p[k]. (Here, p[i] conditionally entails p[j] if p[i] ∪ Y entails p[j] for some Y ⊆ X consistent with each of p [i] and ¬p[j].) Some agendas have two or more of these properties. The agendas in our ‘doctrinal paradox’ and ‘discursive dilemma’ examples are both non-simple and pair-negatable. The preference agenda, X = {‘x is preferable to y’, ‘y is preferable to x’, ‘x is preferable to z’, ‘z is preferable to x’, …}, is non-simple, pair-negatable, and path-connected (assuming preferability is transitive and complete). The following result holds: Theorem (Dietrich and List 2007b; Dokow and Holzman 2010a; building on Nehring and Puppe 2002): If X is non-simple, pair-negatable, and path-connected, there exists no judgment aggregation rule satisfying universal domain, collective rationality, independence, unanimity preservation (requiring that, for any unanimous profile <J, J, …, J>, F(J, J, …, J) = J), and non-dictatorship.^[13] Applied to the preference agenda, this result yields Arrow's theorem (for strict preference orderings) as a corollary (predecessors of this result can be found in List and Pettit 2004 and Nehring 2003).^[14] Thus Arrovian preference aggregation can be reinterpreted as a special case of judgment aggregation. The literature contains several variants of this theorem. One variant drops the agenda property of path-connectedness and strengthens independence to systematicity. A second variant drops the agenda property of pair-negatability and imposes a monotonicity condition on the aggregation rule (requiring that additional support never hurt an accepted proposition) (Nehring and Puppe 2010; the latter result was first proved in the above-mentioned mathematically related framework by Nehring and Puppe 2002). A final variant drops both path-connectedness and pair-negatability while imposing both systematicity and monotonicity (ibid.). In each case, the agenda properties are not only sufficient but also (if n ≥ 3) necessary for the result (Nehring and Puppe 2002, 2010; Dokow and Holzman 2010a). Note also that path-connectedness implies non-simplicity. Therefore, non-simplicity need not be listed among the theorem's conditions, though it is needed in the variants dropping path-connectedness. 5.5.1 Relaxing universal domain As in preference aggregation, one way to avoid the present impossibility results is to relax universal domain. If the domain of admissible profiles of individual judgment sets is restricted to those satisfying specific ‘cohesion’ conditions, propositionwise majority voting produces consistent collective judgments. The simplest cohesion condition is unidimensional alignment (List 2003c). A profile <J[1], J[2], …, J[n]> is unidimensionally aligned if the individuals in N can be ordered from left to right (e.g., on some cognitive or ideological dimension) such that, for every proposition p ∈ X, the individuals accepting p (i.e., those with p ∈ J[i]) are either all to the left, or all to the right, of those rejecting p (i.e., those with p ∉ J[i]), as illustrated in Table 8. For any such profile, the majority judgments are consistent: the judgment set of the median individual relative to the left-right ordering will prevail (where n is odd). This judgment set will inherit its consistency from the median individual, assuming individual judgments are consistent. By implication, on unidimensionally aligned domains, propositionwise majority voting will satisfy the rest of the conditions on judgment aggregation rules reviewed above. Table 8: Unidimensional alignment Individual 1 Individual 2 Individual 3 Individual 4 Individual 5 p True True False False False q True True True True False r False False False True True p ∧ q ∧ r False False False False False In analogy with the case of single-peakedness in preference aggregation, several less restrictive conditions already suffice for consistent majority judgments. One such condition (introduced in Dietrich and List 2010a, where a survey is provided) generalizes Sen's triple-wise value-restriction. A profile <J[1], J[2], …, J[n]> is value-restricted if every minimally inconsistent subset Y ⊆ X has a pair of elements p, q such that no individual i ∈ N has {p, q} ⊆ J[i]. Value-restriction prevents any minimally inconsistent subset of X from becoming majority-accepted, and hence ensures consistent majority judgments. Applied to the preference agenda, value-restriction reduces to Sen's equally named condition. 5.5.2 Relaxing collective rationality While the requirement that collective judgments be consistent is widely accepted, the requirement that collective judgments be complete (in X) is more contentious. In support of completeness, one might say that a given proposition would not be included in X unless it is supposed to be collectively adjudicated. Against completeness, one might say that there are circumstances in which the level of disagreement on a particular proposition (or set of propositions) is so great that forming a collective view on it is undesirable or counterproductive. Several papers offer possibility or impossibility results on completeness relaxations (e.g., List and Pettit 2002; Gärdenfors 2006; Dietrich and List 2007a, 2008; Dokow and Holzman 2010b). Judgment aggregation rules violating collective completeness while satisfying (all or most of) the other conditions introduced above include: unanimity rule, where, for any profile <J[1], J[2], …, J [n]>, J = {p ∈ X : p ∈ J[i] for all i ∈ N}; supermajority rules, where, for any profile <J[1], J[2], …, J[n]>, J = {p ∈ X : |{i ∈ N : p ∈ J[i]}| > qn} for a suitable acceptance quota q ∈ (0.5,1); and conclusion-based rules, where a subset Y ⊆ X of logically independent propositions (and their negations) is designated as a set of conclusions and J = {p ∈ Y : |{i ∈ N : p ∈ J[i]}| > n/2}. In the multi-member court example of Table 5, the set of conclusions is simply Y = {r, ¬r}. Given consistent individual judgment sets, unanimity rule guarantees consistent collective judgment sets, because the intersection of several consistent sets of propositions is always consistent. Supermajority rules guarantee consistent collective judgment sets too, provided the quota q is chosen to be at least (k−1)/k, where k is the size of the largest minimally inconsistent subset of X. The reason is combinatorial: any k distinct supermajorities of the relevant size will always have at least one individual in common. So, for any minimally inconsistent set of propositions (which is at most of size k) to be majority-accepted, at least one individual would have to accept all the propositions in the set, contradicting this individual's consistency (Dietrich and List 2007a; List and Pettit 2002). Conclusion-based rules, finally, produce consistent collective judgment sets by construction, but always leave non-conclusions undecided. Gärdenfors (2006) and more generally Dietrich and List (2008) and Dokow and Holzman (2010b) have shown that if—while relaxing completeness—we require collective judgment sets to be deductively closed (i.e., for any p ∈ X entailed by J, it must be that p ∈ J), we face an impossibility result again. For the same agendas that lead to the impossibility result reviewed in Section 5.4, there exists no judgment aggregation rule satisfying universal domain, collective consistency and deductive closure, independence, unanimity preservation, and non-oligarchy. An aggregation rule is called oligarchic if there is an antecedently fixed subset M ⊆ N (the ‘oligarchs’) such that, for any profile <J[1], J[2], …, J[n]>, J = {p ∈ X : p ∈ J[i] for all i ∈ M}. Unanimity rule and dictatorships are special cases with M = N and M = {i} for some i ∈ N, respectively. The downside of oligarchic aggregation rules is that they either lapse into dictatorship or lead to stalemate, with the slightest disagreements between oligarchs resulting in indecision (since every oligarch has veto power on every proposition). 5.5.3 Relaxing systematicity/independence A variety of judgment aggregation rules become possible when we relax systematicity/independence. Recall that systematicity combines an independence and a neutrality requirement. Relaxing only neutrality does not get us very far, since for many agendas there are impossibility results with independence alone, as illustrated in Section 5.4. One much-discussed class of aggregation rules violating independence is given by the premise-based rules. Here, a subset Y ⊆ X of logically independent propositions (and their negations) is designated as a set of premises, as in the court example. For any profile <J[1], J[2], …, J[n]>, J = {p ∈ X : J[Y] entails p} where J[Y] are the majority-accepted propositions among the premises, formally {p ∈ Y : |{i ∈ N : p ∈ J[i]}| > n/2}. Informally, majority votes are taken on the premises, and the collective judgments on all other propositions are determined by logical implication. If the premises constitute a logical basis for the entire agenda, a premise-based rule guarantees consistent and (absent ties) complete collective judgment sets. (The present definition follows List and Pettit 2002; for generalizations, see Dietrich and Mongin 2010. The procedural and epistemic properties of premise-based rules are discussed in Pettit 2001; Chapman 2002; Bovens and Rabinowicz 2006; Dietrich 2006; List 2006.) A generalization is given by the sequential priority rules (List 2004b; Dietrich and List 2007a). Here, for each profile <J[1], J[2], …, J[n]>, the propositions in X are collectively adjudicated in a fixed order of priority, for instance, a temporal or epistemic one. The collective judgment on each proposition p ∈ X is made as follows. If the majority judgment on p is consistent with collective judgments on prior propositions, this majority judgment prevails; otherwise the collective judgment on p is determined by the implications of prior judgments. By construction, this guarantees consistent and (absent ties) complete collective judgments. However, it is path-dependent: the order in which propositions are considered may affect the outcome, specifically when the underlying majority judgments are inconsistent. For example, when this aggregation rule is applied to the profiles in Tables 5, 6, and 7 (but not 8), the collective judgments depend on the order in which the propositions are considered. Thus sequential priority rules are vulnerable to agenda manipulation. Similar phenomena occur in sequential pairwise majority voting in preference aggregation (e.g., Riker 1982). Another prominent class of aggregation rules violating independence is given by the distance-based rules (Pigozzi 2006, building on Konieczny and Pino Pérez 2002; see also Miller and Osherson 2009). A distance-based rule is defined in terms of some distance metric between judgment sets, for example the Hamming distance, where, for any two judgment sets J, J′ ⊆ X, d(J, J′) = |{p ∈ X : not [p ∈ J ⇔ p ∈ J′]}|. Each profile <J[1], J[2], …, J[n]> is mapped to a consistent and complete judgment set J that minimizes the sum-total distance from each of the J[i]s. Distance-based rules can be interpreted as capturing the idea of identifying compromise judgments. Unlike premise-based or sequential priority rules, they do not require a distinction between premises and conclusions or any other order of priority among the propositions. As in preference aggregation, the cost of relaxing independence is the loss of strategy-proofness. The conjunction of independence and monotonicity is necessary and sufficient for the non-manipulability of a judgment aggregation rule by strategic voting (Dietrich and List 2007c; for related results, see Nehring and Puppe 2002). Thus we cannot generally achieve strategy-proofness without relaxing either universal domain, or collective rationality, or unanimity preservation, or non-dictatorship. In practice, we must therefore look for ways of rendering opportunities for strategic manipulation less of a threat. As should be evident, social choice theory is a vast field. Areas not covered in this entry, or mentioned only in passing, include: theories of fair division (how to divide one or several divisible or indivisible goods, such as cakes or houses, between several claimants; e.g., Brams and Taylor 1996 and Moulin 2004); behavioural social choice theory (analyzing empirical evidence of voting behaviour under various aggregation rules; e.g., Regenwetter et al. 2006; List, Luskin, Fishkin, and McLean 2013); empirical social choice theory (analysing surveys and experiments on people's intuitions about distributive justice; e.g., Gaertner and Schokkaert 2012); computational social choice theory (analysing computational properties of aggregation rules, including their computational complexity; e.g., Bartholdi, Tovey, and Trick 1989; Brandt, Conitzer, and Endriss 2013); theories of probability aggregation (studying the aggregation of probability or credence functions; e.g., Lehrer and Wagner 1981; McConway 1981; Genest and Zidek 1986; Mongin 1995; Dietrich and List 2007d); theories of general attitude aggregation (generalizing two-valued judgment aggregation, probability/credence aggregation, and preference aggregation; e.g., Dietrich and List 2010b; Dokow and Holzman 2010c); the study of collective decision-making in non-human animals (studying group decisions in a variety of animal species from social insects to primates, as surveyed in Conradt and List 2009 and the special issue it introduces); and applications to social epistemology (the analysis of group doxastic states and their relationship to individual doxastic states; e.g., Goldman 2004, 2010). • Arrow, K., 1951/1963, Social Choice and Individual Values. New York: Wiley. • Austen-Smith, D. and J. S. Banks, 1996, “Information Aggregation, Rationality, and the Condorcet Jury Theorem.” American Political Science Review, 90: 34–45. • Bartholdi, J. J., C. A. Tovey, and M. A. Trick, 1989, “The computational difficulty of manipulating an election.” Social Choice and Welfare, 6: 227–241. • Ben-Yashar, R. and S. Nitzan, 1997, “The optimal decision rule for fixed-size committees in dichotomous choice situations: the general result.” International Economic Review, 38: 175–186. • Berend, D. and J. Paroush, 1998, “When is Condorcet's Jury Theorem valid?” Social Choice and Welfare, 15: 481–488. • Berend, D. and L. Sapir., 2007, “Monotonicity in Condorcet's Jury Theorem with dependent voters.” Social Choice and Welfare, 28: 507–528. • Black, D., 1948, “On the Rationale of Group Decision-Making.” Journal of Political Economy, 56: 23–34. • Blackorby, C., W. Bossert, and D. Donaldson, 1999, “Information Invariance in Variable-Population Social-Choice Problems.” International Economic Review, 40: 403–422. • –––, 2005, Population Issues in Social Choice Theory, Welfare Economics, and Ethics,. Cambridge: Cambridge University Press. • Blau, J. H., 1975, “Liberal Values and Independence.” Review of Economic Studies, 42: 395–401. • Boland, P. J., 1989, “Majority systems and the Condorcet jury theorem.” Statistician, 38: 181–189. • Bossert, W. and J. A. Weymark, 1996, “Utility in social choice.” Handbook of Utility Theory, Volume 2. S. Barberà, P. J. Hammond and C. Seidel, (eds.). Boston: Kluwer. • Bossert, W., C. X. Qi, and J. A. Weymark, 2013, “Extensive social choice and the measurement of group fitness in biological hierarchies.” Biology and Philosophy, 28: 75–98. • Bovens, L. and W. Rabinowicz, 2006, “Democratic Answers to Complex Questions: An Epistemic Perspective.” Synthese, 150: 131–153. • Brams, S. J. and A. D. Taylor, 1996, Fair Division: From Cake-Cutting to Dispute Resolution. Cambridge: Cambridge University Press. • Brandt, F., V. Conitzer, and U. Endriss, 2013, “Computational Social Choice.” Multiagent Systems. G. Weiss (ed.). Cambridge, MA: MIT Press, pp. 213–283. • Brighouse, H. and M. Fleurbaey, 2010, “Democracy and Proportionality.” Journal of Political Philosophy, 18: 137–155. • Chapman, B., 2002, “Rational Aggregation.” Politics, Philosophy and Economics, 1: 337–354. • Condorcet, Nicolas de, 1785, Essay sur l'Application de l'Analyse à la Probabilité des Décisions Rendue à la Pluralité des Voix. Paris. • Conradt, L. and C. List, 2009, “Group decisions in humans and animals: a survey.” Philosophical Transactions of the Royal Society, B 364: 719–742. • Craven, J., 1982, “Liberalism and Individual Preferences.” Theory and Decision, 14: 351–360. • Deschamps, R. and L. Gevers, 1978, “Leximin and utilitarian rules: A joint characterization.” Journal of Economic Theory, 17: 143–163. • Dietrich, F., 2006, “Judgment Aggregation: (Im)Possibility Theorems.” Journal of Economic Theory, 126: 286–298. • –––, 2007, “A generalised model of judgment aggregation.” Social Choice and Welfare, 28: 529–565. • –––, 2008, “The premises of Condorcet's jury theorem are not simultaneously justified.” Episteme, 5: 56–73. • Dietrich, F. and C. List, 2004, “A Model of Jury Decisions Where All Jurors Have the Same Evidence.” Synthese, 142: 175–202. • –––, 2007a, “Judgment aggregation by quota rules: majority voting generalized.” Journal of Theoretical Politics, 19: 391–424. • –––, 2007b, “Arrow's theorem in judgment aggregation.” Social Choice and Welfare, 29: 19–33. • –––, 2007c, “Strategy-proof judgment aggregation.” Economics and Philosophy, 23: 269–300. • –––, 2007d, “Opinion pooling on general agendas.” [Dietrich and List 2007d available online (pdf)] • –––, 2008, “Judgment aggregation without full rationality.” Social Choice and Welfare, 31: 15–39. • –––, 2010a, “Majority voting on restricted domains.” Journal of Economic Theory, 145: 512–543. • –––, 2010b, “The aggregation of propositional attitudes: towards a general theory.” Oxford Studies in Epistemology, 3: 215–234. • Dietrich, F. and P. Mongin, 2010, “The premise-based approach to judgment aggregation.” Journal of Economic Theory, 145: 562–582. • Dietrich, F. and K. Spiekermann, 2013, “Epistemic Democracy with Defensible Premises.” Economics and Philosophy, 29(1): 87–120. • Dokow, E. and R. Holzman, 2010a, “Aggregation of binary evaluations.” Journal of Economic Theory, 145: 495–511. • –––, 2010b, “Aggregation of binary evaluations with abstentions.” Journal of Economic Theory, 145: 544–561. • –––, 2010c, “Aggregation of non-binary evaluations.” Advances in Applied Mathematics, 45: 487–504. • Dowding, K. and M. van Hees, 2003, “The Construction of Rights.” American Political Science Review, 97: 281–293. • –––, 2008, “In Praise of Manipulation.” British Journal of Political Science, 38: 1–15. • Dryzek, J. and C. List, 2003, “Social Choice Theory and Deliberative Democracy: A Reconciliation.” British Journal of Political Science, 33: 1–28. • Elsholtz, C. and C. List, 2005, “A Simple Proof of Sen's Possibility Theorem on Majority Decisions.” Elemente der Mathematik, 60: 45–56. • Elster, J., 2013, “Excessive Ambitions (II).” Capitalism and Society, 8, Issue 1, Article 1. • Estlund, D., 1994, “Opinion Leaders, Independence, and Condorcet's Jury Theorem.” Theory and Decision, 36: 131–162. • Feddersen, T. J. and W. Pesendorfer, 1998, “Convicting the Innocent.” American Political Science Review, 92: 23–35. • Gaertner, W., 2001, Domain Conditions in Social Choice Theory. Cambridge: Cambridge University Press. • –––, 2005, “De jure naturae et gentium: Samuel von Pufendorf's contribution to social choice theory and economics.” Social Choice and Welfare, 25: 231–241. • Gaertner, W., P. K. Pattanaik, and K. Suzumura, 1992, “Individual Rights Revisited.” Economica, 59: 161–177. • Gaertner, W. and E. Schokkaert, 2012, Empirical Social Choice: Questionnaire-Experimental Studies on Distributive Justice. Cambridge: Cambridge University Press. • Gärdenfors, P., 2006, “An Arrow-like theorem for voting with logical consequences.” Economics and Philosophy, 22: 181–190. • Gehrlein, W. V., 1983, “Condorcet's Paradox.” Theory and Decision, 15: 161–197. • Genest, C. and J. V. Zidek, 1986, “Combining Probability Distributions: A Critique and Annotated Bibliography.” Statistical Science, 1: 113–135. • Gibbard, A., 1969, “Social Choice and the Arrow Conditions.” Unpublished manuscript. [Gibbard 1969 available online (pdf)] • –––, 1973, “Manipulation of voting schemes: a general result.” Econometrica, 41: 587–601. • Gigliotti, G. A., 1986, “Comment on Craven.” Theory and Decision, 21: 89–95. • Gilboa, I., D. Samet, and D. Schmeidler, 2004, “Utilitarian Aggregation of Beliefs and Tastes.” Journal of Political Economy, 112: 932–938. • Goldman, A., 2004, “Group Knowledge versus Group Rationality: Two Approaches to Social Epistemology.” Episteme, A Journal of Social Epistemology, 1: 11–22. • –––, 2010, “Why Social Epistemology Is Real Epistemology.” Social Epistemology. A. Haddock, A. Millar, and D. Pritchard (eds.), Oxford: Oxford University Press. • Goodin, R. E. and C. List, 2006, “A Conditional Defense of Plurality Rule: Generalizing May's Theorem in a Restricted Informational Environment.” American Journal of Political Science, 50: • Grofman, B., G. Owen, and S. L. Feld, 1983, “Thirteen theorems in search of the truth.” Theory and Decision, 15: 261–278. • Guilbaud, G. T., [1952] 1966, “Theories of the General Interest, and the Logical Problem of Aggregation.” Readings in Mathematical Social Science. P. F. Lazarsfeld and N. W. Henry (eds.). Cambridge/MA: MIT Press, pp. 262–307. • Harbour, D. and C. List, 2000, “Optimality Theory and the problem of constraint aggregation.” MIT Working Papers in Linguistics and Philosophy 1: The Linguistics/Philosophy Interface. R. Bhatt, P. Hawley, M. Hackl and I. Maitra (eds.). Cambridge, MA: MITWIPL, pp. 175–213. • Harrison, G. W. and T. McDaniel, 2008, “Voting Games and Computational Complexity.” Oxford Economic Papers, 60: 546–565. • Hausman, D., 1995, “The Impossibility of Interpersonal Utility Comparisons.” Mind, 104: 473–490. • Hurley, S., 1985, “Supervenience and the Possibility of Coherence.” Mind, 94: 501–525. • Inada, K.-I., 1964, “A Note on the Simple Majority Decision Rule.” Econometrica, 32: 525–531. • Kanazawa, S., 1998, “A brief note on a further refinement of the Condorcet Jury Theorem for heterogeneous groups.” Mathematical Social Sciences, 35: 69–73. • Knight, J. and J. Johnson, 1994, “Aggregation and Deliberation: On the Possibility of Democratic Legitimacy.” Political Theory, 22: 277–296. • Konieczny, S. and R. Pino Pérez, 2002, “Merging Information Under Constraints: A Logical Framework.” Journal of Logic and Computation, 12: 773–808. • Kornhauser, L. A. and L. G. Sager, 1986, “Unpacking the Court.” Yale Law Journal, 96: 82–117. • Kroedel, T. and F. Huber, 2013, “Counterfactual Dependence and Arrow.” Noûs, 47(3): 453–466. • Ladha, K., 1992, “The Condorcet Jury Theorem, Free Speech and Correlated Votes.” American Journal of Political Science, 36: 617–634. • Lehrer, K. and C. Wagner, 1981, Rational Consensus in Science and Society. Dordrecht/Boston: Reidel. • List, C., 2001, “A Note on Introducing a ‘Zero-Line’ of Welfare as an Escape-Route from Arrow's Theorem.” Pacific Economic Review, 6, special section in honour of Amartya Sen, 223–238. • –––, 2003a, “The epistemology of special majority voting.” Working paper, London School of Economics. [List 2003a available online (pdf)] • –––, 2003b, “Are Interpersonal Comparisons of Utility Indeterminate?” Erkenntnis, 58: 229–260. • –––, 2003c, “A Possibility Theorem on Aggregation over Multiple Interconnected Propositions.” Mathematical Social Sciences, 45: 1–13 (with Corrigendum in Mathematical Social Sciences, 52: • –––, 2004a, “Multidimensional Welfare Aggregation.” Public Choice, 119: 119–142. • –––, 2004b, “A Model of Path-Dependence in Decisions over Multiple Propositions.” American Political Science Review, 98: 495–513. • –––, 2006, “The Discursive Dilemma and Public Reason.” Ethics, 116: 362–402. • –––, 2011, “The Logical Space of Democracy.” Philosophy and Public Affairs, 39: 262–297. • –––, 2012, “The Theory of Judgment Aggregation: An Introductory Review.” Synthese, 187: 179–207. • List, C. and R. E. Goodin, 2001, “Epistemic Democracy: Generalizing the Condorcet Jury Theorem.” Journal of Political Philosophy, 9: 277–306. • List, C., R. C. Luskin, J. S. Fishkin, and I. McLean, 2013, “Deliberation, Single-Peakedness, and the Possibility of Meaningful Democracy: Evidence from Deliberative Polls.” Journal of Politics, 75: 80–95. • List, C. and P. Pettit, 2002, “Aggregating Sets of Judgments: An Impossibility Result.” Economics and Philosophy, 18(1): 89–110. • –––, 2004, “Aggregating Sets of Judgments: Two Impossibility Results Compared.” Synthese, 140: 207–235. • May, K. O., 1952, “A set of independent, necessary and sufficient conditions for simple majority decision.” Econometrica, 20: 680–684. • –––, 1954, “Intransitivity, Utility, and the Aggregation of Preference Patterns.” Econometrica, 22: 1–13. • McConway, K. J., 1981, “Marginalization and Linear Opinion Pools.” Journal of the American Statistical Association, 76: 410–414. • McLean, I., 1990, “The Borda and Condorcet principles: Three medieval applications.” Social Choice and Welfare, 7: 99–108. • McLean, I. and F. Hewitt (eds.), 1994, Condorcet: Foundations of Social Choice and Political Theory. Cheltenham: Edward Elgar Publishing. • McLean, I. S., A. McMillan, and B. L. Monroe, 1995, “Duncan Black and Lewis Carroll.” Journal of Theoretical Politics, 7: 107–123. • ––– (eds.), 1996, A Mathematical Approach to Proportional Representation: Duncan Black on Lewis Carroll. Dordrecht: Kluwer. • McLean, I. and A. B. Urken (eds.), 1995, Classics of Social Choice. Ann Arbor: University of Michigan Press. • Miller, D., 1992, “Deliberative Democracy and Social Choice.” Political Studies, 40 (special issue): 54–67. • Miller, M. K. and D. Osherson, 2009, “Methods for distance-based judgment aggregation.” Social Choice and Welfare, 32: 575–601. • Monjardet, B., 2005, “Social choice theory and the ‘Centre de Mathématique Sociale’: some historical notes.” Social Choice and Welfare, 25: 433–456. • Mongin, P., 1995, “Consistent Bayesian aggregation.” Journal of Economic Theory, 66: 313–351. • –––, 1997, “Spurious Unanimity and the Pareto Principle.” Paper presented at the Conference on Utilitarianism, New Orleans, March 1997. [Mongin 1997 available online (pdf)] • –––, 2008, “Factoring Out the Impossibility of Logical Aggregation.” Journal of Economic Theory, 141: 100–113. • Morreau, M., 2010, “It simply does not add up: Trouble with overall similarity.” Journal of Philosophy, 107: 469–490. • –––, forthcoming, “Theory Choice and Social Choice: Kuhn Vindicated.” Mind. • Moulin, H., 1980, “On Strategy-Proofness and Single Peakedness.” Public Choice, 35: 437–455. • –––, 2004, Fair Division And Collective Welfare. Cambridge, MA: MIT Press. • Mueller, D. C., 2003, Public Choice III. Cambridge: Cambridge University Press. • Nehring, K., 2003, “Arrow's theorem as a corollary.” Economics Letters, 80: 379–382. • Nehring, K. and C. Puppe, 2002, “Strategyproof Social Choice on Single-Peaked Domains: Possibility, Impossibility and the Space Between.” Unpublished manuscript, University of California at • –––, 2007, “The structure of strategy-proof social choice—Part I: General characterization and possibility results on median spaces.” Journal of Economic Theory, 135: 269–305. • –––, 2010, “Abstract Arrovian Aggregation.” Journal of Economic Theory, 145: 467–494. • Okasha, S., 2009, “Individuals, groups, fitness and utility: multi-level selection meets social choice theory.” Biology and Philosophy, 24: 561–584. • –––, 2011, “Theory Choice and Social Choice: Kuhn versus Arrow.” Mind, 120: 83–115. • Ooghe, E. and L. Lauwers, 2005, “Non-dictatorial extensive social choice.” Economic Theory, 25: 721–743. • Parfit, D., 1984, Reasons and Persons. Oxford: Oxford University Press. • Pauly, M. and M. van Hees, 2006, “Logical Constraints on Judgment Aggregation.” Journal of Philosophical Logic, 35: 569–585. • Pearl, J., 2000, Causality: models, reasoning, and inference. Cambridge: Cambridge University Press. • Pettit, P., 2001, “Deliberative Democracy and the Discursive Dilemma.” Philosophical Issues, 11: 268–299. • Pigozzi, G., 2006, “Belief merging and the discursive dilemma: an argument-based account to paradoxes of judgment aggregation.” Synthese, 152: 285–298. • Poisson, S. D., 1837, Recherches sur la probabilité des jugements en matière criminelle et en matière civile: précédées des règles générales du calcul des probabilités. • Rawls, J., 1971, A Theory of Justice. Cambridge/MA: Harvard University Press. • Regenwetter, M., B. Grofman, A. A. J. Marley, and I. Tsetlin, 2006, Behavioral Social Choice. Cambridge: Cambridge University Press. • Riker, W., 1982, Liberalism against Populism. San Francisco: W.H. Freeman and Co. • Roberts, K., 1995, “Valued Opinions or Opinionated Values: The Double Aggregation Problem.” Choice, Welfare and Development: A Festschrift in Honour of Amartya Sen. K. Basu, P. K. Pattanaik, and K. Suzumura (eds.), Oxford: Oxford University Press, pp. 141–165. • Saari, D. G., 1990, “The Borda dictionary.” Social Choice and Welfare, 7: 279–317. • Salles, M. (edited with introduction), 2005, “The history of Social Choice.” Special issue, Social Choice and Welfare, 25: 229–564. • Satterthwaite, M., 1975, “Strategy-proofness and Arrow's conditions: existence and correspondence theorems for voting procedures and social welfare functions.” Journal of Economic Theory, 10: • Sen, A. K., 1966, “A Possibility Theorem on Majority Decisions.” Econometrica, 34: 491–499. • –––, 1970a, “The Impossibility of a Paretian Liberal.” Journal of Political Economy, 78: 152–157. • –––, 1970b, Collective Choice and Social Welfare. San Francisco: Holden-Day. • –––, 1977, “On weights and measures: informational constraints in social welfare analysis.” Econometrica, 45: 1539–1572. • –––, 1982, Choice, Welfare and Measurement. Oxford: Blackwell. • –––, 1983, “Liberty and social choice.” Journal of Philosophy, 80: 5–28. • –––, 1998, “The Possibility of Social Choice.” Nobel lecture, December 8, 1998, Stockholm. [Sen 1998 available online (pdf)] • Shapley, L. and B. Grofman, 1984, “Optimizing group judgment accuracy in the presence of interdependencies.” Public Choice, 43: 329–343. • Stegenga, J., 2013, “An impossibility theorem for amalgamating evidence.” Synthese, 190(2): 2391–2411. • Suppes, P., 2005, “The pre-history of Kenneth Arrow's social choice and individual values.” Social Choice and Welfare, 25: 319–226. • Thomson, W., 2000, “On the axiomatic method and its recent applications to game theory and resource allocation.” Social Choice and Welfare, 18: 327–386. • Tsetlin, I., M. Regenwetter, and B. Grofman, 2003, “The Impartial Culture Maximizes the Probability of Majority Cycles.” Social Choice and Welfare, 21: 387–398. • Vacca, R., 1921, “Opinioni Individuali e Deliberazioni Collettive.” Rivista Internazionale di Filosofia del Diritto,: 52–59. • Ward, B., 1965, “Majority Voting and Alternative Forms of Public Enterprises.” The Public Economy of Urban Communities. J. Margolis (ed.). Baltimore: Johns Hopkins Press. • Weymark, J., 2006, “The Normative Approach to the Measurement of Multidimensional Inequality.” Inequality and Economic Integration. F. Farina and E. Savaglio (eds.), London: Routledge, pp. • Wilson, R., 1975, “On the Theory of Aggregation.” Journal of Economic Theory, 10: 89–99. How to cite this entry. Preview the PDF version of this entry at the Friends of the SEP Society. Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers, with links to its database. Condorcet, Marie-Jean-Antoine-Nicolas de Caritat, Marquis de | consequentialism | Cusanus, Nicolaus [Nicolas of Cusa] | Pufendorf, Samuel Freiherr von: moral and political philosophy | rationality | Rawls, John | Tarski, Alfred | voting I am grateful to the editors, their reviewers, Franz Dietrich, Iain McLean, and Michael Morreau for comments.
{"url":"http://plato.stanford.edu/entries/social-choice/","timestamp":"2014-04-21T05:11:48Z","content_type":null,"content_length":"171520","record_id":"<urn:uuid:40158ae9-6bbc-46f8-ab63-4017560b8191>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Wavelet-Based Peak Detection In Mathematica the closest thing (currently) to a peak detect function is a function called MaxDetect, which is unfortunately very slow with large datasets and could be better at finding peaks. So (for a project which I will post here soon) I decided to write my own peak detection function. I found a nice article from National Instruments here: I have subsequently implemented this in Mathematica. So the basic idea behind this algorithm is whenever the detail wavelet passes zero, there is a peak. Let me explain this: (I am assuming that you understand the basic concept of wavelets) When you get the wavelet of a 1-dimensional object, you will get the approximation coefficients, and the detail coefficients. As their names suggest the approximation coefficients are the basic shape of the thing (used for noise reduction) while the detail coefficients are used for the little bumps and valleys that texture the original wave. For example Here is some noisy data: So to access the detail vs approx wavelets from some wavelet data like this: data = Table[Sin[x] + RandomReal[{0, .1}], {x, 0, 2 Pi, .01}]; dwd = DiscreteWaveletTransform[data, HaarWavelet[], 3] So I am running a wavelet transform of data with a haar wavelet for 3 layers. I can see the various coefficients like this: The coefficients that end in 1 are detail wavelets while those which end in 0 are approx wavelets. (There is a {0} and a {0,0} wavelet but it doesn't show them). The deeper you go into these coefficients the less wavelets are used to construct the wave and therefor they are more basic. In this case we want the highest quality data so we can just look at level 1. (So the coefficient named "{1}"). So time to get down to the real code. Here is some example data, in this case a fourier transform. This is a fairly easy case but you can see that the number of datapoints is giant and there is some noise toward the bottom. So the first thing we need to do is get rid of all the little peaks we don't care about. Here "data" is the data with peaks in it and "min" is the minimum size of the peaks that will be detected. dwd=DiscreteWaveletTransform[If[# < min, 0, #] & /@ data, HaarWavelet[], In this case I have set "min" to 1. We can now run the inverse wavelet transform of that for the first detail coefficient: InverseWaveletTransform[dwd, HaarWavelet[], {1}] Which looks like this: Now we detect every time those peaks cross zero. You may also notice that there is another threshold here which I found to be useful. If[Abs[#] < ther, 0, #] & /@ DiscreteWaveletTransform[If[# < min, 0, #] & /@ data, HaarWavelet[], 1], HaarWavelet[], {1}]]] This gets the position of the points in each peak: If[Abs[#] < ther, 0, #] & /@ DiscreteWaveletTransform[If[# < min, 0, #] & /@ data, HaarWavelet[], 1], HaarWavelet[], {1}]]] , 1]] We then split them into groups for each peak: If[Abs[#] < ther, 0, #] & /@ DiscreteWaveletTransform[If[# < min, 0, #] & /@ data, HaarWavelet[], 1], HaarWavelet[], {1}]]], 1]], Abs[#1 - #2] < cther &] "cther" is the distance 2 points have to be appart before they are counted as sepperate peaks. In this case I use 50 because I don't want it detecting false peaks right next to the real ones. But you could set it to split after every run of consecutive 1s. After this it is pretty technical and so I present (drumroll) the final function!!! FindPeaks[data_, ther_: .2, min_: 0, cther_: 50] := data[[peaks]]}]][(#[[Ordering[data[[#]], -1][[1]]]] & /@ If[Abs[#] < ther, 0, #] & /@ DiscreteWaveletTransform[If[# < min, 0, #] & /@ data, HaarWavelet[], 1], HaarWavelet[], {1}]]], 1]], Abs[#1 - #2] < cther &])] So here is it working on the fourier transform! The blue is the unmodified data and the red is a line from peak to peak that fell within our requirments, but with different settings we could get all of these smaller peaks off to the right. Well that's all I got so if you have any questions about the code or edits that make it better, post it here! Very nice approach. I wanted to run the code to play around with it but I don’t think you gave the Fourier transform data. Any chance you can post that? Vitaliy Kaurov How can I post it? I can't just write out the data beacsue it is 5000 data points. Christopher Wolfram I thought you are generating some points with a formula and then taking Fourier transform. But, yes, no way to post 5000 points of pure data ;-) Vitaliy Kaurov Hi people, Just adding more ideas, I wrote a code that get the maxima/minima in a data range, for example a experimental Absortion light of a material, what it does: read always 3 points and when the middle point is greater or lesser than the two other we can have a maximum, or when it is lesser than the 2 others we can have a minima. Marcelo De Transpose[{x, y}][[ Cicco 1 + Flatten@ 1 Vote Partition[Transpose[{x, y}][[All, 2]], 3, 1], {a_, b_, c_} /; b < Min[a, c] || b > Max[a, c]]]], All, How fast is it when using a large dataset? Christopher Wolfram I really do not know. Up to 100 points is really fast. Marcelo De Cicco I'm going to try 20,000. Where do you pu tin the list? Christopher Wolfram the code is very simple, in fact is part of another bigger code. So I clean it see below: "your data"[[ 1 + Flatten@ Marcelo De Cicco Partition["your data"[[All, 2]], 3, 1], {a_, b_, c_} /; b > Max[a, c]]]], All, 1]] But I think you should also include some constraints or conditional for the differences between the middle point and two others, otherwise you will get all small fluctuations. For example, you can think a constraint that only considerer differences grater than 1.2 or 2, depends on the scale that you are working with. I got this error Part::partd: Part specification <<24>>,0.000292454,0.000246734,0.000581422,0.000241731,0.000190411,0.000268275,0.00069401,0.000499151,0.000759404,0.000743745,0.000950466,0.000383596,<<64462>>} is longer than depth of Christopher object. >> Also have this looks similar to MinDetect[] and MaxDetect[] but min and max detect are painfully slow. How big is your data (mega, gigas,etcc)? See the following link: http://mathematica.stackexchange.com/questions/14645/prevent-part-from-trying-to-extract-parts-of-symbolic-expressions Marcelo De Cicco Part isn't trying to extract pieces of the expression because I am usion a variable. Also probably kilo of mega but with wavelets you can get the transform for an image which can be giant almost instantlly. Christopher Wolfram To upload the datapoints have you tried using Compress? If the resultant string is not too long you can post it and people can Decompress it. Hector Zenil Let me guess, are you analyzing sound waves to get the current tone (height)? Vladimir Grankovsky Almost. First givin a fourier transform it is actually the position on the x axis which determines the pitch. And second, I am going to write a community post about some pitch recignition stuff soon I was doing some pitch recognition with Wavelet transforms, but it was a long time ago and I didn't know Mathematica well. It was very slow. But anyway, I was using this for peak detection (very empirical): by the way, a useful idea is sample dropping - taking for example every 5th or 15th sample from the initial data (but mind the nyquist frequency) cwd=ContinuousWaveletTransform[signal2, GaborWavelet[15], {7, 30}, SampleRate -> sr]; freq = (#1[[1]] -> sr/#1[[2]] &) /@ cwd["Scales"]; Maxima[scale_, treshold_: 0.7] := Module[{i, max = -\[Infinity], maxpoz = 0}, If[Max[scale] <= treshold, Return[{}];]; For[i = Length[scale], i >= 0, i--, If[max < scale[[i]], max = scale[[i]]; maxpoz = i; Vladimir ]; Grankovsky If[max - scale[[i]] >= treshold, frequencies = With[{list = 2 freq[[Maxima[Abs /@ cwd[All][[50 ;;, 2, #]], 0.1] + 49]][[ All, 2]]}, If[Length[list] >= 1, list[[-1]], Null] ] & /@ Range[Length[cwd[All][[1, 2]]]]; When using the Haar wavelet and only one detail level, isn't this equivalent to finding the crossings of the (upsampled) first differences, which should be efficient too? Rui Rojo
{"url":"http://community.wolfram.com/groups/-/m/t/91868?p_p_auth=zqkd0KKU","timestamp":"2014-04-17T12:33:04Z","content_type":null,"content_length":"136331","record_id":"<urn:uuid:41ca323b-e818-4b1b-8472-673d9541167c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
1 Introduction At high enough energies, Einstein’s theory of general relativity breaks down, and will be superceded by a quantum gravity theory. The classical singularities predicted by general relativity in gravitational collapse and in the hot big bang will be removed by quantum gravity. But even below the fundamental energy scale that marks the transition to quantum gravity, significant corrections to general relativity will arise. These corrections could have a major impact on the behaviour of gravitational collapse, black holes, and the early universe, and they could leave a trace – a “smoking gun” – in various observations and experiments. Thus it is important to estimate these corrections and develop tests for detecting them or ruling them out. In this way, quantum gravity can begin to be subject to testing by astrophysical and cosmological observations. Developing a quantum theory of gravity and a unified theory of all the forces and particles of nature are the two main goals of current work in fundamental physics. There is as yet no generally accepted (pre-)quantum gravity theory. Two of the main contenders are M theory (for reviews see, e.g., [214, 356, 377]) and quantum geometry (loop quantum gravity; for reviews see, e.g., [365, 409]). It is important to explore the astrophysical and cosmological predictions of both these approaches. This review considers only models that arise within the framework of M theory. In this review, we focus on RS brane-worlds (mainly the RS 1-brane model) and their generalizations, with the emphasis on geometry and gravitational dynamics (see [304 , 314, 269, 424, 348, 268, 360, 120, 49, 267, 270 ] for previous reviews with a broadly similar approach). Other reviews focus on string-theory aspects, e.g., [147, 316, 97, 357], or on particle physics aspects, e.g., [354, 366, 261, 151, 75 ]. We also discuss the 5D DGP models, which modify general relativity at low energies, unlike the RS models; these models have become important examples in cosmology for achieving late-time acceleration of the universe without dark energy. Finally, we give brief overviews of 6D models, in which the brane has co-dimension two, introducing very different features to the 5D case with co-dimension one branes. Update
{"url":"http://relativity.livingreviews.org/Articles/lrr-2010-5/articlese1.html","timestamp":"2014-04-17T15:26:37Z","content_type":null,"content_length":"13632","record_id":"<urn:uuid:b808c545-c942-4a1a-9752-315d28e4e287>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] Help optimizing an algorithm Chris Weisiger cweisiger@msg.ucsf.... Wed Jan 30 11:29:01 CST 2013 We have a camera at our lab that has a nonlinear (but monotonic) response to light. I'm attempting to linearize the data output by the camera. I'm doing this by sampling the response curve of the camera, generating a linear fit of the sample, and mapping new data to the linear fit by way of the sample. In other words, we have the following functions: f(x): the response curve of the camera (maps photon intensity to reported counts by the camera) g(x): an approximation of f(x), composed of line segments h(x): a linear fit of g(x) We get a new pixel value Y in -- this is counts reported by the camera. We invert g() to get the approximate photon intensity for that many counts. And then we plug that photon intensity into the linear fit. Right now I believe I have a working algorithm, but it's very slow (which in turn makes testing for validity slow), largely because inverting g() involves iterating over each datapoint in the approximation to find the two that bracket Y so that I can linearly interpolate between them. Having to iterate over every pixel in the image in Python isn't doing me any favors either; we typically deal with 528x512 images so that's 270k iterations per If anyone has any suggestions for optimizations I could make, I'd love to hear them. My current algorithm can be seen here: -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20130130/a160296b/attachment.html More information about the SciPy-User mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2013-January/034032.html","timestamp":"2014-04-16T04:42:58Z","content_type":null,"content_length":"4056","record_id":"<urn:uuid:69ea930a-bece-4f28-93d2-a61771b3c234>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
Intel® Math Kernel Library Performance: Ready to Use Intel® Math Kernel Library (Intel® MKL) includes a wealth of routines to accelerate application performance and reduce development time. Today’s processors have increasing core counts, wider vector units and more varied architectures. The easiest way to take advantage of all of that processing power is to use a carefully optimized computing math library designed to harness that potential. Even the best compiler can’t compete with the level of performance possible from a hand-optimized library. Because Intel has done the engineering on these ready-to-use, royalty-free functions, you’ll not only have more time to develop new features for your application, but in the long run you’ll also save development, debug and maintenance time while knowing that the code you write today will run optimally on future generations of Intel processors. Intel® MKL includes highly vectorized and threaded Linear Algebra, Fast Fourier Transforms (FFT), Vector Math and Statistics functions. Through a single C or Fortran API call, these functions automatically scale across previous, current and future processor architectures by selecting the best code path for each. Intel® MKL delivers industry-leading performance on Monte Carlo and other math-intensive routines “I’m a C++ and Fortran developer and have high praise for the Intel® Math Kernel Library. One nice feature I’d like to stress is the bitwise reproducibility of MKL which helps me get the assurance I need that I’m getting the same floating point results from run to run." Franz Bernasek CEO and Senior Developer, MSTC Modern Software Technology “Intel MKL is indispensable for any high-performance computer user on x86 platforms.” Prof. Jack Dongarra, Innovative Computing Lab, University of Tennessee, Knoxville Comprehensive Math Functionality – Covers Range of Application Needs Intel® MKL contains a wealth of threaded and vectorized complex math functions to accelerate a wide variety of software applications. Why write these functions yourself when Intel has Click to already done the work for you? Major functional categories include Linear Algebra, Fast Fourier Transforms (FFT), Vector Math and Statistics. Cluster-based versions of LAPACK and FFT are also included to support MPI-based distributed memory computing. Standard APIs – For Immediate Performance Results Wherever available, Intel® MKL uses de facto industry standard APIs so that minimal code changes are required to switch from another library. This makes it quick and easy to improve your application performance through simple function substitutions or relinking. Click to enlarge Simply substituting Intel® MKL’s LAPACK (Linear Algebra PACKage), for example, can yield 500% or higher performance improvement (benchmark left.) In addition to the industry-standard BLAS and LAPACK linear algebra APIs, Intel® MKL also supports MIT’s FFTW C interface for Fast Fourier Transforms. Highest Performance and Scalability across Past, Present & Future Processors – Easily and Automatically Behind a single C or Fortran API, Intel® MKL includes multiple code paths -- each optimized for specific generations of Intel and compatible processors. With no code-branching required by application developers, Intel® MKL utilizes the best code path for maximum performance. Click to Even before future processors are released, new code paths are added under these same APIs. Developers just link to the newest version of Intel® MKL and their applications are ready to enlarge take full advantage of the newest processor architectures. In the case of the Intel® Many Integrated Core Architecture (Intel® MIC Architecture), in addition to full native optimization support, Intel® MKL can also automatically determine the best load balancing between the host CPU and the Intel® Xeon® Phi™ coprocessor. Flexibility to Meet Developer Requirements Developers have many requirements to meet. Sometimes these requirements conflict and need to be balanced. Need consistent floating point results with the best application performance Click to possible? Want faster vector math performance and don’t need maximum accuracy? Intel® MKL gives you control over the necessary tradeoffs. Intel® MKL is also compatible with your choice of compilers, languages, operating systems, linking and threading models. One library solution across multiple environments means only one library to learn and manage. Feature Benefit Overcome the inherently non-associativity characteristics of floating-point arithmetic results with new support in the Intel Conditional Numerical Reproducibility MKL. New in this release is the ability to achieve reproducibility without memory alignment. New and improved optimizations for Haswell Intel® Core™, Intel® Intel MKL is optimized for the latest and upcoming processor architectures to deliver the best performance in the industry. For microarchitecture code name Ivy Bridge, future Broadwell processors example, new optimizations for the fusedmultiply-add (FMA) instruction set introduced in Haswell Core processors deliver up to and Intel® Xeon® Phi™ coprocessors 2x performance improvement for floating point calculations. For selected linear algebra functions, Intel MKL can automatically determine the best way to utilize a system containing one or Automatic offload and compute load balancing between Intel Xeon more Intel Xeon Phi coprocessors. The developer simply calls the MKL function and it will take advantage of the coprocessor if processors and Intel Xeon Phi coprocessors – Now for Windows* present on the system. New functions added for this release plus Windows OS support. ExtendedEigensolver Routines based on the FEAST algorithm New sparse matrix Eigensolver routines handle larger problem sizes and use less memory. API-compatibility with the open source FEAST Eigenvalue Solver makes it easy to switch to the highly optimized Intel MKL implementation. Linear Algebra Intel® MKL BLAS provides optimized vector-vector (Level 1), matrix-vector (Level 2) and matrix-matrix (Level 3) operations for single and double precision real and complex types. Level 1 BLAS routines operate on individual vectors, e.g., compute scalar product, norm, or the sum of vectors. Level 2 BLAS routines provide matrix-vector products, rank 1 and 2 updates of a matrix, and triangular system solvers. Level 3 BLAS level 3 routines include matrix-matrix products, rank k matrix updates, and triangular solvers with multiple right-hand sides. Intel® MKL LAPACK provides extremely well-tuned LU, Cholesky, and QR factorization and driver routines that can be used to solve linear systems of equations. Eigenvalue and least-squares solvers are also included, as are the latest LAPACK 3.4.1 interfaces and enhancements. If your application already relies on the BLAS or LAPACK functionality, simply re-link with Intel® MKL to get better performance on Intel and compatible architectures. Fast Fourier Transforms Intel® MKL FFTs include many optimizations and should provide significant performance gains over other libraries for medium and large transform sizes. The library supports a broad variety of FFTs, from single and double precision 1D to multi-dimensional, complex-to-complex, real-to-complex, and real-to-real transforms of arbitrary length. Support for both FFTW* interfaces simplifies the porting of your FFTW-based applications. Vector Math Intel® MKL provides optimized vector implementations of computationally intensive core mathematical operations and functions for single and double precision real and complex types. The basic vector arithmetic operations include element-by-element summation, subtraction, multiplication, division, and conjugation as well as rounding operations such as floor, ceil, and round to the nearest integer. Additional functions include power, square root, inverse, logarithm, trigonometric, hyperbolic, (inverse) error and cumulative normal distribution, and pack/unpack. Enhanced capabilities include accuracy, denormalized number handling, and error mode controls, allowing users to customize the behavior to meet their individual needs. Intel® MKL includes random number generators and probability distributions that can deliver significant application performance. The functions provide the user the ability to pair Random-Number Generators such as Mersenne Twister and, Niederreiter with a variety of Probability Distributions including Uniform, Gaussian and Exponential. Intel® MKL also provides computationally intensive core/building blocks for statistical analysis both in and out-of-core. This enables users to compute basic statistics, estimation of dependencies, data outlier detection, and missing value replacements. These features can be used to speed-up applications in computational finance, life sciences, engineering/simulations, databases, and other Data Fitting Intel® MKL includes a rich set of splines functions for 1-dimensional interpolation. These are useful in a variety of application domains including data analytics (e.g. histograms), geometric modeling and surface approximation. Splines included are linear, quadratic, cubic, look-up, stepwise constant and user-defined. When exact reproducible calculations are required, Intel® MKL gives developers control over the tradeoffs to maximize performance across a set of target processors while delivering identical floating point results Intel® MKL is optimized for the latest and upcoming processor architectures to deliver the best performance in the industry. Support for the new digital random number generator provides truly random seeding of statistical calculations. Intel® Xeon® processor and Intel® Xeon Phi™ coprocessors For Linear Algebra functionality, Intel® MKL can automatically determine the best way to utilize a system containing one or more Intel® MIC processors. The developer simply calls an MKL function and doesn’t have to worry about the details. A rich set of splines are now included to optimize 1-dimensional interpolation calculations used in a variety of application domains When exact reproducible calculations are required, Intel® MKL gives developers control over the tradeoffs to maximize performance across a set of target processors while delivering identical floating point results Intel® MKL is optimized for the latest and upcoming processor architectures to deliver the best performance in the industry. Support for the new digital random number generator provides truly random seeding of statistical calculations. Intel® Xeon® processor and Intel® Xeon Phi™ coprocessors For Linear Algebra functionality, Intel® MKL can automatically determine the best way to utilize a system containing one or more Intel® MIC processors. The developer simply calls an MKL function and doesn’t have to worry about the details. A rich set of splines are now included to optimize 1-dimensional interpolation calculations used in a variety of application domains Click on images for a larger view of the benchmark graphic. Linear Algebra Performance Charts DGEMM Intel® Optimized SMP LINPACK HPL LINPACK LU Factorization Cholesky Factorization FFT Performance Charts 2D and 3D FFTs on Intel® Xeon and Intel® Core Processors Cluster FFT Performance Cluster FFT Scalability Sparse BLAS and Sparse Solver Data Fitting Performance Charts Performance Charts DCSRGEMV and DCSRMM PARDISO Sparse Solver Natural cubic spline construction and interpolation Random Number Generator Performance Charts Vector Math Performance Chart Application Benchmark Performance Chart MCG31m1 VML exp() Monte-Carlo option pricing performance benchmark Click on images for a larger view of the benchmark graphic. Linear Algebra Performance Charts Intel® Optimized SMP LINPACK LU Factorization QR Factorization HPL LINPACK Cholesky Factorization Matrix Multiply Application Benchmark Performance Chart Batch 1D FFT Performance Chart Black- Scholes Chart Monte Carlo Option Pricing Videos to help you get started. Previously recorded Webinars: • Powered by MKL Accelerating NumPy and SciPy Performance with Intel® MKL- Python • Get Ready for Intel® Math Kernel Library on Intel® Xeon Phi™ Coprocessor • Beginning Intel® Xeon Phi™ Coprocessor Workshop: Advanced Offload Topics • Accelerating financial services applications using Intel® Parallel Studio XE with the Intel® Xeon Phi™ coprocessor Featured Articles More Tech Articles By Mohamad SindiPosted 10/25/20100 This is a step by step procedure on how to run the High Performance Linpack (HPL) benchmark on a Linux cluster using Intel-MPI. This was done on a Linux cluster of 128 nodes running Intel’s Nehalem processor 2.93 MHz with 12GB of RAM on each node. Tips and tricks on how to get the optimal performance settings for your mixed Intel MPI/OpenMP applications. An FAQ regarding starting up and tuning the Intel MPI Library Introduction Parallel programming was once the sole concern of extreme programmers worried about huge supercomputing problems. With the emergence of multi-core processors for mainstream applications, however, parallel programming is well poised to become a technique every professional software de... You can reply to any of the forum topics below by clicking on the title. Please do not include private information such as your email address or product serial number in your posts. If you need to share private information with an Intel employee, they can start a private thread for you. Hi, Does MKL Scalapack include the PXLAWRITE and PXLAREAD (X=Z,C,D,S) subroutines? When I try to use them in my code I get 'undefined reference' link errors. I do not see them in the MKL documentation, but I've come across a number of scalapack functions not included in the documentation that still exist in the library. Thanks, John I am trying to write a mex programm but it gives errors. When I use the VS2012 the code runs, but when I converted it to mex it didn't work. The code gives errors when it tries to use the mkl library (function dcopy, dgemm etc). I have installed the Composer XE 2013 SP1 and I use this command to compile the code mex -largeArrayDims ads.c -I"C:\Program Files (x86)\Intel\Composer XE 2013 SP1\mkl\ include" -L"C:\Program Files (x86)\Intel\Composer XE 2013 SP1\mkl\lib\intel64" -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_blas95_lp64 -lmkl_intel_ilp64 -lmkl_sequential -lmkl_blas95_lp64 For example I tried to use the function dcopy and the code and erros are: #include <math.h> #include <stdio.h> #include <stdlib.h> #include <mkl.h> #include "mkl_vml.h" #include "mex.h" #include "matrix.h" #include "mkl_vsl.h" void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) { double a[4]={1,2,3,4}; ... Hi, I just bumped into this error message while attempting to run a newly-built code: "Entry Point Not Found - The procedure entry point mkl_serv_set_xerbla_interface" could not be located in the dynamic link library mkl_intel_thread.dll". Any suggestions to fix this would be appreciated! The code is built with XE 2013 SP1 and the MKL libraries that come with it (Fortran/Libraries/Use Intel Math Kernel Library="Parallel (/Qmkl:parallel)". The runtime library is "Multithread DLL". It also USEs the LAPACK_95 and BLAS_95 modules (Linker/Input/Additional Dependencies = "mkl_blas95_lp64.lib mkl_lapack95_lp64.lib"), not sure if this is relevant. The platform is a Windows 7 64-bit workstation. Thanks in advance for your help, Olivier Hi, I am trying to run an SVD on a 19016x19016 matrix on my Mac OSX Mavericks with Armadillo linked to Intel MKL. But I get the following error: ./example SVD Start: 19016 19016 0.000000 ** On entry to DGESDD, parameter number 12 had an illegal value error: svd(): failed to converge Here is my code: #define ARMA_DONT_USE_WRAPPER #include mat U, V, A; vec s; svd(U,s,V,A); I make the program with: g++-4.2 -O3 -framework Accelerate example.cpp mmio.cpp -o example I know the MKL can deal with the sparse blas. It is very useful for the finite element coding. However, for the parallel computing, it needs parallel sparse blas. So, Is MKL plans to implement the parallel sparse blas? It will be very useful for the parallel finite element simulation. Furthermore, I found in the forum that the MKL will implement the MPI based pardiso in version 11.2. So, when it will be released? I have been using MKL PARDISO in C language projects for many years. I works nicely. Recently, I tried to include PARDISO in C++ project. But it does not work. The linker gives a message that PARDISO and omp_get_max_threads are undefined symbols." I included the following libraries: MKL_C, MKL_IA32, MKL_lapack, MKL_solver and libguide40. What am I missing? Hi, I am trying to reproduce the results of the MKL FFTW interface in this report: http://download-software.intel.com/sites/default/files/article/165868/in... Does anybody know where I can get the source code used in that report? So far, I have been running the source code in this webpage: http://numbercrunch.de/blog/2010/03/parallel-fft-performance/comment-pag... but the MKL FFTW interface shows really poor performance. I wonder if there is something I am missing. Thanks. Hi, I'm using Intel MKL from C#. In General it works. I want to use the Nonlinear Optimization Problem Solvers and I've translated the example, see http://software.intel.com/en-us/node/471540. But the dtrnlsp_solve gives me sometimes a memory exception (attempt to read write protected memory). I've attached all the dllimports, see below. [DllImport("mkl", CallingConvention = CallingConvention.Cdecl, ExactSpelling = true, SetLastError = false)] internal static extern int dtrnlsp_init( ref IntPtr handle, ref int n, ref int m, IntPtr x, [In] double[] eps, ref int iter1, ref int iter2, ref double rs); [DllImport("mkl", CallingConvention = CallingConvention.Cdecl, ExactSpelling = true, SetLastError = false)] internal static extern int dtrnlsp_check( ref IntPtr handle, ref int n, ref int m, IntPtr fjac, IntPtr fvec, [In] double[] eps, [In, Out] int[] info); [DllImport("mkl", CallingConvention = CallingConvention.Cdecl, ExactSpelling = true, SetLastError = false)] internal static extern i... Intel® Math Kernel Library 11.1 Нужно приступить к работе сейчас? Перейдите на вкладку «Обучение» для получения руководств и ссылок, которые помогут быстро приступить к работе. Помощь и советы Поиск по статьям поддержки Форумы — лучшее место для быстрого получения ответов от наших технических специалистов и ваших коллег. Можно использовать и для отчетов об ошибках. Служба поддержки — для получения безопасной веб-поддержки специалистов посетите наш веб-сайт Intel® Premier Support. Требуется регистрация в программе поддержки Intel Premier Support. Справка по загрузке, регистрации и лицензированию — ответы на вопросы, связанные с загрузкой, регистрацией и лицензированием. Примечания к выпуску — ознакомьтесь с примечаниями к выпуску в Интернете! Перечень исправлений — просмотр перечня исправлений для компилятора Документация на библиотеку Intel® Math Kernel Library 11.0 — просмотр документации через Интернет! Документация для других программных продуктов Популярные вопросы по поддержке **Source: Evans Data Software Developer surveys 2011-2013 Top Features • Vectorized and threaded for highest performance on all Intel and compatible processors • Compatible with all C, C++ and Fortran compilers • Royalty-free, per developer licensing for low cost deployment Available in the following suites or standalone:
{"url":"https://software.intel.com/ru-ru/intel-mkl?page=5","timestamp":"2014-04-24T01:48:48Z","content_type":null,"content_length":"121451","record_id":"<urn:uuid:dcb17930-e695-43d3-b399-c96468646ddf>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Aha!Math - Digital Interactive Math Curriculum Aha!Math - Online Math for Elementary and Middle School Get a clear path to success with Aha!Math Aha!Math provides engaging and motivating supplemental K-5 curriculum that builds students' foundational skills for math success. Fully aligned to state and Common Core standards, Aha!Math supports teachers with multiple models of instruction, and the data to make informed decisions. Aha!Math now provides all the components of RTI for K-5 math. Learn more » Excite Math Learning Aha!Math builds foundational skills and improves student achievement in math by providing engaging instructional content and activities, assessments, progress monitoring tools and data-driven learning plans to guide effective instruction. Aha!Math’s research-proven instructional design features engaging lessons, online journaling, digital peer coaches, multiple modes of instruction — including audio, visual, narrative, humor, game play, student modeling and interactive learning — and immediate positive and constructive feedback. Improve Student Outcomes Aha!Math helps you: • Engage and excite young learners • Identify at risk students and learning gaps — early • Gain insight into teaching opportunities • Choose from prescriptive recommendations • Monitor progress over time • Adapt and differentiate instruction for groups and individuals • Address learning gaps • Accelerate progress and improve student achievement Students Model New Math Skills Aha!Math's curriculum units focus on key foundational math concepts, provide opportunities to build conceptual understanding and procedural fluency, discovery and analytical skills, and cultivate student's creative problem-solving and 21st century skills. Class and Curriculum Management Simplified Aha!Math's class management tools let you easily get started in minutes: create and change classes, preview and assign curriculum lessons, individualize instruction, track student progress in real time, and even create your own lessons. Thank you Dr. Barclay Burns, Steve Wyborney, and Dr. Michele Douglass for your expertise in the development of Aha!Math. AHA!MATH and AHA!SCIENCE are registered, pending or common law trademarks of aha! Process, Inc. and are used under license by The Learning Internet, Inc. Research-based Math Model to Support Differentiated Instruction Research shows that Web-based learning stimulates student learning not only because it activates the brain, but also because it allows for genuine interaction, fluidity, and immediate feedback. We know from the research that these qualities enhance learning. Research also shows that the use of narrative and story increases student engagement, keeping students involved in the learning process, especially if the material is presented in well thought out and thematic contexts. The Learning.com interactive instructional model incorporates the following critical characteristics that have been identified by research to enhance learning: • Multisensory experiences – visual, auditory, and interactive – to allow for richer, more complete learning. • Opportunities for students to model, and thus hone their new skills. • Digital coaches that support students with multiple levels of immediate feedback and instructional support. • Content designed to be relevant to students' lives, humor, and a sense of playfulness, that in combination lead to motivated and engaged learners. • Context for learning experiences, giving students a clear understanding of how and when they would apply their knowledge and skills to solve problems. • Game-based learning, all within real-world contexts that students find relevant and interesting, and that include opportunities for students to apply specific learning strategies and build their problem-solving skills. Incorporating NCTM Focal Points The National Council of Teachers of Mathematics has identified curriculum focal points for pre-kindergarten through Grade 8 mathematics. To build students' strength in the use of mathematical processes, instruction in content areas should incorporate these focal points, which Aha!Math incorporates throughout its content, including: • the use of mathematics to solve problems, • an application of logical reasoning to justify procedures and solutions, and • an involvement in the design and analysis of multiple representations to learn, make connections among, and communicate about the ideas within and outside of mathematics. Digital Math Delivered in a Variety of Teaching Environments Teachers decide when and where to use Aha!Math – designed to be flexible for a a variety of settings, and to help teachers meet individual student needs. Teachers can use its interactive math assignments for whole class instruction, small groups and for individual students. An Aha!Math Edition to Suit Your Teaching Situation Aha!Math is available for Grades K-5 with the Elementary School Edition. The Whiteboard Edition features digital math content for your interactive whiteboard or projectors. Foundations for Middle School is a math resource to help middle school students get to grade-level math. Aha!Math is also aligned to Kendall Hunt math products. In the Classroom As a supplemental curriculum, Aha!Math gives teachers instructional material to complement their existing curriculum. Teachers may fit it into their current pacing plan, use it to review and reinforce challenging concepts, and to teach concepts in new ways. For example, Aha!Math provides teachers with multiple models to teach multiplication, from number lines to arrays, and more. Aha!Math is also ideal for whole class instruction using interactive whiteboards and projectors – and designed to be directed by the teacher, who can pause the action to check for student In the Computer Lab Aha!Math is an ideal curriculum for computer labs with individual lessons and games that engage students in using technology, providing immediate feedback – and freeing the teacher to circulate and support students in the lab. For Small Groups and Intervention Students who are struggling with specific concepts get the individualized instruction they need in a nontraditional approach to the topic they may feel they have already failed. With Aha!Math, students get immediate feedback in the games and lessons, and in a way that motivates them to come back again and again. Ideal for ELL Instruction Engage your students and get support for teaching your ELL students with Aha!Math. Get ELL strategy guides, share and find other teachers' proven lessons as well as engaging Spanish-English exercises to support language acquisition. Aha!Math is in both English and Spanish for Grades K-2. At Libraries, Community Centers and Home Build the home-to-school connection by encouraging parents to take their students to libraries or community centers for Web-access to Aha!Math, or log in at home to share the games, and engage the In Extended-Day Programs Students in after-school programs, summer school programs and other extended-day settings benefit from Aha!Math's interactive content, and gives teachers an effective resource in mixed-ability Because it's Web-delivered, teachers can assign content for use anywhere there is an Internet connection: enrolling students in the lower grade units at the beginning of a new school year to refresh concepts, supporting excelling students with curriculum at higher levels, and helping struggling students to work together through units they have covered in class. Online Math Curriculum Aligned to Common Core State Standards Get exactly what you need to help your students meet the new rigorous Common Core State Standards in math. Aha!Math for Common Core supports busy teachers with multiple models of instruction, and the data to make informed decisions. Digital lessons and games are relevant to students and to the way they want to learn. Elementary Math Curriculum Aligned to Common Core Standards Units align to the Common Core Math standards for grades K-6. Data Informs Decisions and Documents Growth Get a clear path with diagnostic assessments and prescriptive content that meet each student's unique needs. Teachers know exactly how students are meeting Common Core standards, and which curriculum items to assign next to address any gaps. Easily see student progress over time with benchmark reports tied to Aha!Math's summative assessments. Practice Fluency with Instructional Math Games The lively action in Aha!Math's Games captures and keeps students' interest, enabling them to apply their learning. Digital coaches motivate and provide feedback. Common Core Ades assessments aligned to the Common Core State Standards and the information you need to provide the right curriculum at the right time. Data from the assessments can be used to prescribe content and differentiate instruction for the class, group, or individual student. Built on the NCTM Curriculum Focal Points With over 400 curriculum assignments across 19 units, Aha!Math provides critical instruction and practice in the foundational math concepts so critical to future success in math. Educators have access to content from all grade levels, giving them the ability to support differentiated learning to target their students specific learning needs. Aligned to NCTM Focal Points* for grades K-5 Aha!Math addresses the following strands: • Numbers and operations • Geometry • Measurement • Algebra • Data analysis and probability Grade K-2 units consist of Games and Activities. Grade 3-5 units consist of Instructional Modules, Lessons, Games, Activities, and Quizzes. Understanding Whole Numbers to 10: Grade K • Introduction to numbers, numerals, and sets • Count, compare, and order sets • Join and separate sets of 20 or less • Quick recognition of numbers and numerals Shapes and Space: Grade K • Names and descriptions of shapes and solids • Describe orientation and position • Compare shapes to determine size • Explore composition and decomposition of shapes Comparing Lengths and Time: Grade K • Measurable attributes of space and time • Direct and indirect methods to measure, compare and sequence shapes Understanding Basic Addition and Subtraction: Facts to 18: Grades K-1 • Variety of models to learn basic addition and subtraction facts through 18 • Commutative and associative properties • Relationship between commutative and associative properties Place Value: Ones and Tens: Grade 1 • Counting patterns through 100 • Place value to the tens place • Use of the number line • Greater than and less than • Comparison of tens and number ordering Pieces of Shapes: Grade 1 • Use composition and decomposition to learn about properties of shapes and solids • Concepts of congruence and symmetry Understanding Numbers to 1000: Grades 1-2 • Place value for numbers through 1000 • Compose, decompose, compare and order whole numbers • Extend to numbers with four to six digits Addition and Subtraction of Multi-Digit Numbers: Grade 2 • Add and subtract larger numbers, both with and without regrouping • Introduces estimation as a tool for verifying accuracy of answers • Foundation for multiplication of numbers Linear Measurement: Grade 2 • Develop fluency with measuring length using standard and non-standard units • Solve problems involving measuring length Multiplication and Facts to 10: Grades 2-3 • Introduction to multiplication as a way to count objects arranged in equal sets • Developing fluency with basic multiplication facts from 0 x 0 to 10 x 10 • Commutative property and the relationship between multiplication and division Fractions: Grade 3 • Introduction of fractions through pictorial models • Becoming familiar with common fractions • Equivalent fractions • Writing and recognizing improper and mixed fractions 2-D Shapes and Transformations: Grade 3 • Elementary geometry • Common shapes and their properties • Congruence and similarity • Line symmetry and transformations • Rudiments of area on the plane Multiplication Facts - 11 and 12: Grades 3-4 • Elementary geometry • Introduction of the distributive property • Develop and practice the algorithm for multiplying large numbers • Estimation with multiplication Decimals and Fractions: Grade 4 • Decimals as an extension of place value • Relationship to fractions in denoting quantities less than a whole • Comparing, ordering, rounding and estimating decimals 2-D Shapes and Area: Grade 4 • Measure area with unit squares • Extend knowledge to calculating area of common shapes using multiplication Whole Number Division: Grades 4-5 • Review of relationship between division and multiplication • Long division algorithm up to two-digits • Estimation, divisibility, factors and multiples • Division with remainders Adding and Subtracting Fractions and Decimals: Grade 5 • Addition and subtraction of fractions with like and unlike denominators and mixed numbers • Addition and subtraction of decimals • Estimation • Connection between operations on fractions and operations on decimals 3-D Shapes including Surface Area and Volume: Grade 5 • Types of 3-D solids and how to calculate surface area of prisms • Calculate volumes of cubes, prisms, and more complicated single and composite solids • Learn strategies for estimating volume Multiplication and Division of Fractions and Decimals: Grades 5-6 • Strategies and techniques used to multiply and divide by fractions and decimals • Application of the number pi • Techniques to multiply by decimals to find the circumference and area of a circle and the volume of a cylinder. * Math Standards used by permission of the National Council of Teachers of Mathematics. Learning.com is solely responsible for the content and alignments to the standards. Full standards available at Digital interactive math assignments for individualized learning Each Aha!Math unit includes several interactive digital assignments for improving student understanding of a math topic. The Aha!Math curriculum has the following unique features: • Digital coaches that provide students with emotionally consistent instruction and feedback whether they are repeating instruction, correcting student responses, or rewarding a correct answer. • Development of procedural fluency and conceptual understanding of foundational concepts in math. • Multisensory experiences – visual, auditory, and interactive – to allow for richer, more complete learning. • Multiple levels of immediate feedback, both corrective and rewarding. • Relevant learning contexts that use humor and a sense of playfulness that motivate and engage students. • Instructional content that is ideal for use by math coaches for in-house professional development. Instruction Modules: Whole-class explicit instruction on key math concepts and skills • Designed for classroom delivery using a projector or interactive whiteboard. • Emphasizes understanding the "why" as well as the "how" of mathematics. • Models analysis and synthesis by showing how these higher-level thinking skills are used to create new knowledge and techniques from existing ones. Lessons: Interactive math concept review and fun practice • Targets the most challenging math concepts introduced in the related Instruction Module. • Provides explicit instruction and interactive environments for practicing computational skills in the targeted math concepts. Games: Educational math games for fluency development of critical math skills • In grades K-2, instructional content is embedded in the Games and the Games serve as replacements for Lessons. • In grades 3 and higher, Games provide an opportunity for students to apply and reinforce what they have learned through other instructional assignments. Quizzes: Evaluate students’ understanding of concepts covered in the curriculum • Designed as both instructional and diagnostic tools to provide teachers with actionable information on student progress. • Gives students practice at retrieving information, to enhance student recall, and ultimately to improve student performance on tests, including high-stakes assessments. • Quiz results are stored and reported to the teacher as raw and percentile scores. Activities: Teacher-led, whole class or group offline projects for concept application • Teacher guides with blackline masters, designed to extend online learning into the classroom. • Each Activity includes the following information for the teacher: Suggested prerequisites, Grade range, Concepts covered, Materials needed, Activity structure, Time needed, Warm up directions, Activity Instructions, Discussion suggestions, Wrap up, Scoring rubric, Extension ideas, Student worksheet. Math games & activities for Grades K, 1st, and 2nd Aha!Math K-2 games instruct and provide a fun and challenging environment for students to practice math concepts and procedures. While Aha!Math Activities extend online instruction into the classroom, allowing students to demonstrate their math knowledge to complete real-life scenarios. View this slideshow or click on a sample math game below to experience it. Kindergarten Math Game: One Fish Two Fish from the Understanding Whole Numbers Unit • Students learn the sequence of numbers from 1 to 10 by counting a specific number of items into a set and responding with a number. • Students understand and demonstrate one-to-one correspondence of numbers to 10. • Students count and write numbers to 20. Kindergarten Math Game: Strong Man Act from the Shapes and Space Unit • Students learn to compare shapes to find two similar shapes. 1st Grade Math Game: All Aboard from the Understanding Basic Addition and Subtraction Facts to 18 Unit • Students learn how to calculate sums and differences by putting together or breaking apart sets of objects. • Students add and subtract using models. • Students write addition and subtraction sentences. 1st Grade Math Game: Hopping Frog from the Place Value: Ones and Tens Unit • Students learn to compare two numbers up to 100 using the greater than symbol, the less than symbol, or the equal to symbol. 2nd Grade Math Game: Number Assembly from the Understanding Numbers to 1000 Unit • Students learn to compare two numbers up to 1000. • Students use place value to find the larger value of two numbers. Digital math curriculum for Grades 3rd, 4th, 5th Aha!Math curriculum for grades 3-5 consists of whole-class instruction modules, educational games, interactive lessons, informative quizzes, and offline activities. View this slideshow or click on a sample math assignment below to experience it. 3rd Grade Instruction Module: Models of Multiplication from the Multiplication and Facts to 10 Unit • Students learn how to represent multiplication using a variety of models: repeated addition, an array, an area model, an intersections model, and a number line. 3rd Grade Math Game: River Crossing from the Multiplication and Facts to 10 Unit • Students learn to use multiplication facts and math strategies to play a game and build fluency in multiplication facts. 3rd Grade Math Lesson: Multiplication Chart from the Multiplication and Facts to 10 Unit • Students learn to use the multiplication chart to find products of two factors between 0 and 10. 3rd Grade Math Quiz: Models of Multiplication Quiz from the Multiplication and Facts to 10 Unit • This quiz is designed to test concept understanding for the accompanying Instruction Module, "Models of Multiplication". 4th Grade Math Game: Pet Rescue Hotel from the Multiplication Facts 11 and 12, Multiplication Algorithm Unit • Students learn to apply their knowledge of multiplying two-digit numbers by one-digit numbers using place value and the distributive property. 4th Grade Math Lesson: Distributive Property from the Multiplication Facts 11 and 12, Multiplication Algorithm Unit • Students learn to use the distributive property to multiply numbers and compare the results with the sums (meaning of multiplication). 5th Grade Math Game: Space Station Repair from the Whole Number Division Unit • Students learn to estimate quotients by rounding the dividend to a compatible number (rounding dividends and divisors) and to divide numbers to find an exact answer. 5th Grade Math Lesson: Estimation of Quotients from the Whole Number Division Unit • Students learn how to estimate a division problem by either rounding the dividend to a compatible number or by estimating by completing the first division step of the problem. Digital math content for English Language Learners Aha!Math includes Spanish math games and activities for grades K-2 and support for ELL instruction in grades 3-5. This online ELL solution meets the needs of both the bilingual educator who needs native-language content instruction for the early primary grades, and the ELL instructor who provides English language instruction with native language support. • Spanish/English glossary to focus on academic and schema vocabulary • ELL strategy guide that assists teachers in using Aha!Math to help students in language acquisition and academic vocabulary • Journal activities for teachers to create math and language literacy exercises in English, Spanish or both Math Games Capture and Keep Student's Interest The lively action in Aha!Math's Games is set in real-world contexts. In grades K-2, digital coaches instruct students in Spanish to provide native-language instruction in math to Spanish-speaking Resources Help Deliver Math Instruction to ELLs Our ELL Strategy Guide, English/Spanish Glossaries, and Lesson Plan Templates help ELL teachers deliver math instruction as well as math teachers who have little experience with ELLs. Traditionally, Response to Intervention (RTI) is a multi-step approach to providing services and early interventions to struggling learners. RTI uses screening and benchmarks to identify needs and then provides academic and behavioral supports early in a student’s academic experience. RTI helps schools focus on high quality, early interventions while monitoring student progress. The information gained from an RTI process is used by school personnel and parents to determine the educational needs of the child and continually adapt instruction to meet those needs. Rather than waiting for failure before offering help, RTI illuminates a clear instructional path to help students overcome learning challenges. RTI was originally designed to accelerate learning for under-achieving students. And, while RTI remains an excellent choice for special education intervention (IDEA), increasingly, schools are recognizing the benefits of RTI for all learners. How does it work? The theory is simple. • Assess all students • Identify learning gaps • provide individualized instruction when students need it • Repeat and improve RTI is designed to give every student the opportunity to succeed and a very effective tool for raising the educational bar for all students. One of the great challenges of RTI is that it can require a lot of time, thought and expertise to execute it well. This is where Aha!Math and Learning.com's digital learning environment come in. We provide powerful yet teacher-friendly tools to streamline the RTI process. Aha!Math respects teachers' autonomy, classroom savy and unmatched knowledge of their own students while simultaneously saving them time by identifying exceptional and appropriate resources for them to choose from to address students' math learning needs. Learn more about RTI and Aha!Math » Assess and Prescribe Diagnostic assessments identify at-risk students, achievement gaps and teaching opportunities. Data from our pre-built Common Core assessments can be used to prescribe content and differentiate for the class, group or individual student. Educators can also create their own assessments using content from Aha!Math to target specific issues or learning gaps. Prescribe the Best Available Resources The Aha!Math prescription engine looks at Aha!Math content, district-created content, and your other standards-aligned district-licensed content to recommend math lessons and activities that best serve your students. Use all your district-licensed math resources to fullest, best effect. Monitor Progress Over Time Progress monitoring tools provide longitudinal data and reporting to document the RTI process and improve district performance. Clear visual representations of student progress help you to know that your students are learning what they need to know. Improving visibility into student performance helps you take corrective action before issues turn into problems. Document Student Growth Use benchmark reports tied to Aha!Math's summative assessments see student growth,monitor progress and use longitudinal data to improve student performance. These clean powerful reporting tools also help document your RTI process and effectiveness. Digital Interactive Math for Elementary School Aha!Math Elementary School Edition is a supplemental online curriculum for grades K-5. With a research-based instructional model and digital content, Aha!Math helps improve students' foundational math skills while developing their higher-level problem-solving and reasoning skills. Save Time with Ready-to-go Instruction Modules Teachers can introduce and explore math concepts and procedures, pause and restart the instruction, direct discussion, navigate the presentation slides, ask questions, and engage students. Math Games Motivate Students to Keep Practicing Teachers can facilitate interactive group learning using Aha!Math's instructional math games and still remain at the center of the action, guiding student participation, and noting where understanding lags. Digital math content for interactive whiteboards Get the elements of Aha!Math designed for use with the whole class, with lively instruction and games that free teachers to focus on how well students are grasping critical math skills. • Grab students’ attention and improve participation • Introduce and reteach math’s toughest concepts • Increase interactive whiteboard use Save Time with Ready-to-go Instruction Modules Teachers can introduce and explore math concepts and procedures, pause and restart the instruction, direct discussion, navigate the presentation slides, ask questions, and engage students. Build Fluency with Engaging Math Games Teachers can facilitate interactive group learning using Aha!Math's instructional math games and still remain at the center of the action, guiding student participation, and noting where understanding lags. Supplemental math for middle school students working below grade level Get the elements of Aha!Math that engages middle school students in learning and practicing, tying new concepts to earlier concepts. Aha!Math models the connection between concrete math models and symbolic math representations. • Interactive digital supplemental math curriculum covers multiplication, division, fractions, decimals, 2D and 3D shapes and area. • Engage and motivate students to review and relearn critical foundational math concepts to be successful with middle school curriculum. Reteach Key Concepts with Instruction Modules Teachers can review math concepts and procedures, pause and restart the instruction, direct discussion, navigate the presentation slides, ask questions, and engage students. Practice Fluency with Instructional Math Games The lively action in Aha!Math's Games captures and keeps students' interest, enabling them to apply their learning, and motivating them to come back again and again. Online Math Curriculum Aligned to TEKS Math Standards Aha!Math for TEKS is designed to engage and motivate students to be excited about math. Fully aligned to the TEKS, its research-based design supports busy teachers with multiple models of instruction, and the data to make informed decisions. Digital lessons and games are relevant to students, and to the way they want to learn. Elemementary Math Curriculum Aligned to TEKS Units align to the Texas Essential Knowledge and Skills (TEKS) for Math standards for grades K-5. Data Informs Decisions and Documents Growth Get a clear path with diagnostic assessments and prescriptive content that meet each student's unique needs. Teachers know exactly how students are meeting the TEKS, and which curriculum items to assign next to address any gaps. Easily see student progress over time with benchmark reports tied to Aha!Math's summative assessments. 2012 CODiE Finalist for Aha!Math - Best Mathematics Instructional Solution 2009 Teachers Pick Award for Aha!Math by Instructor Magazine (Scholastic) 2009 Association of Education Publishers' 2009 Distinguished Achievement Award for K-5 Math Curriculum 2008 CODiE Finalist for Aha!Math 2007-2008 Readers Choice Top 100 Products winner, awarded by District Administration magazine Interested in learning more about how Aha!Math accelerates progress and improves student achievement?
{"url":"http://www.learning.com/ahamath/index.htm","timestamp":"2014-04-18T11:03:40Z","content_type":null,"content_length":"143293","record_id":"<urn:uuid:8dc7174f-bb62-4d4c-88c4-a7f8ff2ec358>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Redondo Beach Math Tutor ...I can explain things in more than one way, and use analogies as well as my own custom created chemistry worksheets to break down the concept in a way that is more understandable to the student. I’m very results oriented with tutoring–to me, the way of explaining things that allows you to underst... 6 Subjects: including algebra 1, biology, ACT Math, geometry ...As an English Literature major, much of my course work has been writing intensive. I spent three years preparing speeches and arguments under time constraints as a forensics competitor. I am also finishing up my BA in English Literature at CSULB next year and have a fair amount of English Education courses under my academic belt as well. 22 Subjects: including algebra 1, English, writing, prealgebra ...I possess a current Clear Multiple Subject Teaching Credential and a Master's degree in Special Education. I have previously been certified in English, Social Studies, and Special Education, and I have experience teaching both general and special education students. I also have been trained by ... 29 Subjects: including geometry, SAT math, prealgebra, algebra 1 ...Please feel free to contact me with any questions you may have. I am also very happy to provide references upon request. Thanks again for your interest, and I hope to hear from you!I am currently a master's student studying clarinet performance at the University of Southern California under the tutelage of Yehuda Gilad. 28 Subjects: including calculus, ACT Math, algebra 2, trigonometry ...It is quite useful for most careers but it is one of the most difficult subjects for students. If they have any learning gaps from previous years, they will show up here. I love to teach math and would love to teach your student. 19 Subjects: including algebra 1, study skills, ESL/ESOL, SAT math
{"url":"http://www.purplemath.com/redondo_beach_math_tutors.php","timestamp":"2014-04-21T10:26:31Z","content_type":null,"content_length":"24036","record_id":"<urn:uuid:8dd4053c-198a-42f7-8528-9fe7d5226653>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Mequon Math Tutor Find a Mequon Math Tutor Hi, my name is Eric and hopefully I can relate math subjects to something you are familiar with. Having been through what you or your child is going through now, I know how it can be frustrating. I have a Bachelor's in Civil Engineering and I'm very computer literate. 5 Subjects: including algebra 1, Microsoft Excel, geometry, Microsoft Word ...I am a Parent Volunteer Leader for WI FACETS, an organization that helps parents understand the special education maze and how to best get what their children need to be successful students. I have worked with many students with ADD/ADHD in my career. I am a parent of an ADD/ADHD child and know first hand the trials and tribulations parenting a child with this disability. 36 Subjects: including SAT math, ACT Math, English, prealgebra ...I have prepared lessons focusing on grammar, vocabulary, and culture. I vary the lessons to involve speaking, reading, writing, and listening and use a great deal of songs, games, hands-on activities, TPR (total physical response), and art to enhance the language learning experience. My undergraduate course work included a major in English. 17 Subjects: including algebra 1, algebra 2, geometry, Spanish ...I provide individualized teaching and assessment and tailor all lessons specifically to each child's individual needs. Students who are struggling in Reading, Writing and Mathematics need intervention at an early age to reach their potential. I have worked with reluctant learners, and students who are English Language Learners. 10 Subjects: including algebra 1, prealgebra, ACT Math, reading ...Learning never stops, and you are never too old to learn. After the age of 50, these are a few of my personal accomplishments:- Learned to sail ("Medium Air" rated)- Learned to fly an airplane- Learned Olympic-style archery, and became the 2008 state champion- Wrote, directed, and produced a vid... 18 Subjects: including algebra 1, biology, vocabulary, grammar
{"url":"http://www.purplemath.com/Mequon_Math_tutors.php","timestamp":"2014-04-17T10:57:59Z","content_type":null,"content_length":"23776","record_id":"<urn:uuid:6df9ee76-465e-42d1-a8df-2393303ccdc6>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Vertical Exaggeration Depending on why you are creating your topographic profile, you may want to use vertical exaggeration when constructing it. Vertical exaggeration simply means that your vertical scale is larger than your horizontal scale (in the example you could use one inch is equal to 1000 ft. for your vertical scale, while keeping the horizontal scale the same). Vertical exaggeration is often used if you want to discern subtle topographic features or if the profile covers a large horizontal distance (miles) relative to the relief To determine the amount of vertical exaggeration used to construct a profile, simply divide the real-world units on the horizontal axis by the real-world units on the vertical axis. If the vertical scale is one 1"=1000’ and the horizontal scale is 1"=2000’, the vertical exaggeration is 2x (2000’/1000’).
{"url":"http://geology.isu.edu/geostac/Field_Exercise/topomaps/vert_ex.htm","timestamp":"2014-04-19T06:57:22Z","content_type":null,"content_length":"4649","record_id":"<urn:uuid:b4c8b4cb-f5ce-4c93-b763-2749d3649443>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
Phase noise power spectral density to Jitter Following a brief discussion with my friend Mr. Rethnakaran Pulikkoonattu on phase noise profiles, he pointed me to his write up on Oscillator Phase Noise and Sampling Clock Jitter . In this post, we will discuss the math behind integrating the phase noise power spectral density (in dBc/Hz) to find the root mean square jitter value. In the post on ADC SNR with clock jitter we discussed about the effect of having incorrect sampling clock on the signal to quantization noise ratio. The error in sampling clock is caused by the variations on the timing of the signal. Jitter causes the the zero crossing of the clock to vary slightly from the desired location. Figure : Clock with jitter The clock with jitter can be expressed as, $v_j(t) = v\(t + j\(t\)\)$, $j(t) = \frac{\phi\(t\)}{2\pi f_c}$ is the jitter in time $\phi(t)$ is the equivalent phase jitter expressed in radians and $f_c$is the frequency of the clock. The mean square value of phase jitter is computed as, $L\(f\)$ is the ratio of noise power in 1Hz bandwidth (BW) at offset from carrier to carrier signal power (recall from the post on oscillator phase noise) The root mean square (rms) value of the jitter is expressed as, $J_{rms,phase}={\sqrt{E\[\phi^2\(t\)\]}}$ in radians and $J_{rms,time}=\frac{\sqrt{E\[\phi^2\(t\)\]}}{2\pi f_c}$ in seconds. Computing jitter from phase noise profile The mean square value of jitter can be found by integrating the phase noise profile $L\(f\)$ over the frequency range. The phase noise profile $L\(f\)$ is typically specified in decibels, ”decibels below carrier per hertz or dBc/Hz at a frequency $f$ away from the carrier frequency $f_c$“. Phase noise profile From the post on Oscillator phase noise, we know that $\mathcal{L}\(f\)\sim \frac{1}{f^2}$ (or equivalently -20dB per decade). However due to other noise sources there are regions in the phase noise spectrum where $\mathcal{L}\(f\)\sim \begin{array}\frac{1}{f^3}, & \frac{1}{f} \end{array}$ and so on, followed by a region where the phase noise power spectrum does not change with frequency. Typical phase noise profile follows a piece wise linear curve (with varying values of slope) between two frequency points as shown Figure : Power spectral density of oscillator phase noise spectrum The rms phase jitter can be computed from the phase noise profile as, $\begin{array}{lll}J_{rms,phase}&=&{\sqrt{E\[\phi^2\(t\)\]}}\\&=&\sqrt{\int_{-\infty}^{\infty}S_{\phi}\(f\)df}=\sqrt{2\int_{0}^{\infty}L\(f\)df}\\&=& \sqrt{2\(A_{12} + A_{23} + A_{34} + A_{45}\)}\end Area in the region A12 With the piece-wise linear assumption, the line connecting the regions $[f_1,f_2)$ can be expressed as, $L_{12}(f)=m\log(f)+c$ , $m=\frac{L(f_2) - L(f_1)}{\log(f_2) - \log(f_1)}$ is the slope and $c=L(f_1)-m\log(f_1)$ the constant. $L(f)=\underbrace{m}_{slope}\log(f) +\underbrace{L(f_1)-m\log(f_1)}_{constant}$. The area is found by integrating over the frequencies from $[f_1,f_2)$, i.e. Note : Knowing that $10^{\log(x)}=x$, the term a) $10^{\frac{-m\log(f_1)}{10}}=10^{\log\(f_1^{\frac{-m}{10}}\)}=f_1^{\frac{-m}{10}}$ and b) $10^{\frac{m\log(f)}{10}}=10^{\log\(f^{\frac{m}{10}}\)}=f^{\frac{m}{10}}$. The integration simplifies to, When $m=-10$, using L’Hopital’s Rule and the knowledge that $\frac{d}{dx}a^x=a^x\ln\(x\)$, $\begin{array}{lll}\frac{f_2^{\(\frac{m}{10}+1\)}-f_1^{\(\frac{m}{10}+1\)}}{\(\frac{m}{10}+1\)}&=&\ln(f_2)f_2^{\(\frac{m}{10}+1\)}-\ln(f_1)f_1^{\(\frac{m}{10}+1\)}\\&=&\ln(f_2)-\ln(f1),& m=-10\end Summarizing the area under the region A12 is, $\Large{A_{12}=10^{\frac{L(f_1)}{10}}f_1^{\frac{-m}{10}}\[\begin{array}{ll}\frac{f_2^{\(\frac{m}{10}+1\)}-f_1^{\(\frac{m}{10}+1\)}}{\(\frac{m}{10}+1\)},&me-10\\\\\\\ln(f_2)-\ln(f1),& m=-10\end{array} Using the above approach, the area under each region can be found to compute the integrated power. Consider a carrier of frequency 10MHz having an example phase noise profile having power spectral density (dBc/Hz vs frequency as follows). Figure : Example phase noise profile Click here to download Matlab/Octave script for computing the root mean square jitter (in radians and seconds) from the phase noise power spectral density profile. For this example, can be seen that the integrated root mean square (rms) jitter in radians is 0.018052 and corresponding to 287.32 pico seconds for a clock of 10MHz. a) Pulikkoonattu, R. (June 12, 2007), Oscillator Phase Noise and Sampling Clock Jitter, Tech Note, Bangalore, India: ST Microelectronics, retrieved March 29, 2012 b) Phase Noise (dBc/Hz) to Phase Jitter (ps RMS) Calculator from JitterTime.com c) List of Integrals from wikipedia d) L’Hopital’s Rule in wikipedia e) Derivative – entry in wikipedia D id you like this article? Make sure that you do not miss a new article by subscribing to RSS feed OR subscribing to e-mail newsletter. Note: Subscribing via e-mail entitles you to download the free e-Book on BER of BPSK/QPSK/16QAM/16PSK in AWGN. { 0 comments… add one now } Leave a Comment { 3 trackbacks }
{"url":"http://www.dsplog.com/2012/06/22/phase-noise-psd-to-jitter/","timestamp":"2014-04-18T08:39:58Z","content_type":null,"content_length":"70293","record_id":"<urn:uuid:b3905bda-83a2-4149-8bb2-5574e8d998b0>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: anyone can? Find the Taylor series expansion of a function of f(z)=z exp(2z) about z = -1 and find the region of convergence., • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51698845e4b015a79c11054c","timestamp":"2014-04-20T21:26:59Z","content_type":null,"content_length":"75740","record_id":"<urn:uuid:b4cf1336-956f-482e-9ed2-6f558cc9ffad>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
8085 Assembly language programming code for beginners Write program in Assembly language which accepts two decimal digits from its user, prints a new line on the display screen, and if the first digit is larger than the second, displays the average of the two digits (to the nearest whole number) and otherwise displays the square root of the product of the two digits (to the nearest whole number). Then the program should return to DOS control. Thus, if the user types then the program displays and if the user types then the program displays Hint: Think of easier way for square root calculation i have just checke this site. I found it very useful. great job,buddy!!!! difference between addition of 8-bit numbers and 16-bit numbers sir i need a exponential series program in assembly language. i m looking forward for ur replay because i need it urgently i have to submitted on firday. Now i am learning assembly language programming.Would you mind providing me the details about this programming that how to write this? When we declare a variable like var dw 92F3h its OK, but if we write it like var dw B2F3, it gives an overflow error. Now to overcome this, we need to give a 0 before it. var dw 0B2F3h. Now this works. My question is why in certain cases we have to give the 0 ? 39.A block starts at 0C20H and ends where 4 consecutive 00H are followed by FFH. Write a program to estimate the size of the block and store the count in BCD in COUNT. Write a program to fill 6410 locations with nos. as shown below: 1st 4 locations with 0FH 2nd 4 locations with 0EH 3rd 4 locations with 0DH last 4 locations with 00H and repeat the same till all locations are fully filled. 58. Two blocks of bytes exist in 0C20H and 0C40H. When the interrupt 6. 5 key is pressed, for each pressing the larger of 1st bytes of each block is to be written in location 0C60H. Each block consists of 10 bytes and hence after 10 interrupts a new block shall exist in OC60H with larger bytes of the two blocks. Disable the interrupt after the new block formation.
{"url":"http://www.go4expert.com/articles/8085-assembly-language-programming-code-t302/page4","timestamp":"2014-04-21T04:53:35Z","content_type":null,"content_length":"52764","record_id":"<urn:uuid:ce094460-f38b-4447-8417-3e7fd7335d1c>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
extract variables in formula from a data frame up vote 11 down vote favorite I have a formula that contains some terms and a data frame (the output of an earlier model.frame() call) that contains all of those terms and some more. I want the subset of the model frame that contains only the variables that appear in the formula. ff <- log(Reaction) ~ log(1+Days) + x + y fr <- data.frame(`log(Reaction)`=1:4, The desired result is fr minus the z column (fr[,1:4] is cheating -- I need a programmatic solution ...) Some strategies that don't work: ## Error in `[.data.frame`(fr, all.vars(ff)) : undefined columns selected (because all.vars() gets "Reaction", not log("Reaction")) stripwhite <- function(x) gsub("(^ +| +$)","",x) vars <- stripwhite(unlist(strsplit(as.character(ff)[-1],"\\+"))) ## Error in `[.data.frame`(fr, vars) : undefined columns selected (because splitting on + spuriously splits the log(1+Days) term). I've been thinking about walking down the parse tree of the formula: ff[[3]] ## log(1 + Days) + x + y ff[[3]][[1]] ## `+` ff[[3]][[2]] ## log(1 + Days) + x but I haven't got a solution put together, and it seems like I'm going down a rabbit hole. Ideas? r formula Seems like the main variable that's causing you problems is log(1+Days). Do you have to call it that or could you just use a different name? – Thomas Aug 2 '13 at 13:18 1 What about attr(terms.formula(ff), "term.labels")? – Roman Luštrik Aug 2 '13 at 13:19 1 I'm trying to come up with a general solution. Therefore, anything that could show up in a model.frame() generated from a legal formula has to be handled. That's part of the problem. – Ben Bolker Aug 2 '13 at 13:19 1 Or rownames(attr(terms.formula(ff), "factors")) to get the DV as well. – Thomas Aug 2 '13 at 13:21 1 ?formula lists terms.formula. :) – Roman Luštrik Aug 2 '13 at 13:22 show 4 more comments 1 Answer active oldest votes This should work: > fr[gsub(" ","",rownames(attr(terms.formula(ff), "factors")))] log(Reaction) log(1+Days) x y And props to Roman Luštrik for pointing me in the right direction. up vote 4 down vote Edit: Looks like you could pull it out off the "variables" attribute as well: fr[gsub(" ","",attr(terms(ff),"variables")[-1])] Edit 2: Found first problem case, involving I() or offset(): ff <- I(log(Reaction)) ~ I(log(1+Days)) + x + y fr[gsub(" ","",attr(terms(ff),"variables")[-1])] Those would be pretty easy to correct with regex, though. BUT, if you had situations like in the question where a variable is called, e.g., log(x) and is used in a formula alongside something like I(log(y)) for variable y, this will get really messy. thanks. I can't accept this for another few minutes. the gsub(...) won't be necessary in my case, I think -- the mismatch in white space won't be there. I introduced it accidentally in setting up the example. – Ben Bolker Aug 2 '13 at 13:24 @BenBolker Yea, it would probably be good to test this on some other formula constructions to see if it's general... – Thomas Aug 2 '13 at 13:27 1 but your original answer, rownames(attr(terms.formula(ff), "factors"))), seems to work fine on your problem case. – Ben Bolker Aug 2 '13 at 13:32 @BenBolker Hmm...doesn't work for me... – Thomas Aug 2 '13 at 13:35 add comment Not the answer you're looking for? Browse other questions tagged r formula or ask your own question.
{"url":"http://stackoverflow.com/questions/18017765/extract-variables-in-formula-from-a-data-frame?answertab=oldest","timestamp":"2014-04-19T00:00:42Z","content_type":null,"content_length":"75557","record_id":"<urn:uuid:ca49f3eb-94e3-4257-b978-7f7f291c2a77>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00493-ip-10-147-4-33.ec2.internal.warc.gz"}