content: string (lengths 86 – 994k)
meta: string (lengths 288 – 619)
MathGroup Archive: June 2008 [00674] Re: Solving a Sum • To: mathgroup at smc.vnet.net • Subject: [mg89939] Re: Solving a Sum • From: Igor <pischek at gmx.net> • Date: Wed, 25 Jun 2008 06:23:24 -0400 (EDT) Thanks for your explanatory words. However, I figured out that there are infinitely many solutions, since the equation can be rearranged: in order for it to hold, Subscript[c,t] can for example equal Subscript[s,t]/Sum[Subscript[s,t]*q^(-t),{t,1,T}]. But as this shows, the problem has infinitely many solutions for differing values of the variables Subscript[s,t]. However, if I enter this in Mathematica, the expression is not simplified to 1 when the variables are kept symbolic; only when I substitute numerical values is it evaluated to 1. Why is that? By the way, this is not homework for university. I am just trying to understand how to work with Mathematica because I'm interested in it. The solution to the problem is in the script, and we are not even supposed to solve this on our own (because for students of business administration this is probably not a decisive skill :) ). Best regards,
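For reference, here is the algebra behind the claim above, written with c_t and s_t standing for Subscript[c,t] and Subscript[s,t] (a worked restatement added for clarity, not part of the original message):

\[
c_t = \frac{s_t}{\sum_{k=1}^{T} s_k\, q^{-k}}
\quad\Longrightarrow\quad
\sum_{t=1}^{T} c_t\, q^{-t}
= \frac{\sum_{t=1}^{T} s_t\, q^{-t}}{\sum_{k=1}^{T} s_k\, q^{-k}} = 1 .
\]

Whether Mathematica performs this cancellation automatically depends on whether the numerator and denominator end up as syntactically identical Sum expressions after the substitution; with purely symbolic Subscript[s,t] the two sums must match exactly for the ratio to collapse to 1.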
{"url":"http://forums.wolfram.com/mathgroup/archive/2008/Jun/msg00674.html","timestamp":"2014-04-16T16:03:34Z","content_type":null,"content_length":"25916","record_id":"<urn:uuid:89aaeae1-a4c5-4dfe-aa67-7e2a608f0851>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
San Bruno Statistics Tutor ...Thanks. I help at all levels of math including the test preparation for the quantitative part of the GRE. I had several GRE students and they all exceeded their initial expectations in the math portion. "Excellent tutor improved my GRE score!" - Megan G. 41 Subjects: including statistics, calculus, geometry, algebra 1 ...I'm flexible in my teaching style, and will work with parents, schools and students to determine the format of tutoring most likely to bring them success. In all cases, however, I emphasise risk taking, self-sufficency and critical thinking when approaching problems. My aim is to help students to become independent learners who eventually won't need my help. 11 Subjects: including statistics, chemistry, calculus, physics ...During these 6 years I have had students from different backgrounds and age groups. I started as a tutor for middle and high school students, and later tutored students at The Glendale Community College. Upon successful tutoring experience I received the opportunity to work with undergraduate students at University of California, Irvine. 29 Subjects: including statistics, reading, calculus, geometry ...Gray's research in air pollution includes the use of meteorological and dispersion computer models which involve numerical solutions of large linear (and non-linear) systems. Dr. Gray has also worked with (and helped in the development of) receptor models such as the Chemical Mass Balance Model and other related multivariate factor-analysis techniques. 13 Subjects: including statistics, calculus, physics, algebra 2 ...Yet I like to make the learning environment comfortable to the students, so I usually take breaks every now and then and have relaxed talks with the students. I have been teaching 1-6th math for the past three summers in a local non-profit Summer Day Camp in Seattle (Light and Love Home in Seatt... 12 Subjects: including statistics, calculus, geometry, Chinese
{"url":"http://www.purplemath.com/san_bruno_statistics_tutors.php","timestamp":"2014-04-19T10:15:56Z","content_type":null,"content_length":"24244","record_id":"<urn:uuid:39ee16f9-24e2-4801-b860-a92e3d4f1349>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
category-extras-0.53.6: Various modules and constructs inspired by category theory Portability non-portable (either class-associated types or MPTCs with fundeps) Control.Category.Object Stability experimental Maintainer Edward Kmett <ekmett@gmail.com> This module declares the HasTerminalObject and HasInitialObject classes. These are defined in terms of class-associated types rather than functional dependencies because most of the time when you are manipulating a category you don't care about them; this gets them out of the signature of most functions that use the category. Both of these are special cases of the idea of a (co)limit. class Category k => HasTerminalObject k t | k -> t where The Category k has a terminal object Terminal k such that for all objects a in k, there exists a unique morphism from a to Terminal k. class Category k => HasInitialObject k i | k -> i where The Category k has an initial (coterminal) object Initial k such that for all objects a in k, there exists a unique morphism from Initial k to a.
{"url":"http://comonad.com/haskell/category-extras/dist/doc/html/category-extras/Control-Category-Object.html","timestamp":"2014-04-16T20:15:55Z","content_type":null,"content_length":"5469","record_id":"<urn:uuid:eb9ca2c0-f021-4f1f-b1cd-9dc5e0bd4f4b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Aerosol Light Scattering Measurements at Cape Grim Tasmania during ACE-1 Mark J. Rood, Christian M. Carrico, Rajendra Shrestha, Mathew J. Skific, Univerisity of Illinois at Urbana-Champaign, John A. Ogren, NOAA-CMDL As part of ACE-1, total-light scattering and backscattering coefficients (sigma-tsp and sigma-bsp) of ambient aerosol were measured at Cape Grim Baseline Air Pollution Station. Such measurements will provide important parameters for this unperturbed marine site for use in chemical/radiative transfer models that predict the direct radiative effect of aerosols on climate. The controlled relative humidity (RH) nephelometry system (humidograph) measured the dependence of sigma-tsp and sigma-bsp upon RH, particle size (dp), wavelength of light (lambda), and direction of light scattered. The humidograph employed two TSI Model 3563 nephelometers, a teflon membrane humidification system, and impactors with 10 um and 1 um size cuts. Semi-continuous operation produced a database of over 500 RH scans of sigma-tsp and sigma-bsp. The optical parameters of interest are the increase in sigma-tsp and sigma-bsp due to hygroscopic growth (f(RH)), the backscatter ratio (b), the relative contributions of coarse (dp < 10 um) and fine particles (dp < 1um) to sigma-tsp and sigma-bsp, and the Angstrom exponent (Å). Results for f(RH), b, Å, and fine to coarse light scattering ratios for the aerosol are presently under analysis. On average, sigma-tsp and sigma-bsp are 4.9e-6 +/- 2.2e-6 m-1 and 5.6e-7 +/- 2.4e-7 m-1 respectively at lambda = 550 nm and dp < 1 um. Low signal-to-noise ratios contribute to significant uncertainties especially in sigma-bsp during particularly clean time periods as sigma-tsp and sigma-bsp at Cape Grim are an order of magnitude lower than has been measured at an anthropogenically perturbed continental site (Bondville, IL). Preliminary results indicate mean f(RH) = 2.47 +/- 0.57 (dp < 1 um, total scatter) with 25% variation with lambda, f(RH) = 2.04 +/- 0.22 (dp < 10 um, total scatter) with 5% variation with lambda, and f(RH) = 1.6 +/- 0.2 (dp < 10 um, backscatter) with 11% variation with lambda. In comparison, this research group has measured f(RH) = 1.5 +/- 0.8 (dp < 1 um, total scatter) with 9% variation with lambda at Bondville, IL, while Charlson et al. (1992) estimated a globally averaged f(RH) = 1.7.
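For readers unfamiliar with the notation, the hygroscopic growth factor reported above is conventionally a ratio of scattering coefficients measured at a humidified condition and at a low reference humidity, f(RH) = sigma-sp(RH high) / sigma-sp(RH reference); the specific reference humidities are not stated in this abstract (values near 80-85% and 40% RH are typical for humidograph systems, but that is an assumption here). A value of f(RH) = 2.47 therefore means that humidification enhances the light scattered by the sub-micrometer aerosol roughly 2.5-fold relative to the reference condition.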
{"url":"http://saga.pmel.noaa.gov/Field/ace1/agu/rood.html","timestamp":"2014-04-21T02:02:07Z","content_type":null,"content_length":"2800","record_id":"<urn:uuid:b3435ad7-cf1e-415c-b277-704933b06ccf>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: PROBLEMS FOR REAL GROUPS James Arthur* At the suggestion of Bill Casselman, I have tried to compile a set of interesting prob- lems for real groups. I have not made any attempt to represent the field as a whole. Some of the problems are in fact quite idiosyncratic. They all come from real harmonic analysis, and are generally motivated by global questions in automorphic forms. The list was put together rather quickly, and could certainly stand further reflection. I expect that I have overlooked some points, and have perhaps misstated others. The problems should be treated as guidelines, to be reshaped as necessary in any attempts to solve them. Unless otherwise indicated, G will denote a connected, reductive algebraic group over R in the discussion below. §1 Endoscopic transfer §2 Endoscopic character identities §3 Orthogonality relations §4 Weighted orbital integrals §5 Intertwining operators and residues §6 Twisted groups §7 Traces of intertwining operators
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/329/0087204.html","timestamp":"2014-04-21T10:08:32Z","content_type":null,"content_length":"8040","record_id":"<urn:uuid:f40eebb0-6cf1-4290-8d68-7237c7a47563>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Weekly Problem 40 - 2011 In the star shown here the sum of the four numbers in any "line" is the same for each of the five "lines". The five missing numbers are 9,10,11,12,13. Which number is represented by R? If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas. This problem is taken from the UKMT Mathematical Challenges.
{"url":"http://nrich.maths.org/2206/index?nomenu=1","timestamp":"2014-04-18T15:43:42Z","content_type":null,"content_length":"3671","record_id":"<urn:uuid:2ea71bc8-ee71-45a4-9f9f-1ae503795219>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Getting started with the Heritage Health Price competition April 8, 2011 By Allan Engelhardt The US$ 3 million Heritage Health Price competition is on so we take a look at how to get started using the R statistical computing and analysis platform. We do not have the full set of data yet, so this is a simple warm-up session to predict the days in hospital in year 2 based on the year 1 data. Obviously you need to have R installed, and you should also have signed up for the competition (be sure to read the terms carefully) and downloaded and extracted the release 1 data file. Data preparation Let’s load the data into R and do some basic housekeeping: ## example001.R - simple benchmarks for the HHP ## Copyright © 2011 CYBAEA Limited - http://www.cybaea.net/ #### DATA PREPARATION ## Members members <- read.csv(file = "HHP_release1/Members_Y1.csv", colClasses = rep("factor", 3), comment.char = "") ## Claims claims.Y1 <- read.csv(file = "HHP_release1/Claims_Y1.csv", colClasses = c( rep("factor", 7), "integer", # paydelay "character", # LengthOfStay "character", # dsfs "factor", # PrimaryConditionGroup "character" # CharlsonIndex comment.char = "") ## Utility function make.numeric <- function (cv, FUN = mean) { ### make a character vector numeric by splitting on '-' " ", perl = TRUE), " ", fixed = TRUE), function (x) FUN(as.numeric(x))) ## Length of stay as days z <- make.numeric(claims.Y1$LengthOfStay) z.week <- grepl("week", claims.Y1$LengthOfStay, fixed = TRUE) z[z.week] <- z[z.week] * 7 # Weeks are 7 days z[is.nan(z)] <- 0 claims.Y1$LengthOfStay.days <- z los.levels <- c("", "1 day", sprintf("%d days", 2:6), "1- 2 weeks", "2- 4 weeks", "4- 8 weeks", "8-12 weeks", "12-26 weeks", "26+ weeks") stopifnot(all(claims.Y1$LengthOfStay %in% los.levels)) claims.Y1$LengthOfStay <- factor(claims.Y1$LengthOfStay, levels = los.levels, labels = c("0 days", los.levels[-1]), ordered = TRUE) ## Months since first claim claims.Y1$dsfs.months <- make.numeric(claims.Y1$dsfs) ## dsfs is an ordered factor and gives the ordering of the claims dsfs.levels <- c("0- 1 month", sprintf("%d-%2d months", 1:11, 2:12)) claims.Y1$dsfs <- factor(claims.Y1$dsfs, levels = dsfs.levels, ordered = TRUE) ## Index as numeric claims.Y1$CharlsonIndex.numeric <- make.numeric(claims.Y1$CharlsonIndex) claims.Y1$CharlsonIndex <- factor(claims.Y1$CharlsonIndex, ordered = TRUE) ## Days in hospital dih.Y2 <- read.csv(file = "HHP_release1/DayInHospital_Y2.csv", colClasses = c("factor", "integer"), comment.char = "") names(dih.Y2)[1] <- "MemberID" # Fix broken file save(members, claims.Y1, dih.Y2, file = "HHPR1.RData") We will need a function to score our predictions p against the actual values a. The formula is on the evaluation page and we implement it as: ## example001.R - simple benchmarks for the HHP ## Copyright © 2011 CYBAEA Limited - http://www.cybaea.net/ #### FUNCTION TO CALCULATE SCORE HPPScore <- function (p, a) { ### Scorng function after ### http://www.heritagehealthprize.com/c/hhp/Details/Evaluation ### Base 10 log from http://www.heritagehealthprize.com/forums/default.aspx?g=posts&m=2226#post2226 sqrt(mean((log(1+p, 10) - log(1+a, 10))^2)) The simplest benchmarks The simplest models don’t really model at all: they just use the average and are simple benchmarks. 
## example001.R - simple benchmarks for the HHP
## Copyright © 2011 CYBAEA Limited - http://www.cybaea.net/

y <- dih.Y2$DaysInHospital_Y2 # Actual

p <- rep(mean(y), NROW(dih.Y2))
cat(sprintf("Score using mean : %8.6f\n", HPPScore(p, y)))
# Score using mean : 0.278725

p <- rep(median(y), NROW(dih.Y2))
cat(sprintf("Score using median: %8.6f\n", HPPScore(p, y)))
# Score using median: 0.267969

Simple single-variable linear models

OK, a model that doesn't use past data isn't much of a model, so let's improve on that:

## example001.R - simple benchmarks for the HHP
## Copyright © 2011 CYBAEA Limited - http://www.cybaea.net/

library("reshape2")   # provides dcast(), used below

vars <- dcast(claims.Y1, MemberID ~ ., sum, value_var = "LengthOfStay.days")
names(vars)[2] <- "LengthOfStay"
data <- merge(vars, dih.Y2)

model <- lm(DaysInHospital_Y2 ~ LengthOfStay, data = data)
p <- predict(model)
cat(sprintf("Score using lm(LengthOfStay): %8.6f\n", HPPScore(p, y)))
# Score using lm(LengthOfStay): 0.279062

model <- glm(DaysInHospital_Y2 ~ LengthOfStay, family = quasipoisson(), data = data)
p <- predict(model, type="response")
cat(sprintf("Score using glm(LengthOfStay): %8.6f\n", HPPScore(p, y)))
# Score using glm(LengthOfStay): 0.278914

Let the competition begin.
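One more constant baseline can be useful for calibration before the full data arrive (an added sketch, not part of the original post; it reuses y, dih.Y2 and HPPScore defined above). Because the score is a root-mean-square error on log10(1 + x), the best possible constant prediction is the back-transformed mean of log10(1 + y) rather than the plain mean or median:

## Extra constant baselines (sketch)
p <- rep(0, NROW(dih.Y2))                  # all-zeros benchmark
cat(sprintf("Score using zeros : %8.6f\n", HPPScore(p, y)))

best.const <- 10^mean(log10(1 + y)) - 1    # minimiser of the log-scale RMSE
p <- rep(best.const, NROW(dih.Y2))
cat(sprintf("Score using best constant: %8.6f\n", HPPScore(p, y)))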
{"url":"http://www.r-bloggers.com/getting-started-with-the-heritage-health-price-competition/","timestamp":"2014-04-17T10:04:48Z","content_type":null,"content_length":"47770","record_id":"<urn:uuid:f36105cf-c1dc-4234-bdb7-2fad5722d6b7>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Minor >> >> >> >> Mathematics Minor Bachelor of Science in Mathematics A Bachelor of Science in Mathematics encompasses the breadth of Mathematics and its applications in a small, friendly, and supportive setting. Courses in differential equations, analysis, calculus, discrete mathematics, and linear and abstract algebra combine a theoretical and applied understanding of these areas. Additional courses in Physics and Computer Science explore how Mathematics can be used to solve real-world problems. Programs in Mathematics - The programs in Mathematics are designed to prepare the student for further study in mathematics, education, or other subjects or for employment in a variety of fields. Mathematics is the foundation upon which all other technical fields rest, and as such, is the perfect choice for students who have a profound mathematical curiosity, and a desire to apply their problem solving skills. The soaring demand for employees with specialized mathematical expertise allows graduates to follow a wide variety of career paths. Many work in fields that, while not specifically described as mathematical, require clear reasoning, logical thought, and a love and understanding of mathematics. Persons with degrees in mathematics may be found pursuing such diverse careers as actuarial science, education, consulting, systems analysis and quality control, and jobs in industry or government. Others go on to graduate work in mathematics or other mathematics-related fields, such as Computer Science. The B.S. degree candidate will, through the nature of the mathematics electives and the opportunities offered by other programs, have a scientifically and technically oriented program which allows entry into many fields of science, engineering, and technology as well as education and business. Through the second major in Mathematics and the minor in Mathematics, students in other fields may acquire a substantial background and competence in Mathematics. Our professors are professionals with a sincere commitment to teaching. The Mathematics Department at SPSU boasts a faculty that includes a National Science Foundation grant recipient, four University System of Georgia Teaching/Learning Grant recipients, as well as several awards for outstanding teaching by the Student Government Association and the SPSU faculty. The Faculty: Shangrong Deng, Associate Professor Meighan I. Dillon, Professor Steven R. Edwards, Professor Joseph N. Fadyn, Professor Joel C. Fowler, Associate Professor William Griffiths, Assistant Professor Sarah Holliday, Assistant Professor Andrew G. McMorran, Associate Professor and Department Chair Jack R. Pace, Associate Professor Nicolae Pascu, Assistant Professor Laura Ritter, Assistant Professor Jennifer Vandenbussche, Assistant Professor Long L. Wang, Associate Professor Taixi Xu, Associate Professor Advising for Pre-Engineering Program - The Mathematics Program conducts a program of advisement for freshmen and sophomores who wish to begin college locally, but plan to transfer to a full engineering program later. Students who wish to participate in this program should enter as mathematics majors. The advisors in the program will guide the students through an organized course of study which will provide a strong preparation in mathematics and science for the study of engineering and which will transfer with minimum loss of credit or time to most engineering programs. The mathematics portion of the major under the B.S. 
degree consists of three components: Required Courses, Mathematics Electives, and Guided Electives. Although the Required Courses provide the bulk of the mathematics in the degree, they also provide a framework for other series of Mathematics courses to be included under Mathematics Electives and Guided Electives. Students planning to attend graduate school in Mathematics are urged to select these courses carefully in consultation with an advisor. Students planning to seek employment in business or industry should consider a minor in a related field, such as computer science. A computer science minor can be completed within the Guided Electives of the Mathematics degree. Mathematics Bachelor of Science Requirements
{"url":"http://www.spsu.edu/undergradcatalog/programscourses/undergradprogramsofstudy/mathminor.htm","timestamp":"2014-04-19T14:31:04Z","content_type":null,"content_length":"22003","record_id":"<urn:uuid:7bd785b1-382b-4bfe-9bb3-6d11e8249c0b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
why doesn't this work? 10-29-2011 #1 why doesn't this work? this is my 1st attempt with pointers and I can't get this program to tell if there not real or to produce roots. Any help would be great... cuz I know its not 0.000 0.000 thanks #include <stdio.h> #include <stdlib.h> #include <iostream> #include <cmath> using namespace std; void readabc(double *a, double *b, double *c, char *real); void quadroots(double *a,double *b,double *c,double *root1,double *root2, char *real); int main() double a, b, c, root1, root2; char real; readabc( &a, &b, &c, &real); quadroots(&a, &b, &c, &root1, &root2, &real); real = b*b-4*a*c<0; return 0; void readabc(double *a, double *b, double *c,char *real) cout << "Enter a,b,c for a quad: " << endl; cin >> *a >> *b >> *c; void quadroots(double *a,double *b,double *c,double *root1,double *root2, char *real) if (!*real) printf("roots are not real\n"); printf ("roots are %lf %lf\n", *root1, *root2); }//check for real roots and calculate root1 and root2 AKA : total newbie ....and totally frustrated thanks for any help given..... I feel like I"m going to fail this class....blah! on line 15 you define 'char real;' line 35: 'if (!*real)' breakdown, real is a pointer to line 15. *real, returns the value pointed to by real. ! <value> if that value is 0, it will be true, else 1-255 will return false. You never set the value of real, only the pointer to it. Line 19 should produce a compiler warning, You should read it. Line 16 - 18 are not in the proper logic order for the math your trying to use. and you might have un-used includes Try to help all less knowledgeable than yourself, within the limits provided by time, complexity and tolerance. - Nor Use std::cout for output instead of printf. There is no need for any type modifiers, just output it as you would any text. printf ("roots are %lf %lf\n", *root1, *root2); should be std::cout << "roots are " << *root1 << *root2 << std::endl; For information on how to enable C++11 on your compiler, look here. よく聞くがいい!私は天才だからね! ^_^ ok been working on it and now I just can't get it to give the correct roots.... can anyone help me? #include <stdio.h> #include <stdlib.h> #include <iostream> #include <cmath> using namespace std; void readabc(double *a, double *b, double *c, char *real); void quadroots(double a,double b,double c,double *root1,double *root2, char *real); int main() double a, b, c, root1, root2; char real; readabc( &a, &b, &c, &real); quadroots(a, b, c, &root1, &root2, &real); return 0; void readabc(double *a, double *b, double *c,char *real) cout << "Enter a,b,c for a quad: " << endl; cin >> *a >> *b >> *c; void quadroots(double a,double b,double c,double *root1,double *root2, char *real) if (b*b-4*a*c < 0) printf("roots are not real\n"); printf ("roots are %lf %lf\n", *root1, *root2); }//check for real roots and calculate root1 and root2 AKA : total newbie ....and totally frustrated thanks for any help given..... I feel like I"m going to fail this class....blah! ok I got it to work better but it is acting funny with 0 0 1 when it should say "Sorry the roots are not real!!" anyone know why it does that and how I can fix that? 
#include <stdio.h> #include <stdlib.h> #include <iostream> #include <cmath> using namespace std; void readabc(double *a, double *b, double *c, char *real); void quadroots(double a,double b,double c,double *root1,double *root2, char *real); int main() double a, b, c, root1, root2; char real; readabc( &a, &b, &c, &real); quadroots(a, b, c, &root1, &root2, &real); return 0; void readabc(double *a, double *b, double *c,char *real) cout << "Enter a,b,c for your quadratic you need to calculate:" << endl; cout << "(note:you must type a enter b enter c enter) " << endl; cin >> *a >> *b >> *c; void quadroots(double a,double b,double c,double *root1,double *root2, char *real) double realnum = b*b-4*a*c; if (realnum < 0) printf("Sorry the roots are not real!!\n"); *real = 0; *real= 1; printf ("roots are %lf %lf\n", *root1, *root2); }//check for real roots and calculate root1 and root2 AKA : total newbie ....and totally frustrated thanks for any help given..... I feel like I"m going to fail this class....blah! What result were you expecting to get when dividing by zero? You tend to need to avoid doing that. My homepage Advice: Take only as directed - If symptoms persist, please see your debugger Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong" Why have you not replaced the printfs? For information on how to enable C++11 on your compiler, look here. よく聞くがいい!私は天才だからね! ^_^ 10-29-2011 #2 10-29-2011 #3 10-29-2011 #4 10-29-2011 #5 10-29-2011 #6 10-30-2011 #7
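For readers landing on this thread later, here is a minimal corrected sketch of the quadratic-root routine being discussed (not the original poster's assignment code; it simply folds in the advice from the replies: guard the a = 0 and negative-discriminant cases, and use std::cout rather than printf):

#include <cmath>
#include <iostream>

// Prints the real roots of a*x*x + b*x + c = 0, or a message if there are none.
void quadroots(double a, double b, double c)
{
    if (a == 0.0) {                       // not a quadratic; avoid dividing by zero
        std::cout << "a must be non-zero\n";
        return;
    }
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) {
        std::cout << "Sorry, the roots are not real!\n";
        return;
    }
    double root1 = (-b + std::sqrt(disc)) / (2.0 * a);
    double root2 = (-b - std::sqrt(disc)) / (2.0 * a);
    std::cout << "roots are " << root1 << " " << root2 << "\n";
}

int main()
{
    double a, b, c;
    std::cout << "Enter a, b, c for your quadratic: ";
    std::cin >> a >> b >> c;
    quadroots(a, b, c);
    return 0;
}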
{"url":"http://cboard.cprogramming.com/cplusplus-programming/142654-why-doesn%27t-work.html","timestamp":"2014-04-16T20:44:58Z","content_type":null,"content_length":"73439","record_id":"<urn:uuid:afa798e8-c00b-4af3-8512-74e55ccfd94a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
First-Order Derivative I'm not sure if i'm doing this right, it's been awhile since i've had to deal with calculus! So it says calculate the first-order derivative with respect to x: a. y = 3x^2 Ans: F(x) = 9X^2 b. y = x^3 Ans: F(x) 3x^2 c. y=mx + b Ans: Not sure on this one .. d. y = 1/x Ans: Also not sure e. y = 37x^4 Ans: F(x) = 148x^3 f. y = x^a/b ANS: F(x) = a/b x^a/b - 1 Okay. maybe i'm completely wrong with some of these! any help is greatly appreciated!
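For comparison, here is what the power rule d/dx[x^n] = n x^(n-1) gives for each of the functions above (worked answers added for reference, not part of the original post):

\[
\begin{aligned}
&\text{a. } y = 3x^2 &&\Rightarrow\ y' = 6x\\
&\text{b. } y = x^3 &&\Rightarrow\ y' = 3x^2\\
&\text{c. } y = mx + b &&\Rightarrow\ y' = m\\
&\text{d. } y = 1/x = x^{-1} &&\Rightarrow\ y' = -x^{-2} = -1/x^2\\
&\text{e. } y = 37x^4 &&\Rightarrow\ y' = 148x^3\\
&\text{f. } y = x^{a/b} &&\Rightarrow\ y' = \tfrac{a}{b}\,x^{a/b-1}
\end{aligned}
\]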
{"url":"http://mathhelpforum.com/calculus/157632-first-order-derivative.html","timestamp":"2014-04-17T08:44:56Z","content_type":null,"content_length":"42712","record_id":"<urn:uuid:608dd2f5-5639-4e0d-8a74-bdb07cfe1eb2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Eastchester Prealgebra Tutor ...Most high school students have already been exposed to all of the math covered in the SAT. The difficulty comes from 2 sources, time limits and the unusual form in which the questions are posed. One of the keys to excelling on the SAT is getting comfortable with the way Math topics are tested, and knowing how to approach the different questions. 10 Subjects: including prealgebra, geometry, algebra 1, precalculus ...I have taught at the local college. I have tutored at least 10 students over the last 5 years in geometry and have had very good results on the geometry regents. I have been teaching algebra and advanced algebra and trigonometry over the last 10 years. 20 Subjects: including prealgebra, geometry, algebra 1, GRE ...Use Jamie if you want an excellent tutor. I love tutoring! Working with a student and creating a customized learning curriculum makes every new student a challenge and opportunity. 16 Subjects: including prealgebra, geometry, finance, algebra 1 ...I try to guide students to understanding the material by trying to ground problems in real life situations: you can see whether an answer makes sense based on some sort of intuition, rather than just going through the algorithm and hoping you don't mess up. I'm a big fan of unit analysis, where ... 18 Subjects: including prealgebra, calculus, trigonometry, SAT math ...I am honest and encouraging in my feedback to high schoolers: I loved college and want everyone to have a great experience, but I also realize that for some, taking some time off first, to work or find themselves, is a smarter idea. It will enable them to mature. I am particularly qualified to ... 32 Subjects: including prealgebra, English, Spanish, reading
{"url":"http://www.purplemath.com/Eastchester_Prealgebra_tutors.php","timestamp":"2014-04-21T04:40:32Z","content_type":null,"content_length":"24072","record_id":"<urn:uuid:5cc4b393-efff-4138-938d-8d1472f99b82>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
alright this is a math/chemistry question: change 20.0 ounces (not fluid ounces just ounces) to mL
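A worked sketch of the conversion (note the assumption: avoirdupois ounces measure mass, so a density must be assumed to obtain a volume; water at about 1.00 g/mL is used here, which the question does not specify):

\[
20.0\ \text{oz} \times 28.3495\ \tfrac{\text{g}}{\text{oz}} \approx 567\ \text{g},
\qquad
V = \frac{m}{\rho} = \frac{567\ \text{g}}{1.00\ \text{g/mL}} \approx 567\ \text{mL}.
\]

For any other substance, replace the density in the last step.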
{"url":"http://openstudy.com/updates/50622cdbe4b0da5168bd38e5","timestamp":"2014-04-17T12:38:53Z","content_type":null,"content_length":"106371","record_id":"<urn:uuid:899e6bb7-bf28-45d9-9a85-8ea9f4771449>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
What is zero content? I know the definition, but I'm looking for an explanation.. anyone? I have: for every epsilon > 0 there is a finite collection of intervals I_1, ..., I_L such that: A) Z ⊂ ∪ I_l, and B) the sum of the lengths of the I_l's is less than epsilon. But how can the sum of the lengths be less than epsilon? Does it just mean that a set of zero content can be covered by intervals that are arbitrarily small? For a start, any finite set of points has zero content. Given a set with n points, you can surround each of them by an interval of length less than $\varepsilon/n$. For an example of an infinite set with zero content, let $S = \{1/n:n=1,2,3,\ldots\}$. Given $\varepsilon>0$, the interval $(0,\varepsilon/2)$ contains all but finitely many elements of S, and you can use the remaining $\varepsilon/2$ of length to put little intervals around those finitely many points. Suppose that $\varepsilon > 0$. The open interval $O_n = \left( {a - \frac{\varepsilon }{{2^{n + 1} }},a + \frac{\varepsilon }{{2^{n + 1} }}} \right)$ has length $\ell (O_n ) = \frac{\varepsilon }{{2^ n }}$. The sum of all those is just $\sum\limits_{n = 1}^\infty {\ell (O_n )} = \varepsilon$. Thus $\{a\}$ has content zero. Now that is a simple-minded example that has a natural extension to larger sets. Essentially the set has 'zero area'. Think about any interval $[a,b]$ with $a < b$. If $\varepsilon = \frac{{b - a}}{2} > 0$ it would be impossible to cover $[a,b]$ with a collection of intervals whose total length is less than $\varepsilon$. Edit: I did not see reply #4 before posting, sorry. Content is supposed to correspond intuitively to length (in 1 dimension), area (in 2 dimensions) or volume (in higher dimensions). I say "supposed" because it turns out that historically, this has been a hard notion to pin down. The problem isn't with "normal" things like lines, triangles, circles, squares, etc. It has to do with the fact that there are lots and lots of kinds of sets, and some of them are pretty weird. Consider this set: {x in [0,1] : x is rational}. What is the "length" of this set? How would we even begin to measure it? It turns out that different notions of measurement give different answers.
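To make that last example concrete (an added illustration, not part of the original thread), the two standard notions really do disagree on $\mathbb{Q}\cap[0,1]$:

\[
\text{(Jordan) outer content of } \mathbb{Q}\cap[0,1] = 1,
\qquad
\text{Lebesgue measure of } \mathbb{Q}\cap[0,1] = 0 .
\]

Any finite collection of intervals covering $\mathbb{Q}\cap[0,1]$ must, because the rationals are dense, cover all of $[0,1]$ up to finitely many points, so its total length is at least 1. A countable cover, however, can put an interval of length $\varepsilon/2^{n}$ around the $n$-th rational, for a total length of $\varepsilon$. So the set does not have zero content in the finite-cover sense used above, even though it has Lebesgue measure zero.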
{"url":"http://mathhelpforum.com/differential-geometry/177635-what-zero-content.html","timestamp":"2014-04-18T03:55:44Z","content_type":null,"content_length":"51832","record_id":"<urn:uuid:785e7c1a-65f1-4df4-9831-927764ab2941>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
This module is a library of useful generators for structural types. Additional generator modules should be added to the Generator directory. \begin{code} {-# LANGUAGE FlexibleInstances #-} module Test.GenCheck.Generator.StructureGens ( genListOf , genListAll , listStdGens , genTplAll ) where import Test.GenCheck.Base.Base (Rank) import Test.GenCheck.Generator.Enumeration (Label(..)) import Test.GenCheck.Generator.Generator (Generator, StandardGens(..), Testable(..)) import Test.GenCheck.Generator.BaseGens() -- Testable instances import Test.GenCheck.Generator.Substitution (Structure(..), Structure2(..)) \end{code} Lists are a special kind of structure to generate, since there is only one possible list for each rank, and the substitution values are already in a list. The genListOf combinator turns a generator of a type a into a generator of lists of a's of the specified rank. The rank of the resulting generator defines the length of the lists of a's it generates. The basic list generator, with unit as the sort, is given as genListAll. The Structure class is provided for completeness, but there is only one possible list for any given rank. \begin{code} genListOf :: Generator a -> Rank -> Generator [a] genListOf g r l = let xs = g r in subLs xs where subLs [] = [] subLs xs@(_:_) = let (ys',yss) = splitAt l xs in ys' : (subLs yss) instance Structure [] where substitute lxs ys = lsub lxs ys lsub [] zs = (Just [], zs) lsub (_:_) [] = (Nothing, []) lsub (_:xs) (z:zs) = let (mlys', ys') = lsub xs zs mlys = maybe Nothing (\lys -> Just (z:lys)) mlys' in (mlys, ys') -- remember that generating a list of rank 1 gives the empty list, not a list of one element genListAll :: Generator [Label] genListAll r = [take (r-1) (repeat A)] listStdGens :: StandardGens [Label] listStdGens = StdGens g g (\_ -> g) (\_ -> g) where g = genListAll instance Testable [Label] where stdTestGens = listStdGens listStdSub :: (Testable a) => StandardGens [a] listStdSub = let stdg = stdTestGens in StdGens (vector (genAll stdg 1)) (vector (genXtrm stdg 1)) (\k -> (vector (genUni stdg k 1))) (\s -> (vector (genRand stdg s 1))) where vector xs r = let (x, xs') = splitAt r xs in x : (vector xs' r) instance Testable [Int] where stdTestGens = listStdSub :: StandardGens [Int] \end{code} A pair generator is two sorted, so is an instance of Structure2. Pairs are always of rank 2. \begin{code} instance Structure2 (,) where substitute2 _ [] ys = (Nothing, [], ys) substitute2 _ xs [] = (Nothing, xs, []) substitute2 _ (x:xs) (y:ys) = (Just (x,y), xs, ys) genTplAll :: Generator (Label, Label) genTplAll r | r == 2 = [(A,B)] genTplAll _ | otherwise = [] tplStdGens :: StandardGens (Label, Label) tplStdGens = StdGens g g (\_ -> g) (\_ -> g) where g = genTplAll instance Testable (Label,Label) where stdTestGens = tplStdGens
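A standalone illustration of the chunking performed by subLs inside genListOf above (a hypothetical helper written for this note, not part of the library module):

-- chunk l xs splits xs into consecutive sublists of length l,
-- which is exactly what subLs does with the stream of values
-- produced by the element generator at the requested rank.
chunk :: Int -> [a] -> [[a]]
chunk _ [] = []
chunk l xs = let (ys, rest) = splitAt l xs in ys : chunk l rest

-- chunk 4 [1..8] == [[1,2,3,4],[5,6,7,8]]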
{"url":"http://hackage.haskell.org/package/gencheck-0.1.1/docs/src/Test-GenCheck-Generator-StructureGens.html","timestamp":"2014-04-20T21:55:41Z","content_type":null,"content_length":"20914","record_id":"<urn:uuid:7537dd2f-be36-4b69-8a54-ab2ee76a37e3>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Bergen Point, NJ Math Tutor Find a Bergen Point, NJ Math Tutor ...I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. I have a lot of experience tutoring physics and math at all levels. 11 Subjects: including geometry, physical science, algebra 1, algebra 2 ...I have successfully tutored several students in honors and AP chemistry in the past few years. I have a broad knowledge of chemistry and passed the Praxis test 4 years ago. I really feel I can help your student succeed. 7 Subjects: including algebra 1, algebra 2, geometry, prealgebra Greetings! I am a certified Math teacher grade (5-9), with over 3 years of successful teaching experience. I am confident in my ability and passion to become helpful for your child. 4 Subjects: including trigonometry, algebra 1, prealgebra, precalculus ...I will assign homework after each session, and require students to perform periodic assessments in order to help me gauge their progress. In order to cultivate an environment most conducive to learning, I want my students to feel completely comfortable working with me. Therefore, I'm happy to offer a free consultation meeting for our first session. 27 Subjects: including SAT math, ACT Math, geometry, reading ...After four years teaching, I returned to grad school to pursue a degree in geography and now work as an environmental consultant. In college I balanced academics with fun by rowing crew, working as EMT, teaching skiing, and working in the stadium. Sharing my time between academics, team athleti... 9 Subjects: including SAT math, chemistry, physics, Microsoft PowerPoint Related Bergen Point, NJ Tutors Bergen Point, NJ Accounting Tutors Bergen Point, NJ ACT Tutors Bergen Point, NJ Algebra Tutors Bergen Point, NJ Algebra 2 Tutors Bergen Point, NJ Calculus Tutors Bergen Point, NJ Geometry Tutors Bergen Point, NJ Math Tutors Bergen Point, NJ Prealgebra Tutors Bergen Point, NJ Precalculus Tutors Bergen Point, NJ SAT Tutors Bergen Point, NJ SAT Math Tutors Bergen Point, NJ Science Tutors Bergen Point, NJ Statistics Tutors Bergen Point, NJ Trigonometry Tutors Nearby Cities With Math Tutor Chestnut, NJ Math Tutors Greenville, NJ Math Tutors Maplecrest, NJ Math Tutors Midtown, NJ Math Tutors North Elizabeth, NJ Math Tutors Pamrapo, NJ Math Tutors Parkandbush, NJ Math Tutors Peterstown, NJ Math Tutors Townley, NJ Math Tutors Tremley Point, NJ Math Tutors Tremley, NJ Math Tutors Union Square, NJ Math Tutors Weequahic, NJ Math Tutors West Carteret, NJ Math Tutors Winfield Park, NJ Math Tutors
{"url":"http://www.purplemath.com/Bergen_Point_NJ_Math_tutors.php","timestamp":"2014-04-19T06:55:08Z","content_type":null,"content_length":"24028","record_id":"<urn:uuid:c7bc41e0-e701-4361-8169-5aa1e815908f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Developing Formula Evaluation This document is for developers wishing to contribute to the FormulaEvaluator API functionality. When evaluating workbooks you may encounter a org.apache.poi.ss.formula.eval.NotImplementedException which indicates that a function is not (yet) supported by POI. Is there a workaround? Yes, the POI framework makes it easy to add implementation of new functions. Prior to POI-3.8 you had to checkout the source code from svn and make a custom build with your function implementation. Since POI-3.8 you can register new functions in run-time. Currently, contribution is desired for implementing the standard MS excel functions. Place holder classes for these have been created, contributors only need to insert implementation for the individual "evaluate()" methods that do the actual evaluation. Overview of FormulaEvaluator Briefly, a formula string (along with the sheet and workbook that form the context in which the formula is evaluated) is first parsed into RPN tokens using the FormulaParser class . (If you dont know what RPN tokens are, now is a good time to read this.) The big picture RPN tokens are mapped to Eval classes. (Class hierarchy for the Evals is best understood if you view the class diagram in a class diagram viewer.) Depending on the type of RPN token (also called as Ptgs henceforth since that is what the FormulaParser calls the classes) a specific type of Eval wrapper is constructed to wrap the RPN token and is pushed on the stack.... UNLESS the Ptg is an OperationPtg. If it is an OperationPtg, an OperationEval instance is created for the specific type of OperationPtg. And depending on how many operands it takes, that many Evals are popped of the stack and passed in an array to the OperationEval instance's evaluate method which returns an Eval of subtype ValueEval.Thus an operation in the formula is evaluated. An Eval is of subinterface ValueEval or OperationEval. Operands are always ValueEvals, Operations are always OperationEvals. OperationEval.evaluate(Eval[]) returns an Eval which is supposed to be of type ValueEval (actually since ValueEval is an interface, the return value is instance of one of the implementations of ValueEval). The valueEval resulting from evaluate() is pushed on the stack and the next RPN token is evaluated.... this continues till eventually there are no more RPN tokens at which point, if the formula string was correctly parsed, there should be just one Eval on the stack - which contains the result of evaluating the formula. Of course I glossed over the details of how AreaPtg and ReferencePtg are handled a little differently, but the code should be self explanatory for that. Very briefly, the cells included in AreaPtg and RefPtg are examined and their values are populated in individual ValueEval objects which are set into the AreaEval and RefEval (ok, since AreaEval and RefEval are interfaces, the implementations of AreaEval and RefEval - but you'll figure all that out from the code) OperationEvals for the standard operators have been implemented and tested. What functions are supported? As of Feb 2012, POI supports about 140 built-in functions, see Appendix A for the full list. 
You can programmatically list supported / unsupported functions using the following helper methods:

// list of functions that POI can evaluate
Collection<String> supportedFuncs = WorkbookEvaluator.getSupportedFunctionNames();

// list of functions that are not supported by POI
Collection<String> unsupportedFuncs = WorkbookEvaluator.getNotSupportedFunctionNames();

Two base interfaces to start your implementation

All Excel formula function classes implement either the org.apache.poi.hssf.record.formula.functions.Function or the org.apache.poi.hssf.record.formula.functions.FreeRefFunction interface. Function is a common interface for the functions defined in the binary Excel format (BIFF8): these are "classic" Excel functions like SUM, COUNT, LOOKUP, etc. FreeRefFunction is a common interface for the functions from the Excel Analysis ToolPak and for User-Defined Functions. In the future these two interfaces are expected to be unified into one, but for now you have to start your implementation from two slightly different roots.

Which interface to start from?

You are about to implement a function XXX and don't know which interface to start from: Function or FreeRefFunction. Use the following code to check whether your function is from the Excel Analysis ToolPak:

if (AnalysisToolPak.isATPFunction("XXX")) {
    // the function implements org.apache.poi.hssf.record.formula.functions.FreeRefFunction
} else {
    // the function implements org.apache.poi.hssf.record.formula.functions.Function
}

Walkthrough of an "evaluate()" implementation.

Here is the fun part: let's walk through the implementation of the Excel function SQRTPI(). AnalysisToolPak.isATPFunction("SQRTPI") returns false, so the base interface is Function. There are sub-interfaces that make life easier when implementing numeric functions or functions with a fixed number of arguments (one to four):

• org.apache.poi.hssf.record.formula.functions.NumericFunction
• org.apache.poi.hssf.record.formula.functions.Fixed1ArgFunction
• org.apache.poi.hssf.record.formula.functions.Fixed2ArgFunction
• org.apache.poi.hssf.record.formula.functions.Fixed3ArgFunction
• org.apache.poi.hssf.record.formula.functions.Fixed4ArgFunction

Since SQRTPI takes exactly one argument we start our implementation from org.apache.poi.hssf.record.formula.functions.Fixed1ArgFunction:

Function SQRTPI = new Fixed1ArgFunction() {
    public ValueEval evaluate(int srcRowIndex, int srcColumnIndex, ValueEval arg0) {
        try {
            // Retrieves a single value from a variety of different argument types according to standard
            // Excel rules. Does not perform any type conversion.
            ValueEval ve = OperandResolver.getSingleValue(arg0, srcRowIndex, srcColumnIndex);

            // Applies some conversion rules if the supplied value is not already a number.
            // Throws EvaluationException(#VALUE!) if the supplied parameter is not a number
            double arg = OperandResolver.coerceValueToDouble(ve);

            // this is where all the heavy lifting happens
            double result = Math.sqrt(arg*Math.PI);

            // Excel uses the error code #NUM! instead of IEEE NaN and Infinity,
            // so when a numeric function evaluates to Double.NaN or Double.Infinity,
            // be sure to translate the result to the appropriate error code
            if (Double.isNaN(result) || Double.isInfinite(result)) {
                throw new EvaluationException(ErrorEval.NUM_ERROR);
            }

            return new NumberEval(result);
        } catch (EvaluationException e){
            return e.getErrorEval();
        }
    }
};
Floating-point Arithmetic in Excel Excel uses the IEEE Standard for Double Precision Floating Point numbers except two cases where it does not adhere to IEEE 754: 1. Positive/Negative Infinities: Infinities occur when you divide by 0. Excel does not support infinities, rather, it gives a #DIV/0! error in these cases. 2. Not-a-Number (NaN): NaN is used to represent invalid operations (such as infinity/infinity, infinity-infinity, or the square root of -1). NaNs allow a program to continue past an invalid operation. Excel instead immediately generates an error such as #NUM! or #DIV/0!. Be aware of these two cases when saving results of your scientific calculations in Excel: “where are my Infinities and NaNs? They are gone!” Testing Framework Automated testing of the implemented Function is easy. The source code for this is in the file: o.a.p.h.record.formula.GenericFormulaTestCase.java This class has a reference to the test xls file (not /a/ test xls, /the/ test xls :) which may need to be changed for your environment. Once you do that, in the test xls, locate the entry for the function that you have implemented and enter different tests in a cell in the FORMULA row. Then copy the "value of" the formula that you entered in the cell just below it (this is easily done in excel as: [copy the formula cell] > [go to cell below] > Edit > Paste Special > Values > "ok"). You can enter multiple such formulas and paste their values in the cell below and the test framework will automatically test if the formula evaluation matches the expected value (Again, hard to put in words, so if you will, please take time to quickly look at the code and the currently entered tests in the patch attachment "FormulaEvalTestData.xls" file). Appendix A Functions supported by POI ( as of Feb 2012)
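To round off the walkthrough, here is a small end-to-end usage sketch (not part of the original guide; it assumes the SQRTPI implementation above has already been registered via WorkbookEvaluator.registerFunction, and it uses only the standard usermodel API):

import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.ss.usermodel.*;

public class SqrtPiDemo {
    public static void main(String[] args) {
        Workbook wb = new HSSFWorkbook();
        Cell cell = wb.createSheet("demo").createRow(0).createCell(0);
        cell.setCellFormula("SQRTPI(4)");   // sqrt(4 * pi), roughly 3.5449

        FormulaEvaluator evaluator = wb.getCreationHelper().createFormulaEvaluator();
        CellValue value = evaluator.evaluate(cell);   // would throw NotImplementedException
                                                      // if SQRTPI were not registered
        System.out.println(value.getNumberValue());
    }
}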
{"url":"http://poi.apache.org/spreadsheet/eval-devguide.html","timestamp":"2014-04-16T19:58:40Z","content_type":null,"content_length":"20944","record_id":"<urn:uuid:4c456212-99e8-46cf-bc31-c23e127c7e84>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix Algebra Useful for Statistics 1st edition by Searle | 9780470009611 | Chegg.com Matrix Algebra Useful for Statistics 1st edition Details about this item Matrix Algebra Useful for Statistics: WILEY-INTERSCIENCE PAPERBACK SERIES The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This book is intended to teach useful matrix algebra to 'students, teachers, consultants, researchers, and practitioners' in 'statistics and other quantitative methods'.The author concentrates on practical matters, and writes in a friendly and informal style . . . this is a useful and enjoyable book to have at hand." This book is an easy-to-understand guide to matrix algebra and its uses in statistical analysis. The material is presented in an explanatory style rather than the formal theorem-proof format. This self-contained text includes numerous applied illustrations, numerical examples, and exercises. Back to top Rent Matrix Algebra Useful for Statistics 1st edition today, or search our site for Shayle R. textbooks. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Wiley-Interscience.
{"url":"http://www.chegg.com/textbooks/matrix-algebra-useful-for-statistics-1st-edition-9780470009611-0470009616","timestamp":"2014-04-24T13:42:31Z","content_type":null,"content_length":"20973","record_id":"<urn:uuid:4398173b-a617-45d9-82b6-7f29eaeb9629>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
s are Created How the Auroral Activity Patterns are Created The Total Energy Detector (TED) is an instrument in the Space Environment Monitor (SEM) that has been routinely flown on the NOAA/POES (formerly TIROS) series of polar orbiting meteorological satellites since TIROS-N was launched in November of 1978. The instruments in the SEM, now the second-generation SEM-2, were significantly upgraded beginning with NOAA-15. The upgraded TED, which is designed to monitor the power flux carried into the Earths's atmosphere by precipitating auroral charged particles, now covers particle energies from 50 to 20,000 electron volts (eV) as compared to the earlier TED that extended in energy to only 300 eV. These measurements are made continually as the satellite passes over the polar aurora regions twice each orbit. Since 1978, observations from almost 300,000 transits over the auroral regions have been gathered under a variety of auroral activity conditions ranging from very quiet to extremely active. Power flux observations accumulated during a single transit over the polar region (which requires about 25 minutes as the satellite moves along its orbit) are used to estimate the total power input by auroral particles to a single polar region. This estimate, which is corrected to take into account how the satellite passes over a statistical auroral oval, is a measure of the level of auroral activity, much as K[p] or A[p] are measures of magnetic activity. A particle power input of less than 10 gigawatts (10,000,000,000 watts) to a single polar region, either in the North or the South, represents a very low level of auroral activity. A power input of more than 100 gigawatts represents a very high level of Auroral activity. Estimated power inputs as high as 500 gigawatts have been recorded into a single auroral region. In order to create statistical patterns of auroral power flux, estimated power inputs were computed using observations obtained from more than 100,000 passes over both the northern and southern polar regions. These passes encompassed a wide range of local times and a variety of auroral activity conditions. These polar passes were then sorted into ten auroral activity levels, depending upon the power input estimate. The upper bounds of the first nine levels were defined by a geometric progression of power levels beginning at 2.5 gigawatts up to 96 gigawatts; the tenth level contained estimated power inputs greater than 96 gigawatts. Power flux observations--averaged over one degree of magnetic latitude--from all polar passes with a given activity level were then merged to produce a statistical pattern (a map) of auroral particle power deposition for an entire polar region. Because data were gathered from several satellites, in differing orbits, data were available for almost all local times at latitudes above 45 degrees geomagnetic. In this fashion ten statistical patterns of auroral particle power input were created, one for each level of auroral activity. The statistical patterns show particle power flux to the atmosphere as a function of magnetic latitude and magnetic local time; coordinates that best order auroral phenomena. Estimated hemispheric power estimates are computed for each pass over the polar regions as data arrive at the Space Weather Prediction Center from the satellite tracking stations. Once the power input is estimated, the corresponding statistical pattern of auroral power input is selected. 
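As a rough illustration of the activity-level boundaries implied by the geometric progression described above (assuming the ratio between successive bounds is constant from 2.5 gigawatts to 96 gigawatts over the first nine levels; the operational thresholds may be rounded slightly differently): the common ratio is r = (96/2.5)^(1/8) ≈ 1.58, giving upper bounds of roughly 2.5, 3.9, 6.2, 9.8, 15.5, 24.4, 38.6, 60.9 and 96 gigawatts, with the tenth level covering everything above 96 gigawatts.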
Using the Universal Time of the satellite pass, the magnetic latitude and magnetic local time coordinates of the statistical pattern are converted to geographic coordinates; the pattern is then superimposed upon a geographic polar map of either the northern or southern hemisphere. Normalization factor (n) A normalization factor of less than 2.0 indicates a reasonable level of confidence in the estimate of power. The more the value of n exceeds 2.0, the less confidence should be placed in the estimate of hemispheric power and the activity level. The process to estimate the hemispheric power, and the level of auroral activity, involves using this normalization factor which takes into account how effective the satellite was in sampling the aurora during its transit over the polar region. A large (> 2.0) normalization factor indicates that the transit through the aurora was not very effective and the resulting estimate of auroral activity has a lower confidence. In order for users to assess the confidence in a given estimate of auroral power, we now report the numerical value of the normalization factor in our web pages.
{"url":"http://www.swpc.noaa.gov/pmap/BackgroundInfo.html","timestamp":"2014-04-16T04:17:22Z","content_type":null,"content_length":"6588","record_id":"<urn:uuid:ef59e61b-a796-4f4e-8bd0-95009804adbb>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Lagrangian Mechanics Made Simple Part 4
Orbital Motion and Kepler's Equation
Ron Steinke <rsteinke@w-link.net>

Last month we introduced the problem of a system of two particles whose potential energy depends only on their separation r. The relative motion is that of a single particle whose mass is the reduced mass
\[
\mu = \frac{m_1 m_2}{m_1 + m_2} .
\]
For the force of gravity,[1] the potential is
\[
U(r) = -\frac{G m_1 m_2}{r} .
\]
Substituting this potential into the equation of motion for the relative coordinate gives an orbit equation whose solution is
\[
r(\theta) = \frac{p}{1 + e \cos\theta} . \qquad (5)
\]
This solution is unique up to a constant shift in the angle \(\theta\). Eq. (5) describes a conic section. The quantity e is called the eccentricity of the orbit. An eccentricity of 0 gives a circular orbit, between 0 and 1 gives an ellipse, 1 gives a parabola, and greater than 1 gives a hyperbola. Notice that e = 0 in (5) gives a constant value for r.

The three body problem is not solvable in the general case. However, some approximate solutions are quite interesting. Consider the case of a small body of mass m moving in the field of two much larger bodies that orbit their common center of mass. The two larger bodies lie at fixed positions in a frame that rotates with their orbital angular velocity, where we have used the Lagrangian for a rotating coordinate system, which we derived in Part 2 of this series. To simplify this a bit, we use the relations fixing the positions of the two larger bodies about the center of mass. This substitution gives the Lagrangian a simpler form. Notice that all terms are now proportional to the small mass m.

To find metastable points, we look for solutions where the small body is at rest in the rotating frame, that is, for stationary points of the effective potential, which is just the centrifugal term plus the gravitational attraction of the two larger bodies. There exist points for which the centripetal acceleration is balanced by the attraction of the larger bodies. At these points the force vanishes; in writing it down we drop the common factor of m. One family of solutions places the small body on the line through the two larger bodies; these are the three collinear equilibrium points. There are also two solutions where the small body is equidistant from the two larger bodies, forming an equilateral triangle with them. These are what are commonly known as the fourth and fifth Lagrange points, and are known to be stable. However, a quick examination of the force shows that they do not derive this stability from the effective potential. Examine the force at a point displaced slightly from one of these Lagrange points: the eigenvalues of the in-plane part of the force matrix are both positive. Therefore, for small perturbations in the orbital plane, the effective potential tends to push particles away from the Lagrange points. The stability of orbits at these points is due to the Coriolis force, which we have neglected. We'll discuss how that works next month.

[1] This section leans heavily on J. Marion and S. Thornton, Classical Dynamics of Particles and Systems, 3rd ed., pp. 256-260.

Ron Steinke 2002-08-11
{"url":"http://www.worldforge.org/project/newsletters/September2002/LagrangianP4","timestamp":"2014-04-17T15:46:45Z","content_type":null,"content_length":"34209","record_id":"<urn:uuid:86175907-8818-4490-a66e-359c2f89f685>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: percent symbols in catplot

From: "Nick Cox" <n.j.cox@durham.ac.uk>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: RE: percent symbols in catplot
Date: Sun, 28 Feb 2010 17:39:36 -0000

Eric started a thread under this heading which featured various exchanges with Martin Weiss. Replying to the original question:

I am the author of -catplot-. The main issue is that -catplot- is purely a wrapper for -graph bar-, or -graph hbar- or -graph dot-, one of which is called depending on whatever is specified. In Eric's example -graph bar- is invoked. I confirm that I don't know a way to do this with high-level commands, as -graph bar- doesn't offer any such handle.

A similar graph is possible with -tabplot- from SSC, at present with a workaround:

sysuse auto, clear
contract rep78, perc(_perc) nomiss
gen toshow = string(_perc, "%3.2f") + "%"
tabplot rep78 [aw=_perc], showval(toshow)

That kind of thing is much easier as -tabplot- is based on -twoway-.

I have previously resisted the idea of being able to add "%" signs everywhere. Isn't a specification of percent on the graph margins sufficient? However, I'd add such an option to -tabplot- if it were clear that there were people who really wanted it.

Eric Booth wrote:

I am using -catplot- (from SSC) on Stata 11 MP for Mac OSX. I'd like to show the percent sign in the label for each of the category bars when using the "percent" option. For example,

webuse auto, clear
catplot bar rep78, percent blabel(bar, position(outside) format(%9.1f))
catplot bar rep78, by(for) percent blabel(bar, position(outside))

shows the percent of each rep78 category out of 100, but I can't get it to show the % sign, so it could say "43.5%", etc. Using graph editor, I found that I can add a % sign to the bar label text manually (though I'd rather not have to do that for many graphs), but after looking through the barlabel options help documentation, I couldn't figure out how to change the bar label automatically. (I had the idea that if I could override the bar labels like you can the text for a key in a legend then I could calculate and substitute these values into the -catplot- command in a loop, but I haven't found a way to do this using the barlabel option.) Another option might be to write something that automates those graph recorder grec file changes (ex: when I add the "%" by hand, it issues the command: .plotregion1.barlabels[3].text.Arrpush 43.5% in the recording), but this is a pain. Any suggestions?
{"url":"http://www.stata.com/statalist/archive/2010-02/msg01380.html","timestamp":"2014-04-21T07:27:39Z","content_type":null,"content_length":"9924","record_id":"<urn:uuid:dc453e2a-d90b-48d2-b97f-cb2509251757>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
pre-algebra fraction help

Ten (posted Thursday 28th of Dec 08:02):
Hello, I have been trying to solve problems related to pre-algebra fraction help but I don't seem to be getting anywhere with it. Does anyone know about resources that might help?

Jahm Xjardx (posted Thursday 28th of Dec 11:51):
Hello Dude, how are you? Well, I've been reading your post and believe me: I know what it feels like. Some time ago I was in the same problem, but before you get a professor, I would like to recommend you one program that's really helpful: Algebrator. I really tried a lot of other ways but that one is definitely the real deal! The best of luck with that! Tell me what you think!

Techei-Mechial (posted Saturday 30th of Dec 07:34):
I am a student turned professor; I give classes to junior school children. Along with the traditional mode of teaching, I use Algebrator to solve questions practically in front of the learners.

Matdhejs (posted Saturday 30th of Dec 16:00):
I am a regular user of Algebrator. It not only helps me finish my assignments faster, the detailed explanations provided make understanding the concepts easier. I strongly suggest using it to help improve problem solving skills.
{"url":"http://www.algebra-help.org/basic-algebra-help/angle-suplements/pre-algebra-fraction-help.html","timestamp":"2014-04-21T07:20:51Z","content_type":null,"content_length":"46137","record_id":"<urn:uuid:af896ea4-7579-44a7-8a0d-78efd5034ec0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
COMPASS Math Study Guide for COMPASS Math Prep

What Should a COMPASS Math Study Guide Contain

Taking the COMPASS Math Test

Now that you have decided that you want to study a Math course in college, it is important that you take the COMPASS test, because the result of this exam will help you find out whether you have the skills to successfully complete the course. When it comes to the Math placement exam, it is important for you to refer to a good study guide with the help of which you can obtain a high score. There is no passing score in the exam, but every college has a cut-off mark, and if you score below this mark, you will not be given admission. Thus, you need to make sure that you prepare for the test properly.

Before you start with your preparation, you should know that there are five areas in which you will be tested:

1. Pre-Algebra
2. Algebra
3. Geometry
4. College Algebra
5. Trigonometry

Contents of a Test Guide

You should look for a study guide that is written in an easy to understand language, so that you will be able to understand it well. Moreover, if you are looking forward to buying a book, there are certain features or contents that you should be looking for. These are as follows:

• Math area coverage: First of all, it is important that the guide you buy covers all the areas of the Math test. After all, you will only be able to score high if you know each of the content areas well.

• Formulas: Make sure that the guide you buy has mathematical formulas relevant to the content areas. This way, it will be easy for you to refer to them and also memorize them for a better result in the actual test.

• Illustrations: It is important that the book has illustrations for you to refer to, because you will truly be able to understand how to work on the problems when you see the illustrations. If there is no illustration, you may not know how to solve a specific problem, and the guide will not be useful to you as such. So make sure you buy a guide that has clear and easy to understand illustrations.

• Practice tests: This is one other content that you should look for in a study guide. Make sure that the book offers sample tests so that you will get a good idea about the questions that you can expect, and you will also be able to improve your knowledge of mathematical concepts by working on these questions. The book should also contain explanations of the correct answers, because this way you will be able to truly understand how the problems are solved.

Review of a Few Math Guides

Given below are the study guides that you can refer to in order to prepare for the test:

a. Compass Math Test Success by Academic Success Media (http://www.amazon.com/Compass-Math-Test-Success-Solutions/dp/1453634789/ref=pd_sim_sbs_b_3/184-4363943-4840941): If you are looking for a book that will cover each and every question that may come in the Math test, then this is one book that you should consider buying. It covers all the areas of the exam and it also has examples and practice tests that you can work on. Many test takers who have bought this book have found it to be useful, so you can consider buying it to help you with your preparation.

b. Bob Miller's Math Prep for the COMPASS Exam (http://www.amazon.com/COMPASS-Exam-Millers-Accuplacer-Preparation/dp/073861002X/ref=sr_1_2?s=books&ie=UTF8&qid=1356190451&sr=1-2&keywords=Compass+Math+Test): This is a book you should buy because you will be able to find out everything that you ought to know about the Math Placement Test.
The review chapters for each of the test areas are easy to understand, and you will find two sample tests that resemble the actual exam.

c. Answers Explained by Jill Hacker (http://www.amazon.com/Answers-Explained-Arriving-Placement-Questions/dp/1450505139/ref=sr_1_3?s=books&ie=UTF8&qid=1356190451&sr=1-3&keywords=Compass+Math+Test): To score high in the COMPASS math test, you need to understand how to solve the problems that are presented to you. This is where you will find this study guide to be of great help to you. It will prepare you for the test in the most effective manner so that you can score high in the exam.
{"url":"http://www.testpreppractice.net/COMPASS/compass-math-study-guide.html","timestamp":"2014-04-18T08:04:08Z","content_type":null,"content_length":"20271","record_id":"<urn:uuid:344981ec-0180-4913-a851-555e2d2825eb>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications of Generating Functions in Nonparametric Tests

Nonparametric Tests

Nonparametric methods were developed to be used when the researcher knows nothing about the distribution of the variable of interest (hence the name nonparametric). Basically, there is at least one nonparametric equivalent for each parametric type of test. When you want to compare the distribution of two independent samples, usually you would use the Student t test. Nonparametric tests are robust (they need no assumptions about the distribution of the background population), efficient (in early days it was believed that a heavy price in loss of efficiency would have to be paid for robustness, but [2, 3] and several other authors showed clearly that the efficiency is comparable with classical tests using the assumption of normality), and easy to handle.

Nevertheless, they are not widely accepted. One possible reason is that the application of a nonparametric test requires the use of (and the confidence in) a table (see, for example, [4] or [5]) of the distribution of its test statistics. Generating such tables requires heavy algebraic manipulations and is therefore mostly beyond the scope of introductory textbooks. Published tables are sometimes inaccurate or not extensive enough (especially in the case of ties). Moreover, Mitic [6] pointed out that the entries of some published tables differ, depending on their source. Finally, if the sample size is large, it is possible to approximate most of these distributions by standard distributions, but little is known about the quality of these approximations for small sample sizes. Thus, I believe, Mathematica users are interested in procedures that allow them to generate accurate and extensive tables of the distributions of nonparametric test statistics and plot the distribution functions of these statistics. With these procedures, Mathematica users are independent of published tables and can investigate the quality of the approximation by standard distributions.
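The kind of table the article has in mind can be built from a generating-function recursion. As a rough illustration only (this is my sketch in Python, not the article's Mathematica code), the snippet below computes the exact null distribution of the Wilcoxon rank-sum statistic for two small untied samples by expanding the generating function prod_{k=1..m+n} (1 + q x^k) and reading off the coefficient of q^m.

from collections import defaultdict
from fractions import Fraction

def rank_sum_distribution(m, n):
    """Exact null distribution of the Wilcoxon rank-sum statistic W
    (sum of the ranks of the first sample, sample sizes m and n, no ties).
    count[j][s] is the number of size-j subsets of {1,...,m+n} with rank sum s."""
    N = m + n
    count = [defaultdict(int) for _ in range(m + 1)]
    count[0][0] = 1
    for k in range(1, N + 1):              # multiply in the factor (1 + q x^k)
        for j in range(min(k, m), 0, -1):  # descend so rank k is used at most once
            for s, c in list(count[j - 1].items()):
                count[j][s + k] += c
    total = sum(count[m].values())         # equals binomial(N, m)
    return {w: Fraction(c, total) for w, c in sorted(count[m].items())}

if __name__ == "__main__":
    dist = rank_sum_distribution(3, 4)
    for w, p in dist.items():
        print(f"P(W = {w:2d}) = {p}")
    # Cumulative sums of these probabilities reproduce the usual published
    # critical values; the Mann-Whitney U statistic is W - m(m+1)/2.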
{"url":"http://www.mathematica-journal.com/issue/v9i4/contents/GeneratingFunctions/GeneratingFunctions_2.html","timestamp":"2014-04-17T13:32:27Z","content_type":null,"content_length":"8655","record_id":"<urn:uuid:1b45809a-2e40-435d-bc48-723581938706>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Coherence of Proof-Net Categories

Kosta Dosen and Zoran Petric
Matematicki institut SANU, Beograd, Serbia and Montenegro

Abstract: The notion of proof-net category defined in this paper is closely related to graphs implicit in proof nets for the multiplicative fragment without constant propositions of linear logic. Analogous graphs occur in Kelly's and Mac Lane's coherence theorem for symmetric monoidal closed categories. A coherence theorem with respect to these graphs is proved for proof-net categories. Such a coherence theorem is also proved in the presence of arrows corresponding to the mix principle of linear logic. The notion of proof-net category catches the unit-free fragment of the notion of star-autonomous category, a special kind of symmetric monoidal closed category.

Keywords: generality of proofs; linear logic; mix; proof nets; linear distribution; dissociativity; categorial coherence; Kelly-Mac Lane graphs; Brauerian graphs; split equivalences; symmetric monoidal closed category; star-autonomous category

Classification (MSC2000): 03F07; 03F52; 18D10; 18D15; 19D23
{"url":"http://www.emis.de/journals/PIMB/092/1.html","timestamp":"2014-04-16T04:23:59Z","content_type":null,"content_length":"5049","record_id":"<urn:uuid:7fa9f469-d32e-4fab-8fb8-f0eab3680527>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Crofton, MD Algebra 1 Tutor Find a Crofton, MD Algebra 1 Tutor ...I am currently a graduate student at Capella University seeking my Masters degree in Educational Psychology 2015. I am also an admissions counselor for The Chicago School of Professional Psychology in Washington, D.C. I am an experienced Counselor and truly love what I do. 20 Subjects: including algebra 1, English, writing, reading ...I have great intuition in solving the issues children have in learning, especially in attention spans. I have been a GED teacher for 2 years, a tutor on WyzAnt for 2+ years and have successfully taught students in taking the SAT and GED. I have 6 students taught by me who have earned their GEDs and have raised the scores of 3 students in their SATs. 31 Subjects: including algebra 1, English, writing, reading John received his Bachelor's Degree in Computer Science from Morehouse College and a Master of Business Administration (MBA) from Georgia Tech with concentrations in Finance and Information Technology. He has served as a Life Leadership Adviser for the NBMBAA Leaders of Tomorrow Program (LOT) for t... 18 Subjects: including algebra 1, statistics, geometry, algebra 2 ...I would ask the student to give me weekly feedback on how such techniques are assisting him/her to master the study process. As a former teacher of learning disabilities and the parent of a child diagnosed with ADD, I have had to learn adaptive strategies to help students and my son achieve succ... 23 Subjects: including algebra 1, English, reading, writing ...I specialize in Music and Mathematics.1) Music a) Trombone (beginning, intermediate, and advanced) b) Piano (beginning level, classical and jazz) c) Music Theory2) Mathematics - all mathematics up to and including calculus and linear algebra. I served in the Navy, performing in the Pacifi... 15 Subjects: including algebra 1, calculus, geometry, statistics Related Crofton, MD Tutors Crofton, MD Accounting Tutors Crofton, MD ACT Tutors Crofton, MD Algebra Tutors Crofton, MD Algebra 2 Tutors Crofton, MD Calculus Tutors Crofton, MD Geometry Tutors Crofton, MD Math Tutors Crofton, MD Prealgebra Tutors Crofton, MD Precalculus Tutors Crofton, MD SAT Tutors Crofton, MD SAT Math Tutors Crofton, MD Science Tutors Crofton, MD Statistics Tutors Crofton, MD Trigonometry Tutors
{"url":"http://www.purplemath.com/crofton_md_algebra_1_tutors.php","timestamp":"2014-04-20T04:20:28Z","content_type":null,"content_length":"24162","record_id":"<urn:uuid:f7f0dabb-5b6a-4f0b-b3bd-aa62c44f44d3>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
When can a family of polynomials get a weight function to be made orthogonal?

Let $\lbrace P_n(z)\rbrace_{n\in\mathbb N_0}$ be a family of polynomials defined by a generating function $g(t,z)=\sum\limits_{n=0}^\infty P_n(z)t^n$ or by a contour integral $P_n(z)=\frac1{2\pi i}\oint\frac{g(t,z)}{t^{n+1}}dt$. Are there known sufficient conditions on $g$ or on the $P_n$ themselves that guarantee the existence of a weight function $w:I\to \mathbb R^+_0$ (where $I\subset\mathbb R$ is an appropriate interval) such that the $P_n$ are orthogonal w.r.t. $w$?

Comments:

Orthogonal polynomials must satisfy a three-term recursion $x P_{n+1}(x) = A_n P_n(x) - B_n P_{n-1}(x)$ and have real interlaced roots. Not clear how to test these necessary conditions from a generating function or contour integral, let alone give sufficient conditions. – Noam D. Elkies Jan 27 '12 at 23:06

Oops, I started writing my answer before this comment appeared. The three-term recurrence should be $xP_{n}(x) = P_{n+1}(x) + A_n P_n(x) - B_n P_{n-1}(x)$ with $B_n>0$, and then it actually implies that the roots are interlaced. – Henry Cohn Jan 27 '12 at 23:18

(But you're right that this isn't a particularly useful condition for proving that polynomials are orthogonal. I view Favard's theorem as having mainly psychological value: if you observe a suitable recurrence experimentally, then you really ought to look for an orthogonality proof to explain it. It's sometimes possible to prove the recurrence directly and work from there, but this is generally not the most flexible or illuminating approach.) – Henry Cohn Jan 27 '12 at 23:21

Henry is right, and I apologize for the typo. – Noam D. Elkies Jan 27 '12 at 23:56

Answer:

Favard's theorem characterizes this in terms of the three-term recurrence. Suppose the polynomials $P_n$ are normalized so that they are monic. Then they are orthogonal polynomials with respect to some Borel measure if and only if there are constants $\alpha_n$ and $\beta_n$ such that $P_n(x) = (x+\alpha_n) P_{n-1}(x) + \beta_n P_{n-2}(x)$ and $\beta_n < 0$. (The sign condition on $\beta_n$ is needed to get a positive measure. I think you still get a signed measure if you have a three-term recurrence with $\beta_n \ge 0$, but I'm not certain.)

This is pretty easy to test for in practice if you are given a sequence of polynomials numerically. Strictly speaking, it doesn't guarantee a weight function as specified in your question, since the measure may not be absolutely continuous with respect to Lebesgue measure, but I assume that's not what you really care about. If it is, then I'm not sure offhand how to characterize that case.

Thank you. That's about what I expected: that one of the necessary conditions is essentially sufficient. Like Noam D. Elkies, I didn't really expect there to be any hope of deriving anything directly from a generating function. Let alone the even more hopeless question: If only $g(t,z)$ is given, is there even a way to know, without doing a Taylor expansion, that the coefficients will actually be polynomials in $z$? – Wolfgang Jan 28 '12 at 10:24
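Henry Cohn's remark that the recurrence is "easy to test for in practice" can be illustrated with a short numerical experiment. The sketch below is not part of the original thread; it fits the monic three-term recurrence $P_n = (x+\alpha_n)P_{n-1} + \beta_n P_{n-2}$ to the first few monic Legendre polynomials (a family already known to be orthogonal) and checks that every recovered $\beta_n$ is negative, as Favard's criterion requires.

import numpy as np
from numpy.polynomial import legendre as L

def monic_legendre(n):
    """Power-basis coefficients (ascending) of the degree-n Legendre
    polynomial rescaled to be monic."""
    c = L.leg2poly([0] * n + [1])   # convert from the Legendre basis to powers of x
    return c / c[-1]

def fit_recurrence(P):
    """Given monic polynomials P[0..N] (ascending power-basis coefficients),
    solve P_n - x*P_{n-1} = alpha_n*P_{n-1} + beta_n*P_{n-2} by sampling."""
    xs = np.linspace(-1, 1, 50)
    out = []
    for n in range(2, len(P)):
        lhs = np.polyval(P[n][::-1], xs) - xs * np.polyval(P[n - 1][::-1], xs)
        A = np.column_stack([np.polyval(P[n - 1][::-1], xs),
                             np.polyval(P[n - 2][::-1], xs)])
        (alpha, beta), *_ = np.linalg.lstsq(A, lhs, rcond=None)
        out.append((n, alpha, beta))
    return out

P = [monic_legendre(n) for n in range(8)]
for n, alpha, beta in fit_recurrence(P):
    print(f"n={n}: alpha={alpha: .3e}, beta={beta: .3e}")
# alpha_n is ~0 (the Legendre weight is symmetric) and every beta_n < 0,
# consistent with orthogonality w.r.t. a positive measure.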
{"url":"https://mathoverflow.net/questions/86864/when-can-a-family-of-polynomials-get-a-weight-function-to-be-made-orthogonal","timestamp":"2014-04-18T18:32:10Z","content_type":null,"content_length":"57307","record_id":"<urn:uuid:0afb9581-33f7-4853-9762-860d816f7b58>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
PGM-class and MRF parameter learning

I'm taking Stanford CS 228 (a.k.a. Probabilistic Graphical Models) on Coursera. The class is great; I guess it provides close to the maximum one can do under the constraints of remoteness and bulkness. The thing I miss is theoretical problems, which were taken aside from the on-line version because they could not be graded automatically.

There is an important thing about graphical models I fully realized only recently (partly due to the class). This thing should be articulated clearly in every introductory course, but is often just mentioned, probably because lecturers consider it obvious. The thing is that there is no probabilistic meaning of MRF potentials whatsoever. The partition function is there not only for amenity: in contrast to Bayesian networks, there is no general way to assign the potentials of an MRF so as to avoid normalization. The loops make it impossible.

The implication is that one should not assign potentials by estimating frequencies of assignments to factors (possibly conditioned on features), like I did earlier. This is quite a bad heuristic because it is susceptible to overcounting. Let me give an example. For the third week programming assignment we needed to implement a Markov network for handwriting OCR. The unary and pairwise potentials are somewhat obvious, but there was also a task to add ternary factors. The accuracy of the pairwise model is 26%. Mihaly Barasz tried to add ternary factors with values proportional to frequencies in English, which decreased performance to 21% (link for those who have access). After removing pairwise factors, the performance rose to 38%.

Why has the joint model failed? The reason is overcounting evidence: different factor types enforce the same co-occurrences, thus creating bias towards more frequent assignments, and this shows it can be significant. Therefore, we should train models with cycles discriminatively.

One more thought I'd like to share: graphical model design is similar to software engineering in that the crucial thing for both is eliminating insignificant dependencies at the architecture design stage.

1 Response to "PGM-class and MRF parameter learning"

1. Here's an interesting side point -- if you could assign potentials to avoid global normalization then computing the partition function could be done efficiently with dynamic programming. And vice versa, if you can compute the partition function efficiently, then you could rewrite your model as a product of factors which can be estimated using simple counting.
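The post's central claim, that loopy potentials cannot be locally normalized the way Bayesian-network conditional tables can, is easy to demonstrate on a toy model. The sketch below is my own illustration (not from the post or from the course assignment): it fills a three-node chain and a three-node cycle with locally normalized conditional tables and computes the partition function by brute force. On the chain Z is exactly 1; closing the loop breaks that, so a global normalizer is unavoidable and the potentials lose their probabilistic reading.

import itertools
import numpy as np

rng = np.random.default_rng(0)

def random_conditional(k=2):
    """A random k x k table t[i, j] = P(next = j | prev = i); rows sum to 1."""
    t = rng.random((k, k))
    return t / t.sum(axis=1, keepdims=True)

f_ab, f_bc, f_ca = (random_conditional() for _ in range(3))

# Chain A -> B -> C with a uniform prior on A: locally normalized factors
# multiply to a proper joint distribution, so the "partition function" is 1.
z_chain = sum(0.5 * f_ab[a, b] * f_bc[b, c]
              for a, b, c in itertools.product(range(2), repeat=3))
print("chain Z =", z_chain)   # 1.0 up to rounding

# Closing the loop A - B - C - A with a third locally normalized factor
# breaks this: the product is no longer normalized.
z_cycle = sum(f_ab[a, b] * f_bc[b, c] * f_ca[c, a]
              for a, b, c in itertools.product(range(2), repeat=3))
print("cycle Z =", z_cycle)   # generally != 1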
{"url":"http://computerblindness.blogspot.com/2012/04/pgm-class-and-mrf-parameter-learning.html","timestamp":"2014-04-16T07:22:29Z","content_type":null,"content_length":"71086","record_id":"<urn:uuid:9ff08ee5-e4dc-44d8-b5dc-02180e71115a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
radical calculator

cherlejcaad (posted Saturday 30th of Dec 07:54):
I am taking an online radical calculator course. For me it's a bit hard to study this subject all by myself. Is there someone studying online? I really need some help.

Vofj Timidrov (posted Saturday 30th of Dec 20:20):
Hello Dude, how are you? Well, I've been reading your post and believe me: I know what it feels like. Some time ago I was in the same problem, but before you get a professor, I would like to recommend you one program that's really helpful: Algebrator. I really tried a lot of other ways but that one is definitely the real deal! The best of luck with that! Tell me what you think!

Noddzj99 (posted Monday 01st of Jan 13:42):
I found out some software programs that are appropriate. I tried them out. The Algebrator turned out to be the most suited one for simplifying fractions, converting decimals and gcf. It was also effortless to operate. It took me step by step towards the solution rather than just giving the answer. That way I got to learn how to explain the problems too. By the time I was finished, I had learnt how to work out the problems. I found them practical with Algebra 2, Basic Math and Remedial Algebra, which helped me in my algebra classes. Maybe this is just what you are looking for. Why not try this out?

ZaleviL (posted Wednesday 03rd of Jan 10:45):
Dividing fractions, radical expressions and systems of equations were a nightmare for me until I found Algebrator, which is really the best math program that I have come across. I have used it frequently through several algebra classes: Intermediate Algebra, Pre Algebra and College Algebra. Simply typing in the algebra problem and clicking on Solve, Algebrator generates a step-by-step solution to the problem, and my algebra homework would be ready. I really recommend the program.

JONMAM (posted Thursday 04th of Jan 11:34):
I want it NOW! Somebody please tell me, how do I order it? Can I do so over the internet? Or is there any contact number where we can place an order?

cmithy_dnl (posted Friday 05th of Jan 08:54):
You can download this program from http://www.algebra-equation.com/solving-quadratic-equation.html. There are some demos available to see if it is really what you want, and if you find it useful, you can get a licensed version for a nominal amount.
{"url":"http://www.algebra-equation.com/solving-algebra-equation/graphing-inequalities/radical-calculator.html","timestamp":"2014-04-18T21:58:33Z","content_type":null,"content_length":"25410","record_id":"<urn:uuid:a0428249-1cb9-4051-9389-04e258eee1d2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
STAT 167 Week 4

Solutions will be available for download immediately upon purchase. Price: $25.00. Additionally, you will receive the solutions via email.

Week 4 Individual Assignment: Distribution, Hypothesis Testing, and Error Worksheet

1. Describe a normal distribution in no more than 100 words (0.5 point).
2. Construct a normal quantile plot in Statdisk, show the regression line, and paste the image into your response. Based on the normal quantile plot, does the data above appear to come from a population of Bigcone Douglas-fir tree ages that has a normal distribution? Explain (0.5 point).
3. Calculate the sample mean and standard deviation for the age of Bigcone Douglas-fir trees based on the data above. If this accurately represents the population mean and standard deviation for the age of surviving Bigcone Douglas-fir trees in the burn area, what is the probability that a randomly selected surviving Bigcone Douglas-fir tree from the burn area will be 111 years old or less? Round to the nearest hundredth (0.5 point).
4. Calculate the sample mean and standard deviation for the age of Bigcone Douglas-fir trees based on the data above. If this accurately represents the population mean and standard deviation for the age of surviving Bigcone Douglas-fir trees in the burn area, what is the probability that a randomly selected surviving Bigcone Douglas-fir tree from the burn area will be less than 0 years old? Round to the nearest hundredth. Based on this result, is it logical to assume that the population of age of surviving Bigcone Douglas-fir trees in the burn area is normally distributed with the parameters identified? Why or why not (0.5 point)?
5. Describe a standard normal distribution in no more than 100 words (0.5 point).
6. In a standard normal distribution, what is the probability of randomly selecting a value between -2.555 and -0.745? Round to four decimal places (0.5 point).
7. Describe a uniform distribution in no more than 100 words (0.5 point).
8. Describe the sampling distribution of the mean in no more than 100 words (1 point).
9. Explain the central limit theorem in no more than 150 words (1 point).
10. Describe the z distribution in no more than 100 words (0.5 point).
11. Explain when the z distribution can be used in no more than 150 words (0.5 point).
12. Describe the t distribution in no more than 100 words (0.5 point).
13. Explain when the t distribution can be used in no more than 150 words (0.5 point).
14. Describe the chi-square distribution in no more than 100 words (0.5 point).
15. Explain when the chi-square distribution can be used in no more than 150 words (0.5 point).
16. Determine the appropriate approach to conduct a hypothesis test for this claim: Fewer than 5% of patients experience negative treatment effects. Sample data: Of 500 randomly selected patients, 2.2% experience negative treatment effects (0.5 point).
17. Determine the appropriate approach to conduct a hypothesis test for this claim: The systolic blood pressure of men who run at least five miles each week varies less than does the systolic blood pressure of all men.
Sample data: n = 100 randomly selected men who run at least five miles each week, sample mean = 108.4, and s = 20.3 (0.5 point).
18. Determine the appropriate approach to conduct a hypothesis test for this claim: The mean sodium content of a 30 g serving of snack crackers is 2,200 mg. Sample data: n = 130 snack crackers, sample mean = 3,100 mg, and s = 570. The sample data appear to come from a normally distributed population (0.5 point).
19. Describe a type I error in no more than 100 words (0.5 point).
20. List two strategies that can minimize the likelihood of a type I error (0.5 point).
21. Describe a type II error in no more than 100 words (0.5 point).
22. List two strategies that can minimize the likelihood of a type II error (0.5 point).
23. In a 250- to 350-word essay, compare type I and type II errors and explain the possible negative effects of each error type in the life sciences (2 points).

Team Assignment: Confidence Intervals in the Life Sciences Presentation

Discussion Questions

Explain, as if to a high school student, what it means to make a type I error. Then, in the same way, explain what it means to make a type II error. Can you find a real-world example where a type I or type II error would most likely skew the interpretations of a study? Is there a way for scientists to correct for these errors? Why or why not? Next, reply to a classmate's response and ask a question about the response that you think a high school student would ask.

Explain the circumstances under which a z distribution should be constructed. Under what circumstances should a t distribution be constructed? When can neither a z nor a t distribution be constructed? Provide an example from a particular life science for each of these instances. Next, reply to the response of a classmate with examples from a different life science that you provided and comment on the similarities and differences in the data from different life sciences.
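Several of the worksheet items above (for example 3, 4, and 6) reduce to evaluating a normal cumulative distribution function. Purely as an illustration of that computation pattern (scipy is my choice of tool here, not something specified by the course, and the tree-age data referenced in the worksheet is not included on this page, so the mean and standard deviation below are made-up placeholders):

from scipy.stats import norm

# Item 6 pattern: standard normal probability between two z-values.
p_between = norm.cdf(-0.745) - norm.cdf(-2.555)
print(round(p_between, 4))

# Items 3-4 pattern: P(X <= x) for X ~ Normal(mean, sd) estimated from a sample.
mean, sd = 120.0, 35.0   # placeholder values, not the worksheet data
print(round(norm.cdf(111, loc=mean, scale=sd), 2))
print(norm.cdf(0, loc=mean, scale=sd))  # P(age < 0): tiny but nonzero under a normal model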
{"url":"http://www.lil-help.com/index.php?page=shop.product_details&category_id=25&flypage=flypage_images.tpl&product_id=2545&vmcchk=1&option=com_virtuemart&Itemid=153&pop=1","timestamp":"2014-04-19T07:12:34Z","content_type":null,"content_length":"49258","record_id":"<urn:uuid:9e54371d-55c4-4bc7-9aa3-e5106d33af62>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Path through a 3x3x3 grid
Replies: 3, Last Post: Feb 26, 2013 5:01 PM

Re: Path through a 3x3x3 grid
Posted: Feb 26, 2013 4:59 PM

On Feb 20, 3:46 pm, Clive Tooth <cli...@gmail.com> wrote:
> There is a (well known) continuous path, made of four straight
> sections, which passes exactly once through each of 9 points arranged
> in a square 3x3 array.
> Using three of these paths, plus two plane-to-plane straight sections,
> it is clearly possible to make a continuous path, made of 14 straight
> sections, which passes exactly once through each of 27 points arranged
> in a 3x3x3 grid.
> However, there is at least one such path made up of only 13 straight
> sections.

I believe that there are exactly 26 essentially distinct paths through the 27 points of the 3x3x3 grid. Here are images of all of them:

Clive Tooth
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2436430&messageID=8423943","timestamp":"2014-04-20T01:21:26Z","content_type":null,"content_length":"20358","record_id":"<urn:uuid:5df5092e-f559-4502-b00a-dcc4e17c22a6>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
number theory (mathematics)

number theory, branch of mathematics concerned with properties of the positive integers (1, 2, 3, …). Sometimes called "higher arithmetic," it is among the oldest and most natural of mathematical pursuits.

Number theory has always fascinated amateurs as well as professional mathematicians. In contrast to other branches of mathematics, many of the problems and theorems of number theory can be understood by laypersons, although solutions to the problems and proofs of the theorems often require a sophisticated mathematical background. Until the mid-20th century, number theory was considered the purest branch of mathematics, with no direct applications to the real world. The advent of digital computers and digital communications revealed that number theory could provide unexpected answers to real-world problems. At the same time, improvements in computer technology enabled number theorists to make remarkable advances in factoring large numbers, determining primes, testing conjectures, and solving numerical problems once considered out of reach.

Modern number theory is a broad subject that is classified into subheadings such as elementary number theory, algebraic number theory, analytic number theory, geometric number theory, and probabilistic number theory. These categories reflect the methods used to address problems concerning the integers.

From prehistory through Classical Greece

The ability to count dates back to prehistoric times. This is evident from archaeological artifacts, such as a 10,000-year-old bone from the Congo region of Africa with tally marks scratched upon it—signs of an unknown ancestor counting something. Very near the dawn of civilization, people had grasped the idea of "multiplicity" and thereby had taken the first steps toward a study of numbers.

It is certain that an understanding of numbers existed in ancient Mesopotamia, Egypt, China, and India, for tablets, papyri, and temple carvings from these early cultures have survived. A Babylonian tablet known as Plimpton 322 (c. 1700 bc) is a case in point. In modern notation, it displays number triples x, y, and z with the property that x^2 + y^2 = z^2. One such triple is 2,291, 2,700, and 3,541, where 2,291^2 + 2,700^2 = 3,541^2. This certainly reveals a degree of number theoretic sophistication in ancient Babylon. Despite such isolated results, a general theory of numbers was nonexistent. For this—as with so much of theoretical mathematics—one must look to the Classical Greeks, whose groundbreaking achievements displayed an odd fusion of the mystical tendencies of the Pythagoreans and the severe logic of Euclid's Elements (c. 300 bc).

According to tradition, Pythagoras (c. 580–500 bc) worked in southern Italy amid devoted followers. His philosophy enshrined number as the unifying concept necessary for understanding everything from planetary motion to musical harmony. Given this viewpoint, it is not surprising that the Pythagoreans attributed quasi-rational properties to certain numbers. For instance, they attached significance to perfect numbers—i.e., those that equal the sum of their proper divisors. Examples are 6 (whose proper divisors 1, 2, and 3 sum to 6) and 28 (1 + 2 + 4 + 7 + 14). The Greek philosopher Nicomachus of Gerasa (flourished c.
ad 100), writing centuries after Pythagoras but clearly in his philosophical debt, stated that perfect numbers represented "virtues, wealth, moderation, propriety, and beauty." (Some modern writers label such nonsense numerical theology.) In a similar vein, the Greeks called a pair of integers amicable ("friendly") if each was the sum of the proper divisors of the other. They knew only a single amicable pair: 220 and 284. One can easily check that the sum of the proper divisors of 284 is 1 + 2 + 4 + 71 + 142 = 220 and the sum of the proper divisors of 220 is 1 + 2 + 4 + 5 + 10 + 11 + 20 + 22 + 44 + 55 + 110 = 284. For those prone to number mysticism, such a phenomenon must have seemed like magic.
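The divisor sums quoted above are easy to verify mechanically. The following short sketch is added here purely as an illustration (it is not part of the encyclopedia article); it checks the perfect numbers 6 and 28 and the amicable pair 220 and 284.

def proper_divisor_sum(n):
    """Sum of the proper divisors of n (divisors smaller than n itself)."""
    return sum(d for d in range(1, n) if n % d == 0)

# Perfect numbers equal the sum of their proper divisors.
print(proper_divisor_sum(6), proper_divisor_sum(28))     # 6 28

# An amicable pair: each number is the sum of the other's proper divisors.
print(proper_divisor_sum(220), proper_divisor_sum(284))  # 284 220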
{"url":"http://www.britannica.com/EBchecked/topic/422325/number-theory","timestamp":"2014-04-20T06:13:30Z","content_type":null,"content_length":"85587","record_id":"<urn:uuid:0a3d25e9-8895-4adb-b381-1a6e7cf0e750>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the domain of a function

January 29th 2008, 07:08 AM, #1:
Find the domain of $\frac{x}{\sqrt{39x^2-7}}$. For clarification purposes, in word form this would be: the fraction of x over the square root of 39 x squared minus seven. It would help a lot if someone could help me find and write the answer in set builder notation (ex: [-5, infinity)). Thanks in advance for your assistance.

Reply: The domain is all values of $x$ that the function takes on. In this scenario, not only must the square root not be negative, but it cannot be zero, so: $x: \left(-\infty, -\sqrt{\frac{7}{39}}\right) \cup \left(\sqrt{\frac{7}{39}}, \infty \right)$ (Last edited by colby2152; January 30th 2008 at 04:42 AM.)

Reply, quoting the earlier version "$x: (-\infty, -\sqrt{\frac{7}{39}}) \cap (\sqrt{\frac{7}{39}}, \infty)$": use \left( and \right) so that the brackets encompass everything. So you would get $x: \left(-\infty, -\sqrt{\frac{7}{39}} \right) \cap \left(\sqrt{\frac{7}{39}}, \infty \right)$, which looks nicer.

Reply: ...well, sort of, that answer's ugly either way... it should be $x: \left(-\infty, -\sqrt{\frac{7}{39}} \right) \cup \left(\sqrt{\frac{7}{39}}, \infty \right)$ ?
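For anyone who wants to double-check the interval arithmetic, a short sympy sketch (my addition, not part of the original thread; sympy is just one convenient tool for this) confirms where the expression under the square root is positive:

import sympy as sp

x = sp.symbols('x', real=True)
# The denominator sqrt(39*x**2 - 7) must be real and nonzero, so 39*x**2 - 7 > 0.
print(sp.solve_univariate_inequality(39*x**2 - 7 > 0, x))
# The solution set is x < -sqrt(7/39) or x > sqrt(7/39), i.e. the union of
# open intervals given in the thread.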
{"url":"http://mathhelpforum.com/pre-calculus/27045-finding-domain-function.html","timestamp":"2014-04-20T20:25:27Z","content_type":null,"content_length":"59069","record_id":"<urn:uuid:87b9f0a6-9c8d-4f43-914a-b35455616dcf>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Conway's Game of Life The Game of Life (an example of a cellular automaton) is played on an infinite two-dimensional rectangular grid of cells. Each cell can be either alive or dead. The status of each cell changes each turn of the game (also called a generation) depending on the statuses of that cell's 8 neighbors. Neighbors of a cell are cells that touch that cell, either horizontal, vertical, or diagonal from that cell. The initial pattern is the first generation. The second generation evolves from applying the rules simultaneously to every cell on the game board, i.e. births and deaths happen simultaneously. Afterwards, the rules are iteratively applied to create future generations. For each generation of the game, a cell's status in the next generation is determined by a set of rules. These simple rules are as follows: • If the cell is alive, then it stays alive if it has either 2 or 3 live neighbors • If the cell is dead, then it springs to life only in the case that it has 3 live neighbors There are, of course, as many variations to these rules as there are different combinations of numbers to use for determining when cells live or die. Conway tried many of these different variants before settling on these specific rules. Some of these variations cause the populations to quickly die out, and others expand without limit to fill up the entire universe, or some large portion thereof. The rules above are very close to the boundary between these two regions of rules, and knowing what we know about other chaotic systems, you might expect to find the most complex and interesting patterns at this boundary, where the opposing forces of runaway expansion and death carefully balance each other. Conway carefully examined various rule combinations according to the following three criteria: • There should be no initial pattern for which there is a simple proof that the population can grow without limit. • There should be initial patterns that apparently do grow without limit. • There should be simple initial patterns that grow and change for a considerable period of time before coming to an end in the following possible ways: 1. Fading away completely (from overcrowding or from becoming too sparse) 2. Settling into a stable configuration that remains unchanged thereafter, or entering an oscillating phase in which they repeat an endless cycle of two or more periods. Example Patterns Using the provided game board(s) and rules as outline above, the students can investigate the evolution of the simplest patterns. They should verify that any single living cell or any pair of living cells will die during the next iteration. Some possible triomino patterns (and their evolution) to check: Here are some tetromino patterns (NOTE: The students can do maybe one or two of these on the game board and the rest on the computer): Some example still lifes: Square : Boat : Loaf : Ship : The following pattern is called a "glider." The students should follow its evolution on the game board to see that the pattern repeats every 4 generations, but translated up and to the left one square. A glider will keep on moving forever across the plane. Another pattern similar to the glider is called the "lightweight space ship." It too slowly and steadily moves across the grid. Early on (without the use of computers), Conway found that the F-pentomino (or R-pentomino) did not evolve into a stable pattern after a few iterations. 
In fact, it doesn't stabilize until generation The F-pentomino stabilizes (meaning future iterations are easy to predict) after 1,103 iterations. The class of patterns which start off small but take a very long time to become periodic and predictable are called Methuselahs. The students should use the computer programs to view the evolution of this pattern and see how/where it becomes stable. The "acorn" is another example of a Methuselah that becomes predictable only after 5206 generations. Alan Hensel compiled a fairly large list of other common patterns and names for them, available at radicaleye.com/lifepage/picgloss/picgloss.html. Life32 is a full-featured and fast Game of Life simulator for Windows. You can download the Life32 program here. There are initial patterns that can be used only with Life32 that you can download here. Another extraordinarily fast program that can be installed on Windows, OS X, and Linux is Golly, which uses hashing for truly amazing speedups. Golly can be found at http://sourceforge.net/ project/showfiles.php?group_id=139354. There is a brief description of how Golly achieves such amazing speed here. There are also many Java implementations of The Game that can be run under in most modern web browsers, though they are usually slower. One of these can be found at http://www.ibiblio.org/lifepatterns/. Jason Summers has compiled a very interesting collection of life patterns that can be run with either Life32 or Golly, which can be downloaded here. If you're using Life32, then after installing, the students should navigate to the directory containing the initial patterns linked to above. In this directory are files with standard Life patterns predefined in them. The following patterns are provided (and the students should run the files in this order): a standard glider, a Queen shuttle bee, a lasting Queen shuttle bee, a Gosper glider gun (first example of a pattern growing indefinitely, won the creator $50), a LWSS (light-weight space ship), a pulsar, and a pentadecathlon. After looking at (and trying to understand) the easier examples, the students can play around with some of the files in this compilation by Jason Summers of popular and look at other interesting patterns. Some of the better files are located in the "applications" and "guns" directories. If you're using Golly, then another list of initial patterns is prominently located on the left-hand side of the window. Activity - Two-Player Game of Life To call Conway's Game of Life a game is to stretch the meaning of the word "game", but there is an fun adaptation that can produce a competitive and strategic activity for multiple players. The modification made is that now the live cells come in two colors (one associated with each player). When a new cell comes to life, the cell takes on the color of the majority of its neighbors. (Since there must be three neighbors in order for a cell to come to life, there cannot be a tie. There must be a majority) Players alternate turns. On a player's turn, he or she must kill one enemy cell and must change one empty cell to a cell of their own color. They are allowed to create a new cell at the location in which they killed an enemy cell. After a player's turn, the Life cells go through one generation, and the play moves to the next player. There is always exactly one generation of evolution between separate players' actions. The initial board configuration should be decided beforehand and be symmetric. 
A player is eliminated when they have no cells remaining of their color. This variant of life can well be adapted to multiple players. However, with more than two players, it is possible that a newborn cell will have three neighbors belonging to three separate players. In that case, the newborn cell is neutral, and does not belong to anyone. It's possible even, to create patterns which emulate logic gates (and, not, or, etc.) and counters. Building up from these, it was proved that the Game of Life is Turing Complete, which means that with a suitable initial pattern, one can do any computation that can be done on any computer. Later, Paul Rendell actually constructed a simple Turing Machine as a proof of concept, which can be found here. Although Rendell's Turing Machine is fairly small, it contains all of the ideas necessary to create larger machines that could actually do meaningful calculations. One of the patterns in Jason Summers' collection will compute prime numbers, and another will compute twin primes (two primes that only differ by adding or subtracting 2). A very far zoom out of Paul Rendell's Turing Machine:
{"url":"http://www.math.cornell.edu/~lipa/mec/lesson6.html","timestamp":"2014-04-17T03:59:26Z","content_type":null,"content_length":"16246","record_id":"<urn:uuid:a4077afb-e80f-46e1-adeb-1fcea2214fbd>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Are black holes explained by complex analysis? Exactly. P=m/V, so finite mass and zero volume gives infinite density. Why do you think infinite mass is required? But, my point is that there is with zero volumn! If zero-volumn is considered, then whatever it may be called just doesn't exist. This is "General Astronomy" so go to some other more advanced forums, or many recent (last 5 years) papers published by "well-known" astrophysicists and see what they have to say about "zero-point" singularities This seems to tie in with your being wrong about infinities not working in math, which they do. Do you not get that 3/0 = infinity? Yes, I get that one. How about ∞/∞? Is that = to 1? Is that a prime number? How about ∞-1/∞? Same number? Mathematicians can can play around with many "infinity tricks" but as once said, Infinities might work in math but not in physics. Black Holes are physical entities with only 4 "detectible" properties, as opposed to Wheeler's original 3 "no-hair" properties. I have (long ago) posted at least 3-4 separate posts regarding the four properties, yet still today books, and PF posts, ignore that. And the gravity of a BH is proportional to its mass. If we have to use any infinities to describe any properties of a BH, then all that can be said is that " we don't know ". (I saw that on TV)
{"url":"http://www.physicsforums.com/showthread.php?t=684184","timestamp":"2014-04-20T00:56:12Z","content_type":null,"content_length":"81993","record_id":"<urn:uuid:fed26f76-2c8f-4945-9988-ee541fe6c42d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
Space efficient learning algorithms - MACHINE LEARNING , 1995 "... Within the framework of pac-learning, we explore the learnability of concepts from samples using the paradigm of sample compression schemes. A sample compression scheme of size k for a concept class C ` 2 X consists of a compression function and a reconstruction function. The compression function r ..." Cited by 61 (3 self) Add to MetaCart Within the framework of pac-learning, we explore the learnability of concepts from samples using the paradigm of sample compression schemes. A sample compression scheme of size k for a concept class C ` 2 X consists of a compression function and a reconstruction function. The compression function receives a finite sample set consistent with some concept in C and chooses a subset of k examples as the compression set. The reconstruction function forms a hypothesis on X from a compression set of k examples. For any sample set of a concept in C the compression set produced by the compression function must lead to a hypothesis consistent with the whole original sample set when it is fed to the reconstruction function. We demonstrate that the existence of a sample compression scheme of fixed-size for a class C is sufficient to ensure that the class C is pac-learnable. Previous work has shown that a class is pac-learnable if and only if the Vapnik-Chervonenkis (VC) dimension of the class i... , 1995 "... This paper addresses the problem of learning boolean functions in query and mistake-bound ..." "... In this paper we analyze the PAC learning abilities of several simple iterative algorithms for learning linear threshold functions, obtaining both positive and negative results. We show that Littlestone’s Winnow algorithm is not an efficient PAC learning algorithm for the class of positive linear th ..." Cited by 20 (9 self) Add to MetaCart In this paper we analyze the PAC learning abilities of several simple iterative algorithms for learning linear threshold functions, obtaining both positive and negative results. We show that Littlestone’s Winnow algorithm is not an efficient PAC learning algorithm for the class of positive linear threshold functions. We also prove that the Perceptron algorithm cannot efficiently learn the unrestricted class of linear threshold functions even under the uniform distribution on boolean examples. However, we show that the Perceptron algorithm can efficiently PAC learn the class of nested functions (a concept class known to be hard for Perceptron under arbitrary distributions) under the uniform distribution on boolean examples. Finally, we give a very simple Perceptron-like algorithm for learning origin-centered halfspaces under the uniform distribution on the unit sphere in R^n. Unlike the Perceptron algorithm, which cannot learn in the presence of classification noise, the new algorithm can learn in the presence of monotonic noise (a generalization of classification noise). The new algorithm is significantly faster than previous algorithms in both the classification and monotonic noise settings. - Journal of the ACM , 1993 "... this paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that ..." 
Cited by 10 (3 self) Add to MetaCart this paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that learning algorithm can hold in its memory as it attempts to This work was facilitated by an international agreement under NSF Grant 9119540. - In Proceedings of COLT , 2006 "... We consider two well-studied problems regarding attribute efficient learning: learning decision lists and learning parity functions. First, we give an algorithm for learning decision lists of length k over n variables using 2 . This is the first algorithm for learning decision lists that h ..." Cited by 9 (1 self) Add to MetaCart We consider two well-studied problems regarding attribute efficient learning: learning decision lists and learning parity functions. First, we give an algorithm for learning decision lists of length k over n variables using 2 . This is the first algorithm for learning decision lists that has both subexponential sample complexity and subexponential running time in the relevant parameters. Our approach establishes a relationship between attribute efficient learning and polynomial threshold functions and is based on a new construction of low degree, low weight polynomial threshold functions for decision lists. For a wide range of parameters our construction matches a lower bound due to Beigel for decision lists and gives an essentially optimal tradeoff between polynomial threshold function degree and weight. , 1992 "... as a testbed for designing intelligent agents. The system consists of an overall agent architecture and five components within the architecture. The five components are: 1. Goal-Directed Learning (GDL), a decision-theoretic method for selecting learning goals. 2. Probabilistic Bias Evaluation (PBE) ..." Cited by 6 (2 self) Add to MetaCart as a testbed for designing intelligent agents. The system consists of an overall agent architecture and five components within the architecture. The five components are: 1. Goal-Directed Learning (GDL), a decision-theoretic method for selecting learning goals. 2. Probabilistic Bias Evaluation (PBE), a technique for using probabilistic background knowledge to select learning biases for the learning goals. 3. Uniquely Predictive Theories (UPTs) and Probability Computation using Independence (PCI), a probabilistic representation and Bayesian inference method for the agent's theories. 4. A probabilistic learning component, consisting of a heuristic search algorithm and a Bayesian method for evaluating proposed theories. 5. A decision-theoretic probabilistic planner, which searches through the probability space defined by the agent's current theory to select the best action. PAGODA is given as input an initial planning goal (its ove , 1993 "... A pac-learning algorithm is d-space bounded, if it stores at most d examples from the sample at any time. We characterize the d-space learnable concept classes. For this purpose we introduce the compression parameter of a concept class C and design our Trial and Error Learning Algorithm. We show ..." Cited by 1 (0 self) Add to MetaCart A pac-learning algorithm is d-space bounded, if it stores at most d examples from the sample at any time. We characterize the d-space learnable concept classes. 
For this purpose we introduce the compression parameter of a concept class C and design our Trial and Error Learning Algorithm. We show: C is d-space learnable if and only if the compression parameter of C is at most d. This learning algorithm does not produce a hypothesis consistent with the whole sample as previous approaches e.g. by Floyd, who presents consistent space bounded learning algorithms, but has to restrict herself to very special concept classes. On the other hand our algorithm needs large samples; the compression parameter appears as exponent in the sample size. We present several examples of polynomial time space bounded learnable concept classes: • all intersection closed concept classes with finite VC-dimension. • convex n-gons in R^2. • halfspaces in R^n. • unions of "... Abstract We make the first progress on two important problems regarding attribute efficient learnability. First, we give an algorithm for learning decision lists of length k over n variables using ..." Add to MetaCart Abstract We make the first progress on two important problems regarding attribute efficient learnability. First, we give an algorithm for learning decision lists of length k over n variables using - in Proc. 13th Conf. on Comp. Learning Theory, 2000 "... We describe a novel family of PAC model algorithms for learning linear threshold functions. The new algorithms work by boosting a simple weak learner and exhibit complexity bounds remarkably similar to those of known online algorithms such as Perceptron and Winnow, thus suggesting that these w ..." Add to MetaCart We describe a novel family of PAC model algorithms for learning linear threshold functions. The new algorithms work by boosting a simple weak learner and exhibit complexity bounds remarkably similar to those of known online algorithms such as Perceptron and Winnow, thus suggesting that these well-studied online algorithms in some sense correspond to instances of boosting. We show that the new algorithms can be viewed as natural PAC analogues of the online p-norm algorithms which have recently been studied by Grove, Littlestone, and Schuurmans [16] and Gentile and Littlestone [15]. As special cases of the algorithm, by taking p = 2 and p = 1 we obtain natural boosting-based PAC analogues of Perceptron and Winnow respectively. The p = 1 case of our algorithm can also be viewed as a generalization (with an improved sample complexity bound) of Jackson and Craven's PAC-model boosting-based algorithm for learning "sparse perceptrons" [20]. The analysis of the generalizatio... "... Abstract. We consider two well-studied problems regarding attribute efficient learning: learning decision lists and learning parity functions. First, we give an algorithm for learning decision lists of length k over n variables using 2^Õ(k^(1/3)) log n examples and time n^Õ(k^(1/3)). This is the first al ..." Add to MetaCart Abstract. We consider two well-studied problems regarding attribute efficient learning: learning decision lists and learning parity functions. First, we give an algorithm for learning decision lists of length k over n variables using 2^Õ(k^(1/3)) log n examples and time n^Õ(k^(1/3)). This is the first algorithm for learning decision lists that has both subexponential sample complexity and subexponential running time in the relevant parameters. Our approach is based on a new construction of low degree, low weight polynomial threshold functions for decision lists.
For a wide range of parameters our construction matches a lower bound due to Beigel for decision lists and gives an essentially optimal tradeoff between polynomial threshold function degree and weight. Second, we give an algorithm for learning an unknown parity function on k out of n variables using O(n^(1−1/k)) examples in poly(n) time. For k = o(log n) this yields the first polynomial time algorithm for learning parity on a superconstant number of variables with sublinear sample complexity. We also give a simple algorithm for learning an unknown size-k parity using O(k log n) examples in n^(k/2) time, which improves on the naive n^k time bound of exhaustive search.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=927873","timestamp":"2014-04-17T01:22:30Z","content_type":null,"content_length":"36701","record_id":"<urn:uuid:d57af404-b7c6-4f20-b2aa-644a320c6f42>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Correction for Multiplicity in Genetic Association Studies of Triads: The Permutational TDT. Ann Hum Genet. Author manuscript; available in PMC Mar 1, 2012. PMCID: PMC3117224 NIHMSID: NIHMS302257. New technology for large-scale genotyping has created new challenges for statistical analysis. Correcting for multiple comparison without discarding true positive results and extending methods to triad studies are among the important problems facing statisticians. We present a one-sample permutation test for testing transmission disequilibrium hypotheses in triad studies, and show how this test can be used for multiple single nucleotide polymorphism (SNP) testing. The resulting multiple comparison procedure is shown in the case of the transmission disequilibrium test to control the familywise error. Furthermore, this procedure can handle multiple possible modes of risk inheritance per SNP. The resulting permutational procedure is shown through simulation of SNP data to be more powerful than the Bonferroni procedure when the SNPs are in linkage disequilibrium. Moreover, permutations implicitly avoid any multiple comparison correction penalties when the SNP has a rare allele. The method is illustrated by analyzing a large candidate gene study of neural tube defects and an independent study of oral clefts, where the smallest adjusted p-values using the permutation procedure are approximately half those of the Bonferroni procedure. We conclude that permutation tests are more powerful for identifying disease-associated SNPs in candidate gene studies and are useful for analysis of triad studies. Keywords: Exchangeable, familywise error rate, linkage disequilibrium, power Advances in technology have led to an increase in large genetic association studies of disease. Along with the ability to look at large numbers of single nucleotide polymorphisms (SNPs) has come the need for improved methods of statistical correction for multiplicity. The Bonferroni procedure is the simplest and most often used method of correction. However, the Bonferroni procedure is well known to be overly conservative in the presence of correlation (Han et al., 2009). Permutation tests implicitly account for correlation through the use of the data vectors, thereby improving power over Bonferroni-type methods. Permutation tests are also the only multiple comparison procedures capable of exact error control for small or moderate sample sizes. Multiple comparison procedures based on permutations have several other attractive features. One is that they implicitly reduce the penalty for comparisons when the events are rare (Westfall & Troendle, 2008), such as when a SNP is too uncommon to produce a small enough p-value to affect the permutational correction for multiplicity. The permutational procedure will essentially consider that SNP not tested, effectively reducing the correction factor. This is an important advantage when some of the SNPs under study have rare alleles. Another advantage of permutation tests is their ability to handle multiple tests of the same hypothesis easily.
It is quite common in genetic association studies for several different modes of inheritance to be considered, typically dominant, recessive, and multiplicative. Each of these modes of inheritance leads to different tests of the null hypothesis. Permutational procedures need only consider the minimum p-value across all tests and SNPs to produce adjusted p-values that account for both the multiple SNPs and multiple tests applied to each SNP. Another statistical challenge is providing improved methods for family-based studies. Some diseases, like birth defects, are well suited to collection of genetic information on triads (case child, mother, father). The use of triads avoids two problems: (1) ascertainment bias inherent in control selection (Schlesselman, 1982) and (2) population stratification where the case groups may contain different proportions of an ethnic group than the control group. Both of these lead to excess type I errors in tests of association using case–control designs (Lee & Wang, 2008). Triads allow methodology conditioned on the parental genotypes that is robust to population stratification. A very common genetic association test for a single bi-allelic locus (e.g., SNP), based only on triad data, is the transmission disequilibrium test (TDT) (Spielman et al., 1993). This test can be obtained as a likelihood ratio test for the child's genotype in a multiplicative model, conditioned on the parental genotype. In this report, we describe a one-sample permutational approach for the inheritance-association hypothesis, and show how it can be used when correcting for multiplicity. The resulting multiple comparison procedure permits testing multiple SNPs and multiple tests of each SNP to allow for different inheritance models. We show that the method strongly controls the familywise error rate (FWE), regardless of sample size. Simulations show the permutational procedure to have more power than the Bonferroni procedure under varying inheritance modes, risk allele proportions, and SNP correlations. We analyze a study of candidate genes in neural tube defect (NTD [MIM #182940]) triads as well as a study of oral cleft (OFC1 [MIM #119530]) triads in Ireland, showing that the smallest adjusted p-values from the permutational procedure can be approximately half that of the Bonferroni procedure. Materials and Methods One-Sample Permutation Test for Pre-Test/Post-Test Design We start with the classical one-sample permutation test that arises from a pre-test post-test design. In this design a single sample of subjects is observed before and after some intervention. Let (Z [i], W[i]) be the measured variable on subject i (pre-intervention, post-intervention), i = 1,...,n. A test to see if the intervention led to different values of the measured variable is based on the values of the differences, D[i] = W[i] − Z[i]. A permutation test of the hypothesis that the distributions of pre-intervention and post-intervention values are identical is motivated by the realization that under the null hypothesis the random variable (Z[i], W[i]), conditioned on the two values {z[i] and w[i]}, takes value (z[i], w[i]) with probability 1/2 and value (w[i], z[i]) with } probability 1/2. In other words to get the distribution of D[i] under the null hypothesis, the labels that tell you the value z[i] represents the pre-intervention value and w[i] represents the post-intervention value can be permuted. 
A full permutation test considers each of the possible datasets (Z[1], W[1]), … (Z[n], W[n]) that could have been obtained given the observed z and w values. For each such dataset, a test statistic, TS, is computed and compared to the observed test statistic, TS^obs. Let TS^r be the test statistic computed from the rth permuted dataset, r = 1, …, M. A p-value is then obtained as the proportion of the M permuted datasets for which TS^r ≥ TS^obs. One-Sample Permutations for Triads Consider the case of testing for genetic association using genotype data on triads. Designate the allele of interest A, and let the other allele be denoted G (it is not important that the alleles actually be A and G). The data from n triads regarding the transmission of these alleles is shown in Table 1. The TDT is based on whether or not the designated allele is transmitted by each heterozygous parent to the case child (Spielman et al., 1993). The null hypothesis is that there is no association between disease and transmission of the designated allele. The standard χ^2 test for this hypothesis has test statistic TS = (b − c)^2/(b + c), which under the null hypothesis has a χ^2_(1) distribution. (Table 1: Frequencies of allele transmission.) The data for the ith triad can be represented as (Z[i], W[i]), where Z[i]^t = (z[i1], z[i2]), z[i1] is an indicator that the first heterozygous parent transmits an A allele to the case child (if there is no heterozygous parent then z[i1] = 0), z[i2] is an indicator that the second heterozygous parent transmits an A allele to the case child (if there is no second heterozygous parent then z[i2] = 0), and the superscript t stands for matrix transpose. The vector W[i] is defined analogously to Z[i], but indicating transmission of G alleles. The null hypothesis is that transmission of the designated allele has no effect on the risk of being a case. This implies that the alleles are exchangeable, or that conditioned on the two values {z[i] and w[i]}, (Z[i], W[i]) takes value (z[i], w[i]) with probability 1/2 and value (w[i], z[i]) with probability 1/2. In other words to get the distribution of TS under the null hypothesis, z[i] and w[i] can be permuted. Equivalently, an exact version of the TDT can be obtained by using the Binomial distribution with p = 0.5 as null distribution for the number of transmitted A alleles from heterozygous parents. A full permutation test of the null hypothesis considers each of the possible permuted datasets (Z[1], W[1]), … (Z[n], W[n]), where n is the number of triads. For each such dataset, the test statistic, TS, is computed and compared to the observed test statistic, TS^obs. Let TS^r be the test statistic computed from the rth permuted dataset, r = 1, …, M. A p-value is then obtained as the proportion of the M permuted datasets for which TS^r ≥ TS^obs. A level α test rule would then be to reject the null hypothesis if the p-value was ≤ α. A full permutation test using this rule has type I error ≤ α, regardless of the sample size (Pesarin, 2001). This is in contrast to the ordinary TDT using the χ^2_(1) null distribution, which only controls the type I error asymptotically. This is the primary advantage of permutation tests in general, although not necessarily in this case. In our experience the χ^2_(1) TDT has reasonable control of the type I error for moderate or larger sampled studies, and so using a permutational (or exact) TDT is not a significant improvement.
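As a concrete illustration of the single-SNP procedure just described, here is a minimal R sketch (not the authors' code; the input z is assumed to be a 0/1 vector with one entry per heterozygous parent, 1 meaning the designated A allele was transmitted to the case child):

tdt_stat <- function(z) {
  b <- sum(z)          # A alleles transmitted by heterozygous parents
  g <- length(z) - b   # G alleles transmitted
  if (b + g == 0) return(0)
  (b - g)^2 / (b + g)
}

permutation_tdt <- function(z, M = 10000) {
  ts_obs <- tdt_stat(z)
  # Swapping the A/G labels of a heterozygous parent replaces z[i] by 1 - z[i],
  # so the permutation distribution is the same as flipping a fair coin per parent.
  ts_perm <- replicate(M, tdt_stat(rbinom(length(z), 1, 0.5)))
  mean(c(ts_obs, ts_perm) >= ts_obs)   # share of the M + 1 statistics at least as large as the observed one
}

exact_tdt <- function(z) {
  # Exact version: under the null the A-transmission count is Binomial(length(z), 0.5)
  binom.test(sum(z), length(z), p = 0.5)$p.value
}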
However, there are other advantages of the permutational version that will be described in the following. In most applications, a full permutation test is not computationally feasible. For example, there are 2^n different permutation datasets for a study with n triads. In these cases a random sample of permuted datasets is selected. For a random permutation test, let TS^r be the test statistic computed from the rth randomly permuted dataset, r = 1, …, M. A p-value is then obtained as the proportion of the M+1 permuted datasets for which TS^r ≥ TS^obs, r = 0, 1, …, M, where TS^0 = TS^obs. The p-value from a random permutation test is an approximation to the p-value from a full permutation test, where we can control the likely error of the approximation by adjusting M, the number of random permutations used. Multivariate Permutations for Triads Suppose now that there are data from n triads on k SNPs. One might want to test the null hypothesis that there is no genetic association of disease with any of the k SNPs. This leads quite naturally to a multivariate permutation test. Suppose for each SNP we have a designated allele, which we denote A, and the other allele is denoted G. The data for the ith triad on the jth SNP can now be represented as (Z[ij], W[ij]), where Z[ij]^t = (z[ij1], z[ij2]), z[ij1] is the indicator of A allele transmission from the first heterozygous parent to the case child (if there is no heterozygous parent then z[ij1] = 0), z[ij2] is the indicator of A allele transmission from the second heterozygous parent to the case child (if there is no second heterozygous parent then z[ij2] = 0). The vector W[ij] is defined analogously to Z[ij], but indicating G allele transmission. Let Z[i]^t = (Z[i1]^t, …, Z[ik]^t) and W[i]^t = (W[i1]^t, …, W[ik]^t) collect the per-SNP vectors for triad i. The null hypothesis is that transmission of the designated allele on any SNP has no effect on the risk of being a case. Many test statistics could be chosen to test this composite or overall null hypothesis. However, a simple and very effective choice is the maximum of the TDT test statistics from each SNP individually. Therefore, denote TS = max{TS[j] : j = 1, …, k}, where TS[j] is the TDT test statistic on SNP j. The null hypothesis implies that the alleles are exchangeable, or that conditioned on the two values {z[i] and w[i]}, (Z[i], W[i]) takes value (z[i], w[i]) with probability 1/2 and value (w[i], z[i]) with probability 1/2. In other words to get the distribution of TS under the null hypothesis, the labels that tell you the value z[i] represents the A allele transmission indicators and w[i] represents the G allele transmission indicators can be permuted. Note that the names A and G are arbitrary so that the method does not depend on the alleles actually being A and G for each SNP, or that the same two alleles are being noticed by different SNPs. Multivariate permutation tests work exactly like univariate permutation tests except that one is permuting a vector (in the case of the TDT even in the single SNP case we were permuting a vector of two indicators, in the multi-SNP case we are permuting a vector of two indicators for each of k SNPs). The multi-SNP case illustrates one of the advantages of permutations. Permutations estimate the exact conditional distribution of the TS under the null hypothesis. In contrast, a parametric approach leaves one trying to estimate the distribution of the maximum of k correlated χ^2_(1) variables. This is not a problem that can be solved without resorting to asymptotics or approximations. Neither asymptotics nor approximations work well as k increases.
An Example Suppose there are n triads on two SNPs. Here we will show in detail what the permutations might look like and how they are obtained. Figure 1 shows the first three triads from a hypothetical dataset. According to the notation given in the previous section, the data vectors for the first three triads can be written out; for the first triad, for example, Z[1]^t = (1, 1, 1, 0) and W[1]^t = (0, 0, 0, 0). The first two rows of the Z and W vectors correspond to the information from the parents about SNP1 transmission, whereas the later two rows correspond to SNP2 transmission. Notice that rows of the Z and W vectors for which both the Z and W component is 0 correspond to nonheterozygous parents. Permutations are made to the corresponding Z and W pairs. Thus, if we let Z[1]* be the permuted Z[1] vector, Z[1]* will either be (1, 1, 1, 0)^t or (0, 0, 0, 0)^t with equal probability. After each Z and W pair are independently permuted, one has a new dataset Z[1]*, W[1]*, …, Z[n]*, W[n]*, which is then used to obtain TS^1. The test statistic for the TDT, given previously and expressed in terms of the Z and W vectors, is (b − c)^2/(b + c), where b is the sum of the Z components (over all of the triads) and c is the sum of the W components. This process is repeated M times to obtain TS^r for r = 1, …, M (Figure 2). (Figure 1 caption: Two SNPs of three hypothetical triads are shown. The first triad has two heterozygous parents for the first SNP with the child inheriting A alleles from each and one heterozygous parent for the second SNP with the child inheriting the A allele from that ...) (Figure 2 caption: Observed and permuted data for the example in Figure 1 are shown. The first row of vectors represent the observed Z and W vectors for the three triads. The arrows represent permuting of the vectors, resulting in either the same order as the observed data ...) Multiple SNPs Consider again the situation described in the previous section where we have data from n triads on k SNPs. Let p[j] be the exact binomial TDT p-value for SNP j, j = 1, …, k. Suppose now that we want to test each of the null hypotheses that the jth SNP has no genetic association with disease for j = 1, …, k. We wish to control the FWE. First we will get adjusted p-values for each hypothesis. A multiple comparison procedure test rule would then be to reject a null hypothesis if the adjusted p-value was ≤ α. The simplest and most commonly used correction for multiple comparison is the Bonferroni procedure. In this case the adjusted p-value from the Bonferroni procedure on the jth SNP is p[j]^B = k · p[j] if the result is less than 1, and equals 1 otherwise. The Bonferroni procedure controls the FWE strongly. Sequential versions of the Bonferroni procedure like the Holm procedure (Holm, 1976), provide improvements. However, in large genetic association studies there are typically few SNPs that survive correction and the improvement over the Bonferroni is small unless the fraction of rejected null hypotheses is substantial. In this paper, we only consider single step procedures that treat all hypotheses without regard to rejection of any other hypotheses. A permutational procedure is easily applied to this problem by slightly modifying the multivariate permutation version of the TDT described in the previous section. Let TS^r = max{TS[j] : j = 1, …, k} be the max test statistic computed from the rth randomly permuted dataset, r = 0, …, M (r = 0 is the observed). In this case one obtains adjusted p-values for the jth SNP hypothesis as p[j]^P = the proportion of the M+1 permuted datasets for which TS^r ≥ TS[j], r = 0, 1, …, M.
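The single-step max-statistic adjustment can be sketched in a few lines of R (again not the authors' code; Z and het are assumed to be n x 2k 0/1 matrices with one row per triad and two columns per SNP, het flagging which parents are heterozygous so that the G-transmission matrix is W = het - Z):

maxT_tdt <- function(Z, het, M = 10000) {
  snp_of_col <- rep(seq_len(ncol(Z) / 2), each = 2)
  tdt_all <- function(Zm) {
    b <- tapply(colSums(Zm), snp_of_col, sum)         # A transmissions per SNP
    g <- tapply(colSums(het - Zm), snp_of_col, sum)   # G transmissions per SNP
    ifelse(b + g == 0, 0, (b - g)^2 / (b + g))
  }
  ts_obs <- tdt_all(Z)
  max_perm <- replicate(M, {
    flip <- runif(nrow(Z)) < 0.5                      # swap Z[i] and W[i] for a random set of triads
    Zp <- Z
    Zp[flip, ] <- het[flip, ] - Z[flip, ]
    max(tdt_all(Zp))
  })
  # adjusted p-value for SNP j: proportion of the M + 1 maximum statistics >= TS[j]
  sapply(ts_obs, function(t) mean(c(max(ts_obs), max_perm) >= t))
}

Flipping a whole row at once keeps the between-SNP correlation of each triad intact, which is exactly what makes this adjustment less conservative than Bonferroni when SNPs are in linkage disequilibrium.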
For this procedure to control the FWE strongly, we need a joint distributional condition (JDC) (Westfall & Troendle, 2008). Joint Distributional Condition: Let H[j] be the null hypothesis for SNP j. Suppose H[r1], …, H[rt] are true null hypotheses for r[1], …, r[t] ∈ {1, …, k}. Then under H[r1] ∩ … ∩ H[rt] the joint distribution of {TS[r1], …, TS[rt]} is obtained by the multivariate permutational distribution. The JDC here says essentially that if the null hypothesis holds for individual SNPs, then it holds in a multivariate sense for those same SNPs taken as a collection. The advantage of the JDC condition is that with the condition, multivariate permutation gives the exact joint distribution of test statistics under the null hypothesis. Without the JDC, the multivariate distribution of test statistics would be unknown even though each one would have a known marginal distribution. When the JDC holds, it is easy to see that the procedure controls the FWE regardless of the true hypotheses and regardless of sample size. For the genetic association tests of multiple SNPs we are considering, we show now that the JDC does hold. The reasoning is that because each H[j], j ∈ {r[1], …, r[t]}, is true we have that the marginal distribution of each TS[j], j ∈ {r[1], …, r[t]}, is given by the permutational distribution. For ease of notation, Z[i] will now represent the subvector consisting of only those components corresponding to SNPs with indices r[1], …, r[t] (W[i] is analogously defined). The only way that the joint distribution of the test statistics would then differ from the multivariate permutational distribution is if the correlation of the subvectors Z[i] and W[i] were different. However, Z[i] consists of Bernoulli components with probability of success 0.5 (along with some components that are constants equal to 0 when there is no corresponding heterozygous parent for that particular SNP; these components are ignored because they have no correlation with any other component). Moreover, W[i] = 1 − Z[i]. One can then see that cov(Z[i]) = cov(W[i]), and thus Z[i] and W[i] are exchangeable. This implies that the multivariate permutation distribution is the joint distribution of the test statistics, and so the JDC holds. The JDC does not always hold. In fact, in the usual two-sample case of case–control comparisons on multiple SNPs, it would not be expected to hold in general. In that case, correction for multiplicity based on multivariate permutation of the cases and controls does not strongly control the familywise error without assuming the JDC. If for any reason the covariance structure of the SNPs was different for cases than controls, the JDC would not hold. One way in which such a differential correlation might arise would be in a particular type of interaction between SNPs. However, in the case of interaction it might be seen as a benefit rather than a drawback that the method might lead to rejection of the null hypothesis for certain SNPs that are part of an interaction, although this would technically be a familywise error for the multiple comparison procedure. Multiple Tests per Hypothesis Often in genetic association studies one would like to use several tests of the same null hypothesis. Typically, dominant, recessive, and multiplicative inheritance models for a given SNP are assumed, leading to different tests of the no association null hypothesis.
Regardless of what inheritance models one might decide to use in testing the hypothesis of no association for the jth SNP with disease, the only necessary modification of the permutational procedure is that the max in the definition of TS^r extends also over the different tests applied to the jth SNP. Then the procedure will adjust for all tests on all SNPs, where one can reject any hypothesis for which any test has adjusted p-value ≤ α. Monte Carlo simulations were used to assess the FWE and power of the permutational multiple testing procedure, and compare it with use of the Bonferroni procedure. A genotype relative risk model was assumed at each SNP j, where ψ[1] represents the risk of disease with one copy of the allele of interest divided by the risk of disease with no copies. Similarly, ψ[2] represents the risk of disease with two copies of the allele of interest divided by the risk of disease with no copies. Correlated multivariate genotype data for triads was generated by first obtaining haplotype data in linkage disequilibrium (LD) for the parents and then applying an inheritance model. To generate haplotype data in LD for the parents of cases, SNPs on the same strand were assumed to be in linked blocks of length n[b], with SNPs between blocks independent. For the purpose of the simulations the proportion of the allele of interest in the population, f, is assumed to be the same for each SNP. Based on the probabilities of each mating type given a diseased case (given in Table 1 of Schaid & Sommer, 1993), the proportion of the allele of interest for parents of cases, p, is calculated. A multivariate haplotype is formed one SNP at a time, starting by determining that the first SNP contains the allele of interest with probability p. Consecutive SNPs in the same block are assumed to have LD parameter D[00] = p[00] − p[0·]p[·0], where p[ij] represents (for i, j = 0, 1, or ·) the proportion of haplotypes that have i alleles of interest for the first SNP and j alleles of interest for the second SNP and where if i or j equals · then the proportion is evaluated for the corresponding SNP to have either allele. Subsequent SNPs in the block are then evaluated conditionally on the previous SNP, based on the probabilities Pr{allele of interest on current SNP | allele of interest on previous SNP} = (D[00] + p^2)/p and Pr{allele of interest on current SNP | no allele of interest on previous SNP} = (p(1 − p) − D[00])/(1 − p). Each parental chromosome is generated independently. Once haplotypes for the parents are generated, genotypes for the children are generated using a random crossover model. In this model, the child inherits from the same strand until a random crossover event occurs with probability p[c], where inheritance is then from the other strand. A total of 500 triads were simulated for each replication of the simulation experiment, with 100 SNPs in blocks of size n[b] = 5. The LD parameter was D[00] = 0.5 for the parental haplotypes within a block. The LD for the case children was controlled by the value of p[c]. Table 2 shows the results of the null simulations for testing at level 0.05. Each simulation consisted of 100,000 replications. Both procedures control the FWE at the desired level of 5%. However, we note that the Bonferroni procedure becomes overly conservative when the correlation between SNPs within blocks is increased (correlation increases as p[c] decreases).
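The within-block haplotype generator described above can be sketched in R as follows (a hypothetical helper, not the authors' simulation code; k is the number of SNPs, nb the block length, p the allele frequency among parents of cases, and D00 must be small enough that both conditional probabilities stay in [0, 1]):

sim_haplotype <- function(k, nb, p, D00) {
  h <- numeric(k)
  for (j in seq_len(k)) {
    if ((j - 1) %% nb == 0) {
      h[j] <- rbinom(1, 1, p)                              # first SNP of a block: unconditional draw
    } else if (h[j - 1] == 1) {
      h[j] <- rbinom(1, 1, (D00 + p^2) / p)                # allele of interest on the previous SNP
    } else {
      h[j] <- rbinom(1, 1, (p * (1 - p) - D00) / (1 - p))  # no allele of interest on the previous SNP
    }
  }
  h
}

Each parent would get two independent haplotypes drawn this way, and the child's transmitted strand would then be built with the random-crossover rule governed by p[c].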
Familywise error (%) of the tests under complete null configurations^^a Table 3 shows the results of simulations under three different nonnull models. The nonnull models correspond to dominant, recessive, and multiplicative disease inheritance patterns. Each simulation consisted of 10,000 replications. In each simulation, there were 5 nonnull SNPs out of 100. The average power over the nonnull SNPs is reported. In the cases of low correlation, there is very little difference in power between the methods. However, in the cases of high correlation, the power is substantially higher in the permutational method. This is in agreement with similar comparisons between the Bonferroni and two-sample permutational procedures for controlling the FWE. Power (%) of the tests under alternative configurations^^a Neural Tube Defects and Clefts in Ireland As part of a candidate gene study, we analyzed 1339 SNPs from 93 genes on 277 complete NTD triads from the Republic of Ireland. Birth defects are ideal candidates for triad studies because the parents are usually easily identified and likely to be willing to agree to participate. Using multiplicity correction is extremely harsh and the Bonferroni corrected p-values are all 1.0 after truncation, despite the smallest unadjusted p-value being 0.002233. The Bonferroni multiplier is 1339, so the Bonferroni adjusted p-value for the most significant SNP is not even close to being below 1.0 as 1339 × 0.002233 = 3.0. In contrast the permutational procedure gives adjusted p-values below 1.0 (smallest permutational adjusted p-value was 0.74), much smaller than the Bonferroni. Thus, although none of the adjusted p-values are close to being significant, it is clear that the adjusted p-values from the Bonferroni procedure are extremely conservative when compared to those from the permutational procedure. As an example to see how the permutational adjustment compares to the Bonferroni when some of the adjusted p-values are relatively small, we present a subanalysis of the above experiment. This is presented as an example to examine the relative size of the adjusted p-values, and not to represent what we consider appropriate control for multiplicity. We consider 18 SNPs from a single gene on 277 complete NTD triads. The p-values adjusted only for the 18 SNPs are presented in Table 4. One sees again that the permutational procedure gives smaller adjusted p-values than the Bonferroni. Moreover, the improvement is quite large when considered as a proportion of the Bonferroni adjusted p-value. For the smallest unadjusted p-value, the permutational procedure yields an adjusted p-value more than 40% smaller than the corresponding Bonferroni adjusted p-value. Unadjusted and adjusted p-values for SNPs from a single gene on NTD triads A final example is given from an independent study. As part of an analysis of oral clefts in Ireland (Carter et al., 2010), 31 SNPs on 250 complete cleft palate only case triads were analyzed. Again, this is presented as an example to compare the adjusted p-values of the procedures, and not to represent what we consider appropriate control for multiplicity, or to represent a complete analysis of cleft cases on these SNPs. The p-values corresponding to the 10 most significant SNPs, adjusted for all 31 SNPs are presented in Table 5. In this case, the smallest adjusted p-value of the permutational procedure is almost 50% smaller than the corresponding Bonferroni adjusted p-value. 
Unadjusted and adjusted p-values for SNPs from a single gene on cleft triads We have shown that permutations can be used to approximate the null distribution under the TDT null hypothesis, and that this leads to a one-sample permutational test. This extends to tests of multiple SNPs by permuting vectors of genotypes. Furthermore, we have shown how an FWE-controlling multiple comparison procedure can be obtained quite simply and have proven strong control of the FWE. The methodology extended easily to allow for multiple tests per hypothesis, which accommodates testing via multiple inheritance models in the same study. The permutational approach given here may also be used in more complex family-based designs that include either affected or unaffected siblings. Simulations show that the permutational procedure has the desired FWE level, and that it has a substantial power advantage over the Bonferroni procedure when the SNPs are in LD. This is important when a segment of a gene with multiple SNPs in LD is being examined. Although the power advantages of our approach compared to the Bonferroni procedure were most notable in cases of LD, there is an additional, perhaps more important, reason why this approach may be valuable in genome-wide association studies (GWAS). Our simulations did not contain rare SNPs, a situation where permutational adjustments are more powerful compared to the Bonferroni. In the future technology will doubtless become available to examine more genetic variants than can be studied currently. This advance will create more problems for statisticians dealing with multiple comparisons. The permutation procedure described here will aid in dealing with these problems because of its ability to handle rare alleles and LD more efficiently than currently used methods (e.g., Bonferroni). We conclude that using a permutational version of the TDT is feasible, and leads to more powerful detection of associated SNPs in candidate gene studies of triads. This research was supported in part by the Intramural Research Program of the NIH, NICHD. We thank Dr. Lawrence Brody for his insightful advice. • Carter TC, Molloy AM, Pangilinan F, Troendle JF, Kirke PN, Conley MR, Orr DJA, Earley M, McKiernan E, Lynn EC, Doyle A, Scott JM, Brody LC, Mills JL. Testing reported associations of genetic risk factors for oral clefts in a large Irish study population. Birth Defects Res A. 2010;88:84–93. [PMC free article] [PubMed] • Han B, Kang HM, Eskin E. Rapid and accurate multiple testing correction and power estimation for millions of correlated markers. PLoS Genet. 2009;5:e1000456. 1–13. [PMC free article] [PubMed] • Holm S. A simple sequentially rejective multiple test procedure. Scand J Stat. 1976;6:65–70. • Lee W-C, Wang L-Y. Simple formulas for gauging the potential impact of population stratification bias. Am J Epi. 2008;167:86–89. [PubMed] • Pesarin F. Multivariate Permutation Tests: With Applications in Biostatistics. Wiley; Chichester: 2001. • Schaid DJ, Sommer SS. Genotype relative risks: Methods for design and analysis of candidate-gene association studies. Am J Hum Genet. 1993;53:1114–1126. [PMC free article] [PubMed] • Schlesselman JJ. Case-Control Studies: Design, Conduct, Analysis. Oxford University Press; New York: 1982. • Spielman RS, McGinnis RE, Ewens WJ. Transmission test for linkage disequilibrium: The insulin gene region and insulin-dependent diabetes mellitus (IDDM) Am J Hum Genet. 1993;52:506–516. [PMC free article] [PubMed] • Westfall PH, Troendle JF. 
Multiple testing with minimal assumptions. Biometr J. 2008;50:745–755. [PMC free article] [PubMed]
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3117224/?tool=pubmed","timestamp":"2014-04-24T08:48:43Z","content_type":null,"content_length":"90388","record_id":"<urn:uuid:b1f88642-c252-46bd-ad83-2d116e969cbd>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Seekonk Algebra Tutor Find a Seekonk Algebra Tutor ...I have excellent qualifications for tutoring ACT Math. I have spent several years teaching high school Math and tutoring students in Math at the middle school, high school, and college level. I have also tutored many students privately to prepare them for the ACT. 25 Subjects: including algebra 1, algebra 2, geometry, statistics ...I always felt that history not only told us a story of the past, but was a direct connection with helping us understand our present and projecting into our future. It represents a panorama of great people and great events. That is how I teach history, as a story of great people and great events and its relevance to the present. 31 Subjects: including algebra 2, English, algebra 1, reading ...As far as my tutoring background, I started in high school when I spent my study halls tutoring student peers that needed the extra help. Then spent after school volunteering at an elementary school to help out children that were falling behind class. Though I did not tutor much during college,... 17 Subjects: including algebra 1, algebra 2, calculus, statistics ...I also scored very high in the areas of Development of Reading Comprehension, Mathematics, Science and Technology/Engineering, and Child Development. I have also recently completed the Communication and Literacy Skills Assessment and scored quite high. These tests satsfy the prerequisites for application for my Massachusetts teaching license. 13 Subjects: including algebra 1, English, writing, grammar ...Of course all of these things require much more labor than memorization or rule-following. But, as the exercise gurus like to say, "no pain, no gain." And that's why we call memorization and rule-following "mechanical," after all: they can be done by a machine. True learning, and true teaching, is always humane, always exciting, always transformative - never mechanical. 47 Subjects: including algebra 2, algebra 1, chemistry, English
{"url":"http://www.purplemath.com/Seekonk_Algebra_tutors.php","timestamp":"2014-04-16T16:48:38Z","content_type":null,"content_length":"23945","record_id":"<urn:uuid:a3e0cdd2-2493-41e6-bedd-69e0598154fc>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
Random Signal Processing, 1/e Dwight F. Mix, University of Arkansas Published August, 1995 by Prentice Hall Engineering/Science/Mathematics Copyright 1996, 450 pp. ISBN 0-02-381852-2 Beginning with excellent background material, this book makes the study of random signal analysis manageable and easily understandable. With comprehensive and detailed coverage of Wiener filtering and Kalman filtering, this book presents a coherent treatment of estimation theory and an in-depth look at detection (or template matching) theory for communication and pattern recognition. 1. Introduction. 2. Probability. 3. Random Variables. 4. Random Vectors. 5. Signal Analysis Techniques. 6. Stochastic Processes. 7. Least-Square Techniques. 8. Optimum Filtering. 9. Template Matching. Appendix A: Table of Normal Curve Areas. Appendix B: Gauss-Jordan Matrix Inversion. Appendix C: Symbolic Differentiation. © Prentice-Hall, Inc. A Pearson Education Company
{"url":"http://www.prenhall.com/allbooks/ptr_0134889673.html","timestamp":"2014-04-18T23:42:50Z","content_type":null,"content_length":"8583","record_id":"<urn:uuid:da0aca19-4998-4d51-b312-fc7e51a463c5>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: ASCII Text x D.T. Harper, III, D.A. Linebarger, "Conflict-Free Vector Access Using a Dynamic Storage Scheme," IEEE Transactions on Computers, vol. 40, no. 3, pp. 276-283, March, 1991. BibTex x @article{ 10.1109/12.76404, author = {D.T. Harper, III and D.A. Linebarger}, title = {Conflict-Free Vector Access Using a Dynamic Storage Scheme}, journal ={IEEE Transactions on Computers}, volume = {40}, number = {3}, issn = {0018-9340}, year = {1991}, pages = {276-283}, doi = {http://doi.ieeecomputersociety.org/10.1109/12.76404}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Computers TI - Conflict-Free Vector Access Using a Dynamic Storage Scheme IS - 3 SN - 0018-9340 EPD - 276-283 A1 - D.T. Harper, III, A1 - D.A. Linebarger, PY - 1991 KW - conflict free vector access; dynamic storage; constant stride; accessing patterns; row rotation; memory; parallel architectures; performance evaluation; storage management. VL - 40 JA - IEEE Transactions on Computers ER - An approach whereby conflict-free access of any constant stride can be made by selecting a storage scheme for each vector based on the accessing patterns used with that vector is considered. By factoring the stride into two components, one a power of 2 and the other relatively prime to 2, a storage scheme that allows conflict-free access to the vector using the specified stride can be synthesized. All such schemes are based on a variation of the row rotation mechanism proposed by P. Budnik and D. Kuck. Each storage scheme is based on two parameters, one describing the type of rotation to perform and the other describing the amount of memory to be rotated as a single block. The performance of the memory under access strides other than the stride used to specify the storage scheme is also considered. Modeling these other strides represents a vector being accessed with multiple strides as well as situations when the stride cannot be determined prior to initializing the vector. Simulation results show that if a single buffer is added to each memory port, then the average performance of the dynamic scheme surpasses that of the interleaved scheme for arbitrary stride [1] P. Budnik and D. Kuck, "The organization and use of parallel memories,"IEEE Trans. Comput., vol. C-20, no. 12, pp. 1566-1569, Dec. 1971. [2] CRAY Research Inc., CRAY X-MP Computer System Functional Description Manual--HR-3005, 1987. [3] CONVEX Computer Corporation, CONVEX Architecture Reference, Oct. 1988. [4] D. Lawrie, "Access and alignment of data in an array processor,"IEEE Trans. Comput., vol. C-24, no. 12, pp. 1145-1155, Dec. 1975. [5] K. Batcher, "The multidimensional access memory in STARAN,"IEEE Trans. Comput., vol. C-26, pp. 174-177, Feb. 1977. [6] R. Swanson, "Interconnections for parallel memories to unscramblep-ordered vectors,"IEEE Trans. Comput., vol. C-23, pp. 1105-1115, Nov. 1974. [7] W. Oed and O. Lange, "On the effective bandwidth of interleaved memories in vector processing systems,"IEEE Trans. Comput., vol. C-34, no. 10, pp. 949-957, Oct. 1985. [8] H. Shapiro, "Theoretical limitations on the efficient use of parallel memories,"IEEE Trans. Comput., vol. C-27, no. 5, pp. 421-428, May 1978. [9] H. Wijshoff and J. van Leeuwen, "The structure of periodic storage schemes for parallel memories,"IEEE Trans. Comput., vol. C-34, no. 6, pp. 501-505, June 1985. [10] H. Wijshoff and J. 
van Leeuwen, "On linear skewing schemes andd-ordered vectors,"IEEE Trans. Comput., vol. C-36, no. 2, pp. 233-239, Feb. 1987. [11] D. Lawrie and C. Vora, "The prime memory system for array access,"IEEE Trans. Comput., vol. C-31, no. 5, pp. 435-442, May 1982. [12] D. T. Harper III and J. R. Jump, "Vector access performance in parallel memories using a skewed storage scheme,"IEEE Trans. Comput., vol. C-36, no. 12, pp. 1440-1449, 1987. [13] A. Ranade, "Interconnection networks and parallel memory organizations for array processing," inProc. Int. Conf. Parallel Processing, 1985, pp. 41-47. [14] I. Niven and H. S. Zuckerman,An Introduction to the Theory of Numbers. New York: Wiley, Dec. 1979. [15] J. B. Fraleigh,A First Course in Abstract Algebra, 3rd. ed. Reading, MA: Addison-Wesley, 1982. [16] A. Norton and E. Melton, "A class of boolean linear transformations for conflict-free power-of-two stride access," inProc. Int. Conf. Parallel Processing, 1987, pp. 247-254. [17] CONVEX Computer Corp., Convex C User's Guide, 3rd. ed., 1989. [18] R. Allen and K. Kennedy, "Automatic translation of FORTRAN to vector form,"ACM Trans. Programming Languages Syst., vol. 9, no. 4, pp. 491-524, 1987. [19] C. D. Polychronopolis,Dependence Analysis for Supercomputing. Boston, MA: Kluwer Academic, 1988. [20] W. R. Cowell and C. P. Thompson, "Transforming Fortran DO loops to improve performance on vector architectures,"ACM Trans. Math. Software, vol. 12, pp. 324-353, Dec. 1986. Index Terms: conflict free vector access; dynamic storage; constant stride; accessing patterns; row rotation; memory; parallel architectures; performance evaluation; storage management. D.T. Harper, III, D.A. Linebarger, "Conflict-Free Vector Access Using a Dynamic Storage Scheme," IEEE Transactions on Computers, vol. 40, no. 3, pp. 276-283, March 1991, doi:10.1109/12.76404 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tc/1991/03/t0276-abs.html","timestamp":"2014-04-17T07:29:13Z","content_type":null,"content_length":"54019","record_id":"<urn:uuid:526c5aae-8296-4395-9952-376bde96811c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Contemporary Mathematics 1985; 528 pp; softcover Volume: 35 Reprint/Revision History: reprinted 1988 ISBN-10: 0-8218-5033-4 ISBN-13: 978-0-8218-5033-6 List Price: US$68 Member Price: US$54.40 Order Code: CONM/35 These are the proceedings of the Summer Research Conference on 4-manifolds held at Durham, New Hampshire, July 1982, under the auspices of the American Mathematical Society and National Science The conference was highlighted by the breakthroughs of Michael Freedman and S. K. Donaldson and by Frank Quinn's completion at the conference of the proof of the annulus conjecture. (We commend the AMS committee, particularly Julius Shaneson, who had the foresight in Spring 1981 to choose the subject, 4-manifolds, in which such remarkable activity was imminent.) Freedman and several others spoke on his work; some of their talks are represented by papers in this volume. Donaldson and Clifford H. Taubes gave surveys of their work on gauge theory and 4-manifolds and their papers are also included herein. There were a variety of other lectures, including Quinn's surprise, and a couple of problem sessions which led to the problem list. A background of basic differential topology is adequate for potential readers. • I. R. Aitchison and J. H. Rubinstein -- Fibered knots and involutions on homotopy spheres • S. Akbulut -- A fake \(4\)-manifold • F. D. Ancel -- Approximating cell-like maps of \(S^4\) by homeomorphisms • S. E. Cappell and J. L. Shaneson -- Linking numbers in branched covers • A. Casson and M. Freedman -- Atomic surgery problems • S. K. Donaldson -- Smooth \(4\)-manifolds with definite intersection form • R. D. Edwards -- The solution of the \(4\)-dimensional annulus conjecture (after Frank Quinn) • R. Fintushel and R. J. Stern -- A \(\mu\)-invariant one homology \(3\)-sphere that bounds an orientable rational ball • R. Fintushel and R. J. Stern -- Another construction of an exotic \(S^1\times S^3\,\#\,S^2\times S^2\) • R. E. Gompf and S. Singh -- On Freedman's reimbedding theorems • J. Harer -- The homology of the mapping class group and its connection to surface bundles over surfaces • A. Kawuchi -- Rochlin invariant and \(\alpha\)-invariant • R. A. Litherland -- Cobordism of satellite knots • R. Mandelbaum -- Complex structures on \(4\)-manifolds • Y. Matsumoto -- Good torus fibrations • P. Melvin -- \(4\)-dimensional oriented bordism • R. T. Miller -- A new proof of the homology torus and annulus theorem • S. P. Plotnick -- Fibered knots in \(S^4\)-twisting, spinning, rolling, surgery, and branching • F. Quinn -- The embedding theorem for towers • F. Quinn -- Smooth structures on \(4\)-manifolds • D. Ruberman -- Concordance of links in \(S^4\) • L. Rudolph -- Constructions of quasipositive knots and links, II • C. H. Taubes -- An introduction to self-dual connections • R. Kirby -- \(4\)-manifold problems
{"url":"http://ams.org/bookstore?fn=20&arg1=conmseries&ikey=CONM-35","timestamp":"2014-04-19T22:35:34Z","content_type":null,"content_length":"16521","record_id":"<urn:uuid:66fb1d03-2e9f-404e-baf2-0718780b8753>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
2 pricing questions September 26th 2011, 02:03 PM 2 pricing questions Bob is preparing an analysis for a new product he has named Seabass. He has decided Seabass should sell at $129.95 retail, based on his market research. Retailers customarily expect a 40% markup and wholesalers a 20% markup (both expressed as a percentage of their selling price). Seabass's variable costs are $40.38 per unit; estimated total added fixed costs are $100,000. At an anticipated sales volume of 5,000 units, will Bob's Seabass make a profit? The Sun Hat Company carries several product lines and wants to streamline their operation. They want to drop any non-performing products. The marketing manager is looking at two products and wants to determine which one is more profitable. Product A is a baseball hat. 2000 hats were sold last year at a price of $8. The variable costs were $4.50. Product B is a sun visor. 6000 visors were sold at $4 each with variable costs of $2. Total overhead was $13,000, of which $6,500 was allocated to the hat and $6,500 was allocated to the visor. Which product was more profitable? September 27th 2011, 09:55 AM Re: 2 pricing questions what did you try?
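A sketch of the standard markup-chain arithmetic, for checking your own attempt (this assumes the markups are taken off each reseller's selling price, exactly as stated, and that the overhead split really is $6,500/$6,500):
Retailer's cost (the wholesaler's selling price): 129.95 × (1 − 0.40) = 77.97
Wholesaler's cost (Bob's selling price): 77.97 × (1 − 0.20) = 62.376
Contribution per unit: 62.376 − 40.38 = 21.996
Break-even volume: 100,000 / 21.996 ≈ 4,546 units
Profit at 5,000 units: 5,000 × 21.996 − 100,000 ≈ $9,980, so yes, Seabass should show a modest profit.
For the second question, compare each product's contribution less its allocated overhead: the hat gives 2000 × (8 − 4.50) − 6,500 = $500, while the visor gives 6000 × (4 − 2) − 6,500 = $5,500, so the visor is the more profitable line.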
{"url":"http://mathhelpforum.com/business-math/188918-2-pricing-questions-print.html","timestamp":"2014-04-18T22:11:21Z","content_type":null,"content_length":"4357","record_id":"<urn:uuid:ac3d06bc-d48d-490c-9920-e5604dc5d6ce>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
%0 Journal Article %D 2007 %T A Computational Study of the Use of an Optimization-Based Method for Simulating Large Multibody Systems %A C. G. Petra %A B. I. Gavrea %A Mihai Anitescu %A F. A. Potra The present work aims at comparing the performance of several quadratic programming (QP) solvers for simulating large-scale frictional rigid-body systems. Traditional time-stepping schemes for simulation of multibody systems are formulated as linear complementarity problems (LCPs) with copositive matrices. Such LCPs are generally solved by means of Lemketype algorithms and solvers such as the PATH solver proved to be robust. However, for large systems, the PATH solver or any other pivotal algorithm becomes unpractical from a computational point of view. The convex relaxation proposed by one of the authors allows the formulation of the integration step as a quadratic program, for which a wide variety of state-of-the-art solvers are available. In what follows we report the results obtained solving that subproblem when using the QP solvers MOSEK, OOQP, TRON, and BLMVM. OOQP is presented with both the symmetric indefinite solver MA27 and our Cholesky reformulation using the CHOLMOD package. We investigate computational performance and address the correctness of the results from a modeling point of view. We conclude that the OOQP solver, particularly with the CHOLMOD linear algebra solver, has predictable performance and memory use patterns and is far more competitive for these problems than are the other solvers. %8 12/2007 %G eng %1 http://www.mcs.anl.gov/papers/P1495.pdf
{"url":"http://www.anl.gov/publications/export/tagged/3658","timestamp":"2014-04-18T03:03:47Z","content_type":null,"content_length":"2279","record_id":"<urn:uuid:8e22e238-a13b-4095-9312-25bc02da4ab3>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
ROC curve for classification from randomForest up vote 2 down vote favorite I am using randomForest package in R platform for classification task. rf_object<-randomForest(data_matrix, label_factor, cutoff=c(k,1-k)) where k ranges from 0.1 to 0.9. pred <- predict(rf_object,test_data_matrix) I have the output from the random forest classifier and I compared it with the labels. So, I have the performance measures like accuracy, MCC, sensitivity, specificity, etc for 9 cutoff points. Now, I want to plot the ROC curve and obtain the area under the ROC curve to see how good the performance is. Most of the packages in R (like ROCR, pROC) require prediction and labels but I have sensitivity (TPR) and specificity (1-FPR). Can any one suggest me if the cutoff method is correct or reliable to produce ROC curve? Do you know any way to obtain ROC curve and area under the curve using TPR and FPR? I also tried to use the following command to train random forest. This way the predictions were continuous and were acceptable to ROCR and pROC packages in R. But, I am not sure if this is correct way to do. Can any one suggest me about this method? rf_object <- randomForest(data_matrix, label_vector) pred <- predict(rf_object, test_data_matrix) Thank you for your time reading my problem! I have spent long time surfing for this. Thank you for your suggestion/advice. r random-forest roc add comment 1 Answer active oldest votes Why don't you output class probabilities ? This way, you have a ranking of your predictions and you can directly input that to any ROC package. m = randomForest(data_matrix, labels) up vote 3 down vote accepted Note that, to use randomForest as a classification tool, labels must be a vector of factor. Thank you jey1401. I figured it out and did it. – James Nov 7 '12 at 8:02 add comment Not the answer you're looking for? Browse other questions tagged r random-forest roc or ask your own question.
{"url":"http://stackoverflow.com/questions/12370670/roc-curve-for-classification-from-randomforest","timestamp":"2014-04-17T04:37:32Z","content_type":null,"content_length":"64667","record_id":"<urn:uuid:2adcef60-c551-4507-b95d-c671d1c7fe67>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Pigeonhole Principle Question November 5th 2010, 12:11 PM #1 Oct 2010 Pigeonhole Principle Question I need help solving this question using the Pigeonhole Principle: How many trees can Farmer Ferd plant within 100 foot square field if they are to be no closer than 10 feet apart? Neglect the thickness of the trees and assume that trees may be planted on the boundary of the field. (1) Draw a square and partition it ever 10 ft on the x and y axises (2) Start putting dots(trees) in the squares making sure the dots are 10ft a part. November 6th 2010, 11:50 AM #2 MHF Contributor Mar 2010
{"url":"http://mathhelpforum.com/discrete-math/162200-pigeonhole-principle-question.html","timestamp":"2014-04-19T23:49:34Z","content_type":null,"content_length":"31812","record_id":"<urn:uuid:38884166-2e74-43ab-b722-10877bc5ec62>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
From W3C Wiki RDF has explicit support for unary and binary relations, but some propositions, for example Chicago is between New York and Los Angeles don't fit naturally in either of those forms. However, NaryRelations can be reduced to binary using various patterns. We can write this as a RecordDescription. For example, in NotationThree: [ :betweenMiddle :Chicago; :betweenThisEnd :NewYork; :betweenThatEnd :LosAngeles ]. This harks somewhat to use of columns in SQL or selector arguments in smalltalk, or scheme records. If you're an ML programmer, you might prefer the CurriedFunction approach. C/lisp programmers would probably prefer an ArgumentList. • Hmm... how much is it like UML composition arrows?* discussion: Boley in www-rdf-logic, Hayes in scl I read this twice now and I don't see the point. -- DanConnolly Taking an example that is likely to have information added imagine modeling "Joe teaches math." :joe :teaches0 :math. to a record description stating "Joe teaches math to grade 3." [ a teaches1; :teacher :joe; :subject :math; :grade :3 ]. In n-ary predicates (for prolog or relational databases), we could write the teaches relationship between joe, math and 3 as: teaches(joe, math, 3) yes, but why would we? This leaves the relationships between joe/math, joe/3 and math/3 unexpressed. Expressing this relation as a RecordDescription forces all those relationships to be explicit because the relationship to the record is named (teacher, subject, grade). Now we have more opportunities to re-use existing properties (because the set of assertions about a teacher and a grade is a subset of assertions about all three). We can also ask questions about a subset of the relation: [ :teacher ?teacher . :grade 3 ] If we extend the teaches relation to include a school: teaches(joe, math, 3, Sunnydale Elementary) we define a new relation that is backwords compatible with the old relation. Assertions in the new relation could answer questions asked of the old relation. In prolog, there is no automatic relationship between the 3-ary teaches and the 4-ary teaches. quite; so? Does the next sentence not say why this matters? Thus, the query teaches(WHO, _, grade3) would not match the 4-ary relation that included a school. A sublanguage could be defined where this "backward compatibility" was assumed and the 4-ary teaches would imply the 3-ary teaches when compiled. In relational databases, this is often the case. An administrator will often add a field to a table without having to re-visit all of the SQL queries that use the previous fields. @@ do relational calculus specifications encourage this? @@ The extension of the teaches1 record to include extra properties ArgumentList and CurriedFunction approaches. Expressing this as an ArgumentList, we'd :joe :teaches ( :math :grade3 :sunnydale ). Once again, the relationships between math/grade3, math/sunnydale and grade3/sunnydale are not explicit. However, 3-ary queries will work on 4-ary data so long as they are left sufficiently open: ?who :teaches ?list. ?list first ?subject. ?list rest ?a. ?a first grade3. will match the 4-ary data as it doesn't expect ?a to have a rest of nil. However, the terse expression of this query: ?who :teaches ( ?subject ?grade ). will fail to match the 4-ary data as it does expect the cddr to be nil. Expressing this data as a CurriedFunction explodes with unintuitive verbosity: :joe :teaches :joeteach2. :math :joeteach2 :joeteach3. :grade3 :joeteach3 :sunnydale. 
which requires some knowledge of the answer (joe), as the predicate joeteach2 is already tailored to its subject, or a very loose query ?who :teaches ?p1. ?subject ?p1 ?grade. which gives an unfortunate (inaccurate) answer that ?grade is joeteach3 when applied to the 4-ary data. (This is true of a naive implementation, but see my discussion about generalizing relations for an example that avoids this problem. -- Dave Menendez) It is safer to express data that may evolve in a RecordDescription. How can we say what data won't evolve?
{"url":"http://www.w3.org/wiki/RecordDescription","timestamp":"2014-04-19T22:14:10Z","content_type":null,"content_length":"19911","record_id":"<urn:uuid:50bec103-4991-4beb-944c-cfbeba0e8b60>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
function problem
November 11th 2012, 10:23 AM #1
Nov 2012
function problem
Hi, I have a function question; can you help me?
Which one is an increasing function for all x values?
(a) y = |x - 7| (b) y = 2x^2 + 9 (c) y = 3x^3 - 11 (d) y = 4x^4 + 2
How can I prove it? Thank you
Re: function problem
C (although it is neither increasing nor decreasing at x = 0)
All the other ones are decreasing for part of the domain.
Re: function problem
You posted this in the pre-calculus forum. Thus there is no way to prove this one way or the other. You can simply draw the graphs and see the answer. But that is hardly a proof. On the other hand, with calculus we can see which one has a non-negative derivative. That would prove it.
Re: function problem
How did you work out that C is increasing? Should I give a value to x?
Re: function problem
Assigning a value to x gives no indication of whether a function is increasing or not. Why don't you just graph the function? To rigorously prove it, we note that its derivative is 9x^2, which is always non-negative. Therefore the function in (C) is always increasing (except when x = 0, where the derivative is zero).
Re: function problem
Technically the function $f(x)=3x^3-11$ is increasing everywhere. The statement that $f$ is an increasing function means that if $a<b$ then $f(a)<f(b)$. That is clearly true in this case. When I made the remark in reply #3 about proof, it was addressing the fact that this is a precalculus forum. There is of course a perfectly good way of proving this. Suppose that $a<b$; then $a^3<b^3$, so $3a^3<3b^3$ and $3a^3-11<3b^3-11$. Proved.
Re: function problem
If we think from your idea, y = 2x^2 + 9 is also increasing, isn't it?
Re: function problem
Re: function problem
Now I understand, thank you.
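To make the comparison between (b) and (c) concrete: for (b), take $a=-2<b=0$; then $2a^2+9=17>9=2b^2+9$, so $y=2x^2+9$ is not increasing for all $x$ (it decreases on the negative axis). For (c), $a<b$ gives $a^3<b^3$, hence $3a^3-11<3b^3-11$, so $y=3x^3-11$ is increasing everywhere.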
{"url":"http://mathhelpforum.com/pre-calculus/207262-function-problem.html","timestamp":"2014-04-16T04:13:42Z","content_type":null,"content_length":"56807","record_id":"<urn:uuid:10b09c1a-6ce9-4f5e-b51c-7e500d27be05>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Theory Seminar
• April 30, 2010 - Ravi Sundaram (Northeastern University) Inspired by the recent "2-NASH is PPAD-complete" breakthrough we show that a number of other problems are hard as well. Here is a list of the problems along with their area of motivation: 1. Fractional Stable Paths Problem - Internet Routing 2. Core of Balanced Games - Economics and Game Theory 3. Scarf's Lemma - Combinatorics 4. Hypergraph Matching - Social Choice and Preference Systems 5. Fractional Bounded Budget Connection Games - Social Networks 6. Strong Fractional Kernel - Graph Theory We will define these problems, state our results, give a flavor of our reductions and conclude with some open problems. Joint work with S. Kintali, L. Poplawski, R. Rajaraman and S. Teng.
• April 23, 2010 - Andrei Lapets (Boston University) The complexity of a restricted variant of the stable paths problem The stable paths problem (SPP) is the problem of selecting next-hop paths (i.e. routes) to some destination for each node in a graph (i.e. network). A stable solution to this problem consists of a selection of routes from which no node would prefer to deviate according to its preferences. BGP can be viewed as a distributed algorithm for solving an SPP instance. We define an even more restricted variant of SPP, which we call f-SPP, and present a succinct proof that the problem of determining whether a stable solution for an f-SPP instance exists is NP-complete. This proof implies that even if each node in a network is restricted to one of only two policies, and each of these two policies is based on a monotonic path weight aggregation function, the problem of determining whether a stable solution exists is still NP-complete. Because f-SPP can also be viewed as a natural multi-dimensional generalization of the shortest path problem, this result may provide clues about the complexities of "multi-dimensional" variants of other efficiently solvable problems involving weighted graphs. These results are based on joint work with Kevin Donnelly and Assaf Kfoury.
• April 16, 2010 - Alexander Shen (LIF Marseille, on leave from IITP RAS, Moscow) Everywhere complex sequences and the probabilistic method Let c<1 be some positive constant. One can show that there exists an ``everywhere complex'' bit sequence (=``Levin sequence'') W such that every substring x of W of length n has complexity at least cn-O(1). There are several ways to prove the existence of such a sequence (a complexity argument; application of the Lovasz Local Lemma, and some others). S. Simpson asked whether there exists an algorithm that produces such a sequence when supplied with a Martin-Löf random oracle. Recently A. Rumyantsev gave a negative answer to this question; we discuss this and related results.
• March 19, 2010 - Georgios Zervas (Boston University) Information Asymmetries in Pay-Per-Bid Auctions Recently, some mainstream e-commerce web sites have begun using "pay-per-bid" auctions to sell items, from video games to bars of gold. In these auctions, bidders incur a cost for placing each bid in addition to (or sometimes in lieu of) the winner's final purchase cost. Thus even when a winner's purchase cost is a small fraction of the item's intrinsic value, the auctioneer can still profit handsomely from the bid fees.
Our work provides novel analyses for these auctions, based on both modeling and datasets derived from auctions at Swoopo.com, the leading pay-per-bid auction site. While previous modeling work predicts profit-free equilibria, we analyze the impact of information asymmetry broadly, as well as Swoopo features such as bidpacks and the Swoop It Now option specifically. We find that even small asymmetries across players (cheaper bids, better estimates of other players' intent, different valuations of items, committed players willing to play "chicken") can increase the auction duration significantly and thus skew the auctioneer's profit disproportionately. We discuss our findings in the context of a dataset of thousands of live auctions we observed on Swoopo, which enables us also to examine behavioral factors, such as the power of aggressive bidding. Ultimately, our findings show that even with fully rational players, if players overlook or are unaware of any of these factors, the result is outsized profits for pay-per-bid auctioneers. Joint work with John Byers and Michael Mitzenmacher.
• March 5, 2010 - Philipp Weis (UMass Amherst) Expressiveness and Succinctness of First-Order Logic on Words and Linear Orders Both expressiveness and succinctness are important characteristics of any logical language. While expressiveness simply asks for the properties that can be expressed in a given logic, succinctness is concerned with the relative size of the formulas of one logic compared to another logic. For this talk, we restrict our attention to the finite-variable fragments of first-order logic, and only consider word structures and linear orders. We present an expressiveness result for first-order logic with two variables on words, proving that there is a strict quantifier alternation hierarchy for this logic. As a consequence of our structural results, the satisfiability problem for this logic, which was previously only known to be NEXP-complete, is NP-complete for alphabets of a bounded size. In the second part of this talk, we will briefly survey known results on succinctness, point out a close connection to the complexity theoretic trade-off between parallel time and the number of processors, and present some open questions and ongoing research.
• April 10, 2009 - Lance Fortnow (Northwestern University) Some Recent Results on Structural Complexity Abstract: We will talk about some recent work on complexity classes. 1. For any constant c, nondeterministic exponential time (NEXP) cannot be computed in polynomial time with n^c queries to SAT and n^c bits of advice. No assumptions are needed for this theorem. 2. A relativized world where NEXP is contained in NP for infinitely many input lengths? 3. If the polynomial-time hierarchy is infinite, one cannot compress the problem of determining whether a clique of size k exists in a graph of n vertices to solving clique problems of size polynomial in k. 4. If the polynomial-time hierarchy is infinite, there are no subexponential-size NP-complete sets (Buhrman-Hitchcock). 1&2 are from an upcoming ICALP paper by Buhrman, Fortnow and Santhanam. 3 is from a STOC '08 paper by Fortnow and Santhanam answering open questions of Bodlaender-Downey-Fellows-Hermelin and Harnik-Naor. 4 is from a CCC '08 paper by Buhrman and Hitchcock building on 3.
• January 14 - Bodo Manthey (Saarland U.) Approximating Multi-Criteria Traveling Salesman Problems Abstract: In many optimization problems, there is more than one objective to be optimized.
In this case, there is no natural notion of a best choice. Instead, the goal is to compute a so-called Pareto curve of solutions, which is the set of all solutions that can be considered optimal. Since computing Pareto curves is very often intractable, one has to be content with approximations to them. We present approximation algorithms for several variants of the multi-criteria traveling salesman problem (TSP), whose approximation performances are independent of the number k of criteria and come close to the approximation ratios obtained for TSP with a single objective function. First, we present a randomized approximation algorithm for multi-criteria Min-ATSP with an approximation ratio of log n + eps, where eps is arbitrarily small. Second, we present a randomized 1/2 - eps approximation algorithm for multi-criteria Max-ATSP. This algorithm can be turned into a 2/3 - eps approximation for multi-criteria Max-STSP. Finally, we devise a deterministic 1/4 approximation algorithm for bi-criteria Max-STSP.
• January 23 - Fred Green (Clark University) Uniqueness of Optimal Mod 3 Circuits for Parity Abstract: We prove that the quadratic polynomials modulo 3 with the largest correlation with parity are unique up to permutation of variables and constant factors. We thus completely characterize the MOD$_3 \circ {\rm AND}_2$ circuits that have the highest correlation with parity, where a MOD$_3 \circ {\rm AND}_2$ circuit is one that has a MOD 3 gate as output, connected to AND gates of fan-in 2 which are in turn connected with the inputs. We also prove that the sub-optimal circuits of this type exhibit a ``stepped'' behavior: any sub-optimal circuits of this class that compute parity must have a correlation at most $\frac{\sqrt{3}}{2}$ times the optimal correlation. This verifies, for the special case of m=3, two conjectures made by Duenez, Miller, Roy and Straubing (Journal of Number Theory, 2006) for general $\mathrm{MAJ} \circ \mathrm{MOD}_m \circ {\rm AND}_2$ circuits for any odd m. The correlation bounds are obtained by studying the associated exponential sums, based on some of the techniques developed by Green (JCSS, 2004). We regard this as a step towards obtaining tighter bounds both for the m not equal to 3 quadratic case as well as for higher degrees. This is joint work with Amitabha Roy.
• January 30 - Arkadev Chattopadhyay (IAS) Part of BU/NEU Theory of Computation seminar. This talk will take place in Boston University. Multiparty Communication Lower Bounds by the Generalized Discrepancy Method Abstract: Obtaining strong lower bounds for the `Number on the Forehead' model of multiparty communication has many applications. For instance, it yields lower bounds on the size of constant-depth circuits and the size of proofs in important proof-systems. Recently, Shi and Zhu (2008), and Sherstov (2008) independently introduced two closely related techniques for proving lower bounds on 2-party communication complexity of certain functions. We extend both techniques to the multiparty setting. Each of them yields a lower bound of n^{\Omega(1)} for the k-party communication complexity of Disjointness as long as k is a constant. No superlogarithmic lower bounds were known previously on the three-party communication complexity of Disjointness. Part of this is joint work with Anil Ada.
• February 6 - Debajyoti Bera (BU) This is also the PhD proposal of Debajyoti Bera. Lower bound techniques for constant-depth quantum circuits Abstract: The circuit model of computing attracted theoretical computer scientists in the 1970s.
There are usually two kinds of questions for any model of computation: what can it efficiently compute, and what it cannot. The initial decades of circuit theory were marked with several new results and techniques and high expectations for the latter question. Opinions differ now about the viability of such techniques in the pursuit of the "ultimate answers", but nevertheless the line of research is active and interesting (and "practical", since real hardware is built using circuits). A quantum circuit is the same model extended to allow circuits with quantum gates and quantum information. In our work, we intend to gauge the power of such circuits. We start with very simple circuits, limited in size and depth and number of extra fixed-value inputs, and try to analyze them. We consider families of circuits with common gates and compare their power and their limits. In my proposal defense, I will mostly talk about a new technique to analyze constant-depth quantum circuits without ancillae. The technique is reminiscent of the algebraic lower bound technique for classical circuits. It gives us a new proof that the parity function cannot be computed by quantum circuits with limited resources. In the interest of the audience, I will also introduce the quantum circuit model, just the bits (or "qubits", since this is a quantum talk) needed for this talk.
• February 20 - Joshua Brody (Dartmouth University) Part of BU/NEU Theory of Computation seminar. This talk will take place in Northeastern University. Some Applications of Communication Complexity Abstract: Communication Complexity has been used to prove lower bounds in a wide variety of domains, from the theoretical (circuit lower bounds) to the practical (streaming algorithms, computer networks). In this talk, we present new results for two communication problems. We begin with a new lower bound for the communication complexity of multiparty pointer jumping. In the second half of the talk, we give several new results for distributed functional monitoring problems. Part of this talk is joint work with Amit Chakrabarti and Chrisil Arackaparambil.
• March 20 - Heiko Roeglin (MIT) k-Means has Polynomial Smoothed Complexity Abstract: The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory as the bounds are still super-polynomial in the number n of data points. We settle the smoothed running time of the k-means method: We show that the smoothed number of iterations is bounded by a polynomial in n and 1/sigma, where sigma is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set. This is joint work with David Arthur (Stanford University) and Bodo Manthey (Saarland University).
• April 10 - Lance Fortnow (Northwestern University) Part of BU/NEU Theory of Computation seminar. This talk will take place in Boston University. Some Recent Results on Structural Complexity Abstract: We will talk about some recent work on complexity classes. 1. For any constant c, nondeterministic exponential time (NEXP) cannot be computed in polynomial time with n^c queries to SAT and n^c bits of advice.
No assumptions are needed for this theorem. 2. A relativized world where NEXP is contained in NP for infinitely many input lengths? 3. If the polynomial-time hierarchy is infinite, one cannot compress the problem of determining whether a clique of size k exists in a graph of n vertices to solving clique problems of size polynomial in k. 4. If the polynomial-time hierarchy is infinite, there are no subexponential-size NP-complete sets (Buhrman-Hitchcock). 1 and 2 are from an upcoming ICALP paper by Buhrman, Fortnow and Santhanam. 3 is from a STOC '08 paper by Fortnow and Santhanam answering open questions of Bodlaender-Downey-Fellows-Hermelin and Harnik-Naor. 4 is from a CCC '08 paper by Buhrman and Hitchcock building on 3.
{"url":"http://www.cs.bu.edu/groups/theory/talk.html","timestamp":"2014-04-18T00:48:19Z","content_type":null,"content_length":"23095","record_id":"<urn:uuid:6b2f0c57-718b-495a-90fa-0543b26d861d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Maris makes and sells purses. She makes $18 on each purse she sells. Which expression shows the amount of money Maris will make from selling (p) purses? 18 + p. 18p. 18 - p. 18 + p + p.
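For reference, the arithmetic behind the choices: selling p purses at $18 each gives total earnings of 18 times p, so the matching expression is 18p.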
{"url":"http://openstudy.com/updates/50c4ebe1e4b066f22e10d412","timestamp":"2014-04-21T15:34:14Z","content_type":null,"content_length":"25342","record_id":"<urn:uuid:d4d14686-4474-4fb4-b645-0b84b02cf707>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
The Water Jugs Problem From This Prolog Life This classic AI problem is described in Artificial Intelligence as follows: "You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring markers on it. There is a tap that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?". E. Rich & K. Knight, Artificial Intelligence, 2nd edition, McGraw-Hill, 1991 This program implements an "environmentally responsible" solution to the water jugs problem. Rather than filling and spilling from an infinite water resource, we conserve a finite initial charge with a third jug (the reservoir). This approach is simpler than the traditional method, because there are only two actions; it is more flexible than the traditional method, because it can solve problems that are constrained by a limited supply from the reservoir. To simulate the infinite version, we use a filled reservoir with a capacity greater than the combined capacities of the jugs, so that the reservoir can never be emptied. "Perfection is achieved not when there is nothing more to add, but when there is nothing more to take away." water_jugs/0 is the entry point. The solution is derived by a simple, breadth-first, state-space search, and translated into a readable format by a DCG. water_jugs :- SmallCapacity = 3, LargeCapacity = 4, Reservoir is SmallCapacity + LargeCapacity + 1, volume( small, Capacities, SmallCapacity ), volume( large, Capacities, LargeCapacity ), volume( reservoir, Capacities, Reservoir ), volume( small, Start, 0 ), volume( large, Start, 0 ), volume( reservoir, Start, Reservoir ), volume( large, End, 2 ), water_jugs_solution( Start, Capacities, End, Solution ), phrase( narrative(Solution, Capacities, End), Chars ), put_chars( Chars ). water_jugs_solution( +Start, +Capacities, +End, ?Solution ) holds when Solution is the terminal 'node' in a state-space search - beginning with a 'start state' in which the water-jugs have Capacities and contain the Start volumes. The terminal node is reached when the water-jugs contain the End volumes. water_jugs_solution( Start, Capacities, End, Solution ) :- solve_jugs( [start(Start)], Capacities, [], End, Solution ). solve_jugs( +Nodes, +Capacities, +Visited, +End, ?Solution ) holds when Solution is the terminal 'node' in a state-space search, beginning with a first 'open' node in Nodes, and terminating when the water-jugs contain the End volumes. Capacities define the capacities of the water-jugs, while Visited is a list of expanded ('closed') node states. The 'breadth-first' operation of solve_jugs is due to the 'new' successor nodes being appended to the 'existing' Nodes. (If the 'existing' nodes were appended to the 'new' nodes, the operation would be depth-first.) solve_jugs( [Node|Nodes], Capacities, Visited, End, Solution ) :- node_state( Node, State ), ( State = End -> Solution = Node ; otherwise -> findall( Successor, successor(Node, Capacities, Visited, Successor), Successors ), append( Nodes, Successors, NewNodes ), solve_jugs( NewNodes, Capacities, [State|Visited], End, Solution ) ). successor( +Node, +Capacities, +Visited, ?Successor ) Successor is a successor of Node, for water-jugs with Capacities, if there is a legal 'transition' from Node's state to Successor's state, and Successor's state is not a member of the Visited states. successor( Node, Capacities, Visited, Successor ) :- node_state( Node, State ), Successor = node(Action,State1,Node), jug_transition( State, Capacities, Action, State1 ), \+ member( State1, Visited ).
Transition Rules jug_transition( +State, +Capacities, ?Action, ?SuccessorState ) holds when Action describes a valid transition, from State to SuccessorState, for water-jugs with Capacities. There are two sorts of Action: • empty_into(Source,Target) valid if Source is not already empty and the combined contents from Source and Target, (in State), are not greater than the capacity of the Target jug. In SuccessorState: Source becomes empty, while the Target jug acquires the combined contents of Source and Target in State. • fill_from(Source,Target) valid if Source is not already empty and the combined contents from Source and Target, (in State), are greater than the capacity of the Target jug. In SuccessorState: the Target jug becomes full, while Source retains the difference between the combined contents of Source and Target, in State, and the capacity of the Target jug. In either case, the contents of the unused jug are unchanged. jug_transition( State0, Capacities, empty_into(Source,Target), State1 ) :- volume( Source, State0, Content0 ), Content0 > 0, jug_permutation( Source, Target, Unused ), volume( Target, State0, Content1 ), volume( Target, Capacities, Capacity ), Content0 + Content1 =< Capacity, volume( Source, State1, 0 ), volume( Target, State1, Content2 ), Content2 is Content0 + Content1, volume( Unused, State0, Unchanged ), volume( Unused, State1, Unchanged ). jug_transition( State0, Capacities, fill_from(Source,Target), State1 ) :- volume( Source, State0, Content0 ), Content0 > 0, jug_permutation( Source, Target, Unused ), volume( Target, State0, Content1 ), volume( Target, Capacities, Capacity ), Content1 < Capacity, Content0 + Content1 > Capacity, volume( Source, State1, Content2 ), volume( Target, State1, Capacity ), Content2 is Content0 + Content1 - Capacity, volume( Unused, State0, Unchanged ), volume( Unused, State1, Unchanged ). Data Abstraction volume( ?Jug, ?State, ?Volume ) holds when Jug ('large', 'small' or 'reservoir') has Volume in State. volume( small, jugs(Small, _Large, _Reservoir), Small ). volume( large, jugs(_Small, Large, _Reservoir), Large ). volume( reservoir, jugs(_Small, _Large, Reservoir), Reservoir ). jug_permutation( ?Source, ?Target, ?Unused ) holds when Source, Target and Unused are a permutation of 'small', 'large' and 'reservoir'. jug_permutation( Source, Target, Unused ) :- select( Source, [small, large, reservoir], Residue ), select( Target, Residue, [Unused] ). node_state( ?Node, ?State ) holds when the contents of the water-jugs at Node are described by State. node_state( start(State), State ). node_state( node(_Transition, State, _Predecessor), State ). Definite Clause Grammar is a DCG presenting water-jugs solutions in a readable format. The grammar is head-recursive, because the 'nodes list' describing the solution has the last node outermost. narrative( start(Start), Capacities, End ) --> "Given three jugs with capacities of:", newline, literal_volumes( Capacities ), "To obtain the result:", newline, literal_volumes( End ), "Starting with:", newline, literal_volumes( Start ), "Do the following:", newline. narrative( node(Transition, Result, Predecessor), Capacities, End ) --> narrative( Predecessor, Capacities, End ), literal_action( Transition, Result ). literal_volumes( Volumes ) --> indent, literal( Volumes ), ";", newline. literal_action( Transition, Result ) --> indent, "- ", literal( Transition ), " giving:", newline, indent, indent, literal( Result ), newline. 
literal( empty_into(From,To) ) --> "Empty the ", literal( From ), " into the ", literal( To ). literal( fill_from(From,To) ) --> "Fill the ", literal( To ), " from the ", literal( From ). literal( jugs(Small,Large,Reservoir) ) --> literal_number( Small ), " gallons in the small jug, ", literal_number( Large ), " gallons in the large jug and ", literal_number( Reservoir ), " gallons in the reservoir". literal( small ) --> "small jug". literal( large ) --> "large jug". literal( reservoir ) --> "reservoir". literal_number( Number, Plus, Minus ) :- number( Number ), number_chars( Number, Chars ), append( Chars, Minus, Plus ). indent --> " ". newline --> "\n". Utility Predicates Load a small library of Puzzle Utilities. :- ensure_loaded( misc ). The code is available as plain text here. The output of the program is: ?- water_jugs. Given three jugs with capacities of: 3 gallons in the small jug, 4 gallons in the large jug and 8 gallons in the reservoir; To obtain the result: 0 gallons in the small jug, 2 gallons in the large jug and 6 gallons in the reservoir; Starting with: 0 gallons in the small jug, 0 gallons in the large jug and 8 gallons in the reservoir; Do the following: - Fill the small jug from the reservoir giving: 3 gallons in the small jug, 0 gallons in the large jug and 5 gallons in the reservoir - Empty the small jug into the large jug giving: 0 gallons in the small jug, 3 gallons in the large jug and 5 gallons in the reservoir - Fill the small jug from the reservoir giving: 3 gallons in the small jug, 3 gallons in the large jug and 2 gallons in the reservoir - Fill the large jug from the small jug giving: 2 gallons in the small jug, 4 gallons in the large jug and 2 gallons in the reservoir - Empty the large jug into the reservoir giving: 2 gallons in the small jug, 0 gallons in the large jug and 6 gallons in the reservoir - Empty the small jug into the large jug giving: 0 gallons in the small jug, 2 gallons in the large jug and 6 gallons in the reservoir
{"url":"http://www.binding-time.co.uk/wiki/index.php/The_Water_Jugs_Problem","timestamp":"2014-04-17T18:33:44Z","content_type":null,"content_length":"42806","record_id":"<urn:uuid:d94a3e15-8d10-454d-bb12-587c11f2e1f9>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
High School Algebra Tutors In Las Vegas Algebra Tutors Pre-Algebra, Algebra 1, and Algebra 2 all require you to master new math skills. Do you find solving equations and word problems difficult in Algebra class? Are the exponents, proportions, and variables of Algebra keeping you up at night? Intercepts, functions, and expressions can be confusing to most Algebra students, but a qualified tutor can clear it all up! Our Algebra tutors are experts in math and specialize in helping students like you understand Algebra. If you are worried about an upcoming Algebra test or fear not passing your Algebra class for the term, getting an Algebra tutor will make all the difference. Pre-algebra - The goal of Pre-algebra is to develop fluency with rational numbers and proportional relationships. Students will: extend their elementary skills and begin to learn algebra concepts that serve as a transition into formal Algebra and Geometry; learn to think flexibly about relationships among fractions, decimals, and percents; learn to recognize and generate equivalent expressions and solve single-variable equations and inequalities; investigate and explore mathematical ideas and develop multiple strategies for analyzing complex situations; analyze situations verbally, numerically, graphically, and symbolically; and apply mathematical skills and make meaningful connections to life's experiences. Algebra I - The main goal of Algebra is to develop fluency in working with linear equations. Students will: extend their experiences with tables, graphs, and equations and solve linear equations and inequalities and systems of linear equations and inequalities; extend their knowledge of the number system to include irrational numbers; generate equivalent expressions and use formulas; simplify polynomials and begin to study quadratic relationships; and use technology and models to investigate and explore mathematical ideas and relationships and develop multiple strategies for analyzing complex situations. Algebra II - A primary goal of Algebra II is for students to conceptualize, analyze, and identify relationships among functions. Students will: develop proficiency in analyzing and solving quadratic functions using complex numbers; investigate and make conjectures about absolute value, radical, exponential, logarithmic and sine and cosine functions algebraically, numerically, and graphically, with and without technology; extend their algebraic skills to compute with rational expressions and rational exponents; work with and build an understanding of complex numbers and systems of equations and inequalities; analyze statistical data and apply concepts of probability using permutations and combinations; and use technology such as graphing calculators. College Algebra – Topics for this course include basic concepts of algebra; linear, quadratic, rational, radical, logarithmic, exponential, and absolute value equations; equations reducible to quadratic form; linear, polynomial, rational, and absolute value inequalities, and complex number system; graphs of linear, polynomial, exponential, logarithmic, rational, and absolute value functions; conic sections; inverse functions; operations and compositions of functions; systems of equations; sequences and series; and the binomial theorem. We make finding a qualified and experienced algebra tutor easy. Every algebra tutor we provide has a college degree in mathematics, science, or a related field of study like accounting. 
Our goal is to provide an expertly skilled math tutor who can make understanding algebra simple and straightforward. Las Vegas Tutors With one of the largest and fastest growing communities of school-age children, we have been proudly helping students find academic success in Las Vegas for many years. Our reputation as a premium service is evident in the hundreds of testimonials we have received from parents, students, and schools across Las Vegas and the surrounding areas. Our highly customized service means that you determine exactly who your tutor will be, where the tutoring will take place, and for how long. We know that Las Vegas parents know their children best, so we work with you to ensure the tutoring is enjoyable and efficient. Results-oriented and compassionate Las Vegas tutors are available now to help your child reach his or her full potential. Our Tutoring Service Every Advanced Learners tutor is a highly qualified, college-degreed, experienced, and fully approved educator. You can feel secure knowing that each tutor has been thoroughly pre-screened and approved. We have stringent requirements for all of our tutors. We require a national background check, a personal interview, and both personal and professional references from each applicant. We select only the very best tutors for our clients to choose from. Your personalized list of matched tutors will include professionals specifically suited to your child's current academic needs. The backgrounds of our tutors are varied and their experience diverse, but the common factor is the passion for learning and education that they all share. As our client, you have the opportunity to review and speak with as many tutors as you wish until you find the right match for your student.
{"url":"http://www.advancedlearners.com/lasvegas/highschool/algebra/tutor/find.aspx","timestamp":"2014-04-18T18:10:35Z","content_type":null,"content_length":"30424","record_id":"<urn:uuid:bf8e2384-6963-4233-9fb2-0556403cb0da>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 15. Programming with monads Golfing practice: association lists Web clients and servers often pass information around as a simple textual list of key-value pairs. The encoding is named application/x-www-form-urlencoded, and it's easy to understand. Each key-value pair is separated by an "&" character. Within a pair, a key is a series of characters, followed by an "=", followed by a value. For example, the string name=Attila+%22The+Hun%22&occupation=Khan encodes two such pairs. We can obviously represent a key as a String, but the HTTP specification is not clear about whether a key must be followed by a value. We can capture this ambiguity by representing a value as a Maybe String. If we use Nothing for a value, then there was no value present. If we wrap a string in Just, then there was a value. Using Maybe lets us distinguish between "no value" and "empty value". Haskell programmers use the name association list for the type [(a, b)], where we can think of each element as an association between a key and a value. The name originates in the Lisp community, where it's usually abbreviated as an alist. We could thus represent the above string as the following Haskell value. -- file: ch15/MovieReview.hs [("name", Just "Attila \"The Hun\""), ("occupation", Just "Khan")] In the section called "Parsing an URL-encoded query string", we'll parse an application/x-www-form-urlencoded string, and represent the result as an alist of [(String, Maybe String)]. Let's say we want to use one of these alists to fill out a data structure. -- file: ch15/MovieReview.hs data MovieReview = MovieReview { revTitle :: String , revUser :: String , revReview :: String } We'll begin by belabouring the obvious with a naive function. -- file: ch15/MovieReview.hs simpleReview :: [(String, Maybe String)] -> Maybe MovieReview simpleReview alist = case lookup "title" alist of Just (Just title@(_:_)) -> case lookup "user" alist of Just (Just user@(_:_)) -> case lookup "review" alist of Just (Just review@(_:_)) -> Just (MovieReview title user review) _ -> Nothing -- no review _ -> Nothing -- no user _ -> Nothing -- no title It only returns a MovieReview if the alist contains all of the necessary values, and they're all non-empty strings. However, the fact that it validates its inputs is its only merit: it suffers badly from the "staircasing" that we've learned to be wary of, and it knows the intimate details of the representation of an alist. Since we're now well acquainted with the Maybe monad, we can tidy up the staircasing. -- file: ch15/MovieReview.hs maybeReview alist = do title <- lookup1 "title" alist user <- lookup1 "user" alist review <- lookup1 "review" alist return (MovieReview title user review) lookup1 key alist = case lookup key alist of Just (Just s@(_:_)) -> Just s _ -> Nothing Although this is much tidier, we're still repeating ourselves. We can take advantage of the fact that the MovieReview constructor acts as a normal, pure function by lifting it into the monad, as we discussed in the section called "Mixing pure and monadic code". -- file: ch15/MovieReview.hs liftedReview alist = liftM3 MovieReview (lookup1 "title" alist) (lookup1 "user" alist) (lookup1 "review" alist) We still have some repetition here, but it is dramatically reduced, and also more difficult to remove. Although using liftM3 tidies up our code, we can't use a liftM-family function to solve this sort of problem in general, because they're only defined up to liftM5 by the standard libraries. We could write variants up to whatever number we pleased, but that would amount to drudgery.
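As an aside, the same lookups can also be done in one pass with mapM. The two helpers below are only a sketch (requireAll and mappedReview are not part of the chapter's MovieReview.hs, though they reuse the lookup1 helper defined above).

requireAll :: [String] -> [(String, Maybe String)] -> Maybe [String]
requireAll keys alist = mapM (\k -> lookup1 k alist) keys

mappedReview :: [(String, Maybe String)] -> Maybe MovieReview
mappedReview alist = do
    [title, user, review] <- requireAll ["title", "user", "review"] alist
    return (MovieReview title user review)

If any key is missing or maps to an empty value, mapM propagates the Nothing, so mappedReview behaves just like the hand-written versions. Note that this trick only applies because all three fields happen to be Strings; for constructors whose fields have different types we need something more general, which is what the chapter turns to next.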
If we had a constructor or pure function that took, say, ten parameters, and decided to stick with the standard libraries you might think we'd be out of luck. Of course, our toolbox isn't yet empty. In Control.Monad, there's a function named ap with an interesting type signature. ghci> :m +Control.Monad ghci> :type ap ap :: (Monad m) => m (a -> b) -> m a -> m b You might wonder who would put a single-argument pure function inside a monad, and why. Recall, however, that all Haskell functions really take only one argument, and you'll begin to see how this might relate to the MovieReview constructor. ghci> :type MovieReview MovieReview :: String -> String -> String -> MovieReview We can just as easily write that type as String -> (String -> (String -> MovieReview)). If we use plain old liftM to lift MovieReview into the Maybe monad, we'll have a value of type Maybe (String -> (String -> (String -> MovieReview))). We can now see that this type is suitable as an argument for ap, in which case the result type will be Maybe (String -> (String -> MovieReview)). We can pass this, in turn, to ap, and continue to chain until we end up with this definition. -- file: ch15/MovieReview.hs apReview alist = MovieReview `liftM` lookup1 "title" alist `ap` lookup1 "user" alist `ap` lookup1 "review" alist We can chain applications of ap like this as many times as we need to, thereby bypassing the liftM family of functions. Another helpful way to look at ap is that it's the monadic equivalent of the familiar ($) operator: think of pronouncing ap as apply. We can see this clearly when we compare the type signatures of the two functions. ghci> :type ($) ($) :: (a -> b) -> a -> b ghci> :type ap ap :: (Monad m) => m (a -> b) -> m a -> m b In fact, ap is usually defined as either liftM2 id or liftM2 ($). Here's a simple representation of a person's phone numbers. -- file: ch15/VCard.hs data Context = Home | Mobile | Business deriving (Eq, Show) type Phone = String albulena = [(Home, "+355-652-55512")] nils = [(Mobile, "+47-922-55-512"), (Business, "+47-922-12-121"), (Home, "+47-925-55-121"), (Business, "+47-922-25-551")] twalumba = [(Business, "+260-02-55-5121")] Suppose we want to get in touch with someone to make a personal call. We don't want their business number, and we'd prefer to use their home number (if they have one) instead of their mobile number. -- file: ch15/VCard.hs onePersonalPhone :: [(Context, Phone)] -> Maybe Phone onePersonalPhone ps = case lookup Home ps of Nothing -> lookup Mobile ps Just n -> Just n Of course, if we use Maybe as the result type, we can't accommodate the possibility that someone might have more than one number that meet our criteria. For that, we switch to a list. -- file: ch15/VCard.hs allBusinessPhones :: [(Context, Phone)] -> [Phone] allBusinessPhones ps = map snd numbers where numbers = case filter (contextIs Business) ps of [] -> filter (contextIs Mobile) ps ns -> ns contextIs a (b, _) = a == b Notice that these two functions structure their case expressions similarly: one alternative handles the case where the first lookup returns an empty result, while the other handles the non-empty ghci> onePersonalPhone twalumba ghci> onePersonalPhone albulena Just "+355-652-55512" ghci> allBusinessPhones nils Haskell's Control.Monad module defines a typeclass, MonadPlus, that lets us abstract the common pattern out of our case expressions. 
-- file: ch15/VCard.hs class Monad m => MonadPlus m where mzero :: m a mplus :: m a -> m a -> m a The value mzero represents an empty result, while mplus combines two results into one. Here are the standard definitions of mzero and mplus for Maybe and lists. -- file: ch15/VCard.hs instance MonadPlus [] where mzero = [] mplus = (++) instance MonadPlus Maybe where mzero = Nothing Nothing `mplus` ys = ys xs `mplus` _ = xs We can now use mplus to get rid of our case expressions entirely. For variety, let's fetch one business and all personal phone numbers. -- file: ch15/VCard.hs oneBusinessPhone :: [(Context, Phone)] -> Maybe Phone oneBusinessPhone ps = lookup Business ps `mplus` lookup Mobile ps allPersonalPhones :: [(Context, Phone)] -> [Phone] allPersonalPhones ps = map snd $ filter (contextIs Home) ps `mplus` filter (contextIs Mobile) ps In these functions, because we know that lookup returns a value of type Maybe, and filter returns a list, it's obvious which version of mplus is going to be used in each case. What's more interesting is that we can use mzero and mplus to write functions that will be useful for any MonadPlus instance. As an example, here's the standard lookup function, which returns a value of type Maybe. -- file: ch15/VCard.hs lookup :: (Eq a) => a -> [(a, b)] -> Maybe b lookup _ [] = Nothing lookup k ((x,y):xys) | x == k = Just y | otherwise = lookup k xys We can easily generalise the result type to any instance of MonadPlus as follows. -- file: ch15/VCard.hs lookupM :: (MonadPlus m, Eq a) => a -> [(a, b)] -> m b lookupM _ [] = mzero lookupM k ((x,y):xys) | x == k = return y `mplus` lookupM k xys | otherwise = lookupM k xys This lets us get either no result or one, if our result type is Maybe; all results, if our result type is a list; or something more appropriate for some other exotic instance of MonadPlus. For small functions, such as those we present above, there's little benefit to using mplus. The advantage lies in more complex code and in code that is independent of the monad in which it executes. Even if you don't find yourself needing MonadPlus for your own code, you are likely to encounter it in other people's projects. The name mplus does not imply addition Even though the mplus function contains the text "plus", you should not think of it as necessarily implying that we're trying to add two values. Depending on the monad we're working in, mplus may implement an operation that looks like addition. For example, mplus in the list monad is implemented as the (++) operator. ghci> [1,2,3] `mplus` [4,5,6] [1,2,3,4,5,6] However, if we switch to another monad, the obvious similarity to addition falls away. ghci> Just 1 `mplus` Just 2 Just 1 Rules for working with MonadPlus Instances of the MonadPlus typeclass must follow a few simple rules, in addition to the usual monad rules. An instance must short circuit if mzero appears on the left of a bind expression. In other words, an expression mzero >>= f must evaluate to the same result as mzero alone. -- file: ch15/MonadPlus.hs mzero >>= f == mzero An instance must short circuit if mzero appears on the right of a sequence expression. -- file: ch15/MonadPlus.hs v >> mzero == mzero Failing safely with MonadPlus When we introduced the fail function in the section called "The Monad typeclass", we took pains to warn against its use: in many monads, it's implemented as a call to error, which has unpleasant consequences. The MonadPlus typeclass gives us a gentler way to fail a computation, without fail or error blowing up in our faces.
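Before moving on, here is a tiny sketch of what failing gently looks like in practice (safeDiv is an illustrative name, not a function from the chapter): a division that returns mzero instead of calling error, and that works at any MonadPlus result type.

import Control.Monad

-- Fail with mzero on a zero divisor instead of crashing with error.
safeDiv :: (MonadPlus m) => Int -> Int -> m Int
safeDiv _ 0 = mzero
safeDiv x y = return (x `div` y)

At the Maybe type, safeDiv 10 2 is Just 5 and safeDiv 10 0 is Nothing; at the list type, the failing case is simply the empty list.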
The rules that we introduced above allow us to introduce an mzero into our code wherever we need to, and computation will short circuit at that point. In the Control.Monad module, the standard function guard packages up this idea in a convenient form. -- file: ch15/MonadPlus.hs guard :: (MonadPlus m) => Bool -> m () guard True = return () guard False = mzero As a simple example, here's a function that takes a number x and computes its value modulo some other number n. If the result is zero, it returns x, otherwise the current monad's mzero. -- file: ch15/MonadPlus.hs x `zeroMod` n = guard ((x `mod` n) == 0) >> return x Adventures in hiding the plumbing In the section called "Using the state monad: generating random values", we showed how to use the State monad to give ourselves access to random numbers in a way that is easy to use. A drawback of the code we developed is that it's leaky: someone who uses it knows that they're executing inside the State monad. This means that they can inspect and modify the state of the random number generator just as easily as we, the authors, can. Human nature dictates that if we leave our internal workings exposed, someone will surely come along and monkey with them. For a sufficiently small program, this may be fine, but in a larger software project, when one consumer of a library modifies its internals in a way that other consumers are not prepared for, the resulting bugs can be among the hardest of all to track down. These bugs occur at a level where we're unlikely to question our basic assumptions about a library until long after we've exhausted all other avenues of inquiry. Even worse, once we leave our implementation exposed for a while, and some well-intentioned person inevitably bypasses our APIs and uses the implementation directly, we create a nasty quandary for ourselves if we need to fix a bug or make an enhancement. Either we can modify our internals, and break code that depends on them; or we're stuck with our existing internals, and must try to find some other way to make the change we need. How can we revise our random number monad so that the fact that we're using the State monad is hidden? We need to somehow prevent our users from being able to call get or put. This is not difficult to do, and it introduces some tricks that we'll reuse often in day-to-day Haskell programming. To widen our scope, we'll move beyond random numbers, and implement a monad that supplies unique values of any kind. The name we'll give to our monad is Supply. We'll provide the execution function, runSupply, with a list of values; it will be up to us to ensure that each one is unique. -- file: ch15/Supply.hs runSupply :: Supply s a -> [s] -> (a, [s]) The monad won't care what the values are: they might be random numbers, or names for temporary files, or identifiers for HTTP cookies. Within the monad, every time a consumer asks for a value, the next action will take the next one from the list and give it to the consumer. Each value is wrapped in a Maybe constructor in case the list isn't long enough to satisfy the demand. -- file: ch15/Supply.hs next :: Supply s (Maybe s) To hide our plumbing, in our module declaration we only export the type constructor, the execution function, and the next action. -- file: ch15/Supply.hs module Supply ( Supply , next , runSupply ) where Since a module that imports the library can't see the internals of the monad, it can't manipulate them. Our plumbing is exceedingly simple: we use a newtype declaration to wrap the existing State monad.
-- file: ch15/Supply.hs import Control.Monad.State newtype Supply s a = S (State [s] a) The s parameter is the type of the unique values we are going to supply, and a is the usual type parameter that we must provide in order to make our type a monad. Our use of newtype for the Supply type and our module header join forces to prevent our clients from using the State monad's get and set actions. Because our module does not export the S data constructor, clients have no programmatic way to see that we're wrapping the State monad, or to access it. At this point, we've got a type, Supply, that we need to make an instance of the Monad type class. We could follow the usual pattern of defining (>>=) and return, but this would be pure boilerplate code. All we'd be doing is wrapping and unwrapping the State monad's versions of (>>=) and return using our S value constructor. Here is how such code would look. -- file: ch15/AltSupply.hs unwrapS :: Supply s a -> State [s] a unwrapS (S s) = s instance Monad (Supply s) where s >>= m = S (unwrapS s >>= unwrapS . m) return = S . return Haskell programmers are not fond of boilerplate, and sure enough, GHC has a lovely language extension that eliminates the work. To use it, we add the following directive to the top of our source file, before the module header. -- file: ch15/Supply.hs {-# LANGUAGE GeneralizedNewtypeDeriving #-} Usually, we can only automatically derive instances of a handful of standard typeclasses, such as Show and Eq. As its name suggests, the GeneralizedNewtypeDeriving extension broadens our ability to derive typeclass instances, and it is specific to newtype declarations. If the type we're wrapping is an instance of any typeclass, the extensions can automatically make our new type an instance of that typeclass as follows. -- file: ch15/Supply.hs deriving (Monad) This takes the underlying type's implementations of (>>=) and return, adds the necessary wrapping and unwrapping with our S data constructor, and uses the new versions of those functions to derive a Monad instance for us. What we gain here is very useful beyond just this example. We can use newtype to wrap any underlying type; we selectively expose only those typeclass instances that we want; and we expend almost no effort to create these narrower, more specialised types. Now that we've seen the GeneralizedNewtypeDeriving technique, all that remains is to provide definitions of next and runSupply. -- file: ch15/Supply.hs next = S $ do st <- get case st of [] -> return Nothing (x:xs) -> do put xs return (Just x) runSupply (S m) xs = runState m xs If we load our module into ghci, we can try it out in a few simple ways. ghci> :load Supply [1 of 1] Compiling Supply ( Supply.hs, interpreted ) Ok, modules loaded: Supply. ghci> runSupply next [1,2,3] Loading package mtl-1.1.0.0 ... linking ... done. (Just 1,[2,3]) ghci> runSupply (liftM2 (,) next next) [1,2,3] ((Just 1,Just 2),[3]) ghci> runSupply (liftM2 (,) next next) [1] ((Just 1,Nothing),[]) We can also verify that the State monad has not somehow leaked out. ghci> :browse Supply data Supply s a next :: Supply s (Maybe s) runSupply :: Supply s a -> [s] -> (a, [s]) ghci> :info Supply data Supply s a -- Defined at Supply.hs:17:8-13 instance Monad (Supply s) -- Defined at Supply.hs:17:8-13 If we want to use our Supply monad as a source of random numbers, we have a small difficulty to face. Ideally, we'd like to be able to provide it with an infinite stream of random numbers. 
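Before we turn to random numbers, here is a quick sketch of how client code can use the monad purely through next and runSupply, without ever seeing the State machinery underneath (label and attach are illustrative names, not part of the chapter's Supply module; the sketch assumes that module compiles as written).

import Supply

-- Pair each element of a list with a freshly supplied value.
label :: [a] -> [s] -> [(a, Maybe s)]
label xs supply = fst (runSupply (mapM attach xs) supply)
  where
    attach x = do
        v <- next
        return (x, v)

For example, label "abc" [1,2,3] gives [('a',Just 1),('b',Just 2),('c',Just 3)]; if the supply runs short, the remaining elements are paired with Nothing.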
We can get a StdGen in the IO monad, but we must “put back” a different StdGen when we're done. If we don't, the next piece of code to get a StdGen will get the same state as we did. This means it will generate the same random numbers as we did, which is potentially catastrophic. From the parts of the System.Random module we've seen so far, it's difficult to reconcile these demands. We can use getStdRandom, whose type ensures that when we get a StdGen, we put one back. ghci> :type getStdRandom getStdRandom :: (StdGen -> (a, StdGen)) -> IO a We can use random to get back a new StdGen when they give us a random number. And we can use randoms to get an infinite list of random numbers. But how do we get both an infinite list of random numbers and a new StdGen? The answer lies with the RandomGen type class's split function, which takes one random number generator, and turns it into two generators. Splitting a random generator like this is a most unusual thing to be able to do: it's obviously tremendously useful in a pure functional setting, but essentially never either necessary or provided by an impure language. Using the split function, we can use one StdGen to generate an infinite list of random numbers to feed to runSupply, while we give the other back to the IO monad. -- file: ch15/RandomSupply.hs import Supply import System.Random hiding (next) randomsIO :: Random a => IO [a] randomsIO = getStdRandom $ \g -> let (a, b) = split g in (randoms a, b) If we've written this function properly, our example ought to print a different random number on each invocation. ghci> :load RandomSupply [1 of 2] Compiling Supply ( Supply.hs, interpreted ) [2 of 2] Compiling RandomSupply ( RandomSupply.hs, interpreted ) Ok, modules loaded: RandomSupply, Supply. ghci> (fst . runSupply next) `fmap` randomsIO Ambiguous occurrence `next' It could refer to either `Supply.next', imported from Supply at RandomSupply.hs:4:0-12 (defined at Supply.hs:32:0) or `System.Random.next', imported from System.Random ghci> (fst . runSupply next) `fmap` randomsIO Ambiguous occurrence `next' It could refer to either `Supply.next', imported from Supply at RandomSupply.hs:4:0-12 (defined at Supply.hs:32:0) or `System.Random.next', imported from System.Random Recall that our runSupply function returns both the result of executing the monadic action and the unconsumed remainder of the list. Since we passed it an infinite list of random numbers, we compose with fst to ensure that we don't get drowned in random numbers when ghci tries to print the result. The pattern of applying a function to one element of a pair, and constructing a new pair with the other original element untouched, is common enough in Haskell code that it has been turned into standard code. In the Control.Arrow module are two functions, first and second, that perform this operation. ghci> :m +Control.Arrow ghci> first (+3) (1,2) ghci> second odd ('a',1) (Indeed, we already encountered second, in the section called “JSON typeclasses without overlapping instances”.) We can use first to golf our definition of randomsIO, turning it into a one-liner. -- file: ch15/RandomGolf.hs import Control.Arrow (first) randomsIO_golfed :: Random a => IO [a] randomsIO_golfed = getStdRandom (first randoms . split) Separating interface from implementation In the previous section, we saw how to hide the fact that we're using a State monad to hold the state for our Supply monad. 
Another important way to make code more modular involves separating its interface—what the code can do—from its implementation—how it does it. The standard random number generator in System.Random is known to be quite slow. If we use our randomsIO function to provide it with random numbers, then our next action will not perform well. One simple and effective way that we could deal with this is to provide Supply with a better source of random numbers. Let's set this idea aside, though, and consider an alternative approach, one that is useful in many settings. We will separate the actions we can perform with the monad from how it works using a typeclass. -- file: ch15/SupplyClass.hs class (Monad m) => MonadSupply s m | m -> s where next :: m (Maybe s) This typeclass defines the interface that any supply monad must implement. It bears careful inspection, since it uses several unfamiliar Haskell language extensions. We will cover each one in the sections that follow. Multi-parameter typeclasses How should we read the snippet MonadSupply s m in the typeclass? If we add parentheses, an equivalent expression is (MonadSupply s) m, which is a little clearer. In other words, given some type variable m that is a Monad, we can make it an instance of the typeclass MonadSupply s. unlike a regular typeclass, this one has a parameter. As this language extension allows a typeclass to have more than one parameter, its name is MultiParamTypeClasses. The parameter s serves the same purpose as the Supply type's parameter of the same name: it represents the type of the values handed out by the next function. Notice that we don't need to mention (>>=) or return in the definition of MonadSupply s, since the type class's context (superclass) requires that a MonadSupply s must already be a Monad. To revisit a snippet that we ignored earlier, | m -> s is a functional dependency, often called a fundep. We can read the vertical bar | as “such that”, and the arrow -> as “uniquely determines”. Our functional dependency establishes a relationship between m and s. The availability of functional dependencies is governed by the FunctionalDependencies language pragma. The purpose behind us declaring a relationship is to help the type checker. Recall that a Haskell type checker is essentially a theorem prover, and that it is conservative in how it operates: it insists that its proofs must terminate. A non-terminating proof results in the compiler either giving up or getting stuck in an infinite loop. With our functional dependency, we are telling the type checker that every time it sees some monad m being used in the context of a MonadSupply s, the type s is the only acceptable type to use with it. If we were to omit the functional dependency, the type checker would simply give up with an error message. It's hard to picture what the relationship between m and s really means, so let's look at an instance of this typeclass. -- file: ch15/SupplyClass.hs import qualified Supply as S instance MonadSupply s (S.Supply s) where next = S.next Here, the type variable m is replaced by the type S.Supply s. Thanks to our functional dependency, the type checker now knows that when it sees a type S.Supply s, the type can be used as an instance of the typeclass MonadSupply s. If we didn't have a functional dependency, the type checker would not be able to figure out the relationship between the type parameter of the class MonadSupply s and that of the type Supply s, and it would abort compilation with an error. 
The definition itself would compile; the type error would not arise until the first time we tried to use it. To strip away one final layer of abstraction, consider the type S.Supply Int. Without a functional dependency, we could declare this an instance of MonadSupply s. However, if we tried to write code using this instance, the compiler would not be able to figure out that the type's Int parameter needs to be the same as the typeclass's s parameter, and it would report an error. Functional dependencies can be tricky to understand, and once we move beyond simple uses, they often prove difficult to work with in practice. Fortunately, the most frequent use of functional dependencies is in situations as simple as ours, where they cause little trouble. If we save our typeclass and instance in a source file named SupplyClass.hs, we'll need to add a module header such as the following. -- file: ch15/SupplyClass.hs {-# LANGUAGE FlexibleInstances, FunctionalDependencies, MultiParamTypeClasses #-} module SupplyClass , S.Supply , S.runSupply ) where The FlexibleInstances extension is necessary so that the compiler will accept our instance declaration. This extension relaxes the normal rules for writing instances in some circumstances, in a way that still lets the compiler's type checker guarantee that it will terminate. Our need for FlexibleInstances here is caused by our use of functional dependencies, but the details are unfortunately beyond the scope of this book. How to know when a language extension is needed If GHC cannot compile a piece of code because it would require some language extension to be enabled, it will tell us which extension we should use. For example, if it decides that our code needs flexible instance support, it will suggest that we try compiling with the -XFlexibleInstances option. A -X option has the same effect as a LANGUAGE directive: it enables a particular extension. Finally, notice that we're re-exporting the runSupply and Supply names from this module. It's perfectly legal to export a name from one module even though it's defined in another. In our case, it means that client code only needs to import the SupplyClass module, without also importing the Supply module. This reduces the number of “moving parts” that a user of our code needs to keep in mind. Programming to a monad's interface Here is a simple function that fetches two values from our Supply monad, formats them as a string, and returns them. -- file: ch15/Supply.hs showTwo :: (Show s) => Supply s String showTwo = do a <- next b <- next return (show "a: " ++ show a ++ ", b: " ++ show b) This code is tied by its result type to our Supply monad. We can easily generalize to any monad that implements our MonadSupply interface by modifying our function's type. Notice that the body of the function remains unchanged. -- file: ch15/SupplyClass.hs showTwo_class :: (Show s, Monad m, MonadSupply s m) => m String showTwo_class = do a <- next b <- next return (show "a: " ++ show a ++ ", b: " ++ show b) The State monad lets us plumb a piece of mutable state through our code. Sometimes, we would like to be able to pass some immutable state around, such as a program's configuration data. We could use the State monad for this purpose, but we could then find ourselves accidentally modifying data that should remain unchanged. Let's forget about monads for a moment and think about what a function with our desired characteristics ought to do. 
It should accept a value of some type e (for environment) that represents the data that we're passing in, and return a value of some other type a as its result. The overall type we want is e -> a. To turn this type into a convenient Monad instance, we'll wrap it in a newtype. -- file: ch15/SupplyInstance.hs newtype Reader e a = R { runReader :: e -> a } Making this into a Monad instance doesn't take much work. -- file: ch15/SupplyInstance.hs instance Monad (Reader e) where return a = R $ \_ -> a m >>= k = R $ \r -> runReader (k (runReader m r)) r We can think of our value of type e as an environment in which we're evaluating some expression. The return action should have the same effect no matter what the environment is, so our version ignores its environment. Our definition of (>>=) is a little more complicated, but only because we have to make the environment—here the variable r—available both in the current computation and in the computation we're chaining into. How does a piece of code executing in this monad find out what's in its environment? It simply has to ask. -- file: ch15/SupplyInstance.hs ask :: Reader e e ask = R id Within a given chain of actions, every invocation of ask will return the same value, since the value stored in the environment doesn't change. Our code is easy to test in ghci. ghci> runReader (ask >>= \x -> return (x * 3)) 2 Loading package old-locale-1.0.0.0 ... linking ... done. Loading package old-time-1.0.0.0 ... linking ... done. Loading package random-1.0.0.0 ... linking ... done. The Reader monad is included in the standard mtl library, which is usually bundled with GHC. You can find it in the Control.Monad.Reader module. The motivation for this monad may initially seem a little thin, because it is most often useful in complicated code. We'll often need to access a piece of configuration information deep in the bowels of a program; passing that information in as a normal parameter would require a painful restructuring of our code. By hiding this information in our monad's plumbing, intermediate functions that don't care about the configuration information don't need to see it. The clearest motivation for the Reader monad will come in Chapter 18, Monad transformers, when we discuss combining several monads to build a new monad. There, we'll see how to gain finer control over state, so that our code can modify some values via the State monad, while other values remain immutable, courtesy of the Reader monad. A return to automated deriving Now that we know about the Reader monad, let's use it to create an instance of our MonadSupply typeclass. To keep our example simple, we'll violate the spirit of MonadSupply here: our next action will always return the same value, instead of always returning a different value. It would be a bad idea to directly make the Reader type an instance of the MonadSupply class, because then any Reader could act as a MonadSupply. This would usually not make any sense. Instead, we create a newtype based on Reader. The newtype hides the fact that we're using Reader internally. We must now make our type an instance of both of the typeclasses we care about. With the GeneralizedNewtypeDeriving extension enabled, GHC will do most of the hard work for us. 
-- file: ch15/SupplyInstance.hs newtype MySupply e a = MySupply { runMySupply :: Reader e a } deriving (Monad) instance MonadSupply e (MySupply e) where next = MySupply $ do v <- ask return (Just v) -- more concise: -- next = MySupply (Just `liftM` ask) Notice that we must make our type an instance of MonadSupply e, not MonadSupply. If we omit the type variable, the compiler will complain. To try out our MySupply type, we'll first create a simple function that should work with any MonadSupply instance. -- file: ch15/SupplyInstance.hs xy :: (Num s, MonadSupply s m) => m s xy = do Just x <- next Just y <- next return (x * y) If we use this with our Supply monad and randomsIO function, we get a different answer every time, as we expect. ghci> (fst . runSupply xy) `fmap` randomsIO ghci> (fst . runSupply xy) `fmap` randomsIO Because our MySupply monad has two layers of newtype wrapping, we can make it easier to use by writing a custom execution function for it. -- file: ch15/SupplyInstance.hs runMS :: MySupply i a -> i -> a runMS = runReader . runMySupply When we apply our xy action using this execution function, we get the same answer every time. Our code remains the same, but because we are executing it in a different implementation of MonadSupply, its behavior has changed. ghci> runMS xy 2 ghci> runMS xy 2 Like our MonadSupply typeclass and Supply monad, almost all of the common Haskell monads are built with a split between interface and implementation. For example, the get and put functions that we introduced as “belonging to” the State monad are actually methods of the MonadState typeclass; the State type is an instance of this class. Similarly, the standard Reader monad is an instance of the MonadReader typeclass, which specifies the ask method. While the separation of interface and implementation that we've discussed above is appealing for its architectural cleanliness, it has important practical applications that will become clearer later. When we start combining monads in Chapter 18, Monad transformers, we will save a lot of effort through the use of GeneralizedNewtypeDeriving and typeclasses. The blessing and curse of the IO monad is that it is extremely powerful. If we believe that careful use of types helps us to avoid programming mistakes, then the IO monad should be a great source of unease. Because the IO monad imposes no restrictions on what we can do, it leaves us vulnerable to all kinds of accidents. How can we tame its power? Let's say that we would like to guarantee to ourselves that a piece of code can read and write files on the local filesystem, but that it will not access the network. We can't use the plain IO monad, because it won't restrict us. Let's create a module that provides a small set of functionality for reading and writing files. -- file: ch15/HandleIO.hs {-# LANGUAGE GeneralizedNewtypeDeriving #-} module HandleIO , Handle , IOMode(..) , runHandleIO , openFile , hClose , hPutStrLn ) where import System.IO (Handle, IOMode(..)) import qualified System.IO Our first approach to creating a restricted version of IO is to wrap it with a newtype. -- file: ch15/HandleIO.hs newtype HandleIO a = HandleIO { runHandleIO :: IO a } deriving (Monad) We do the by-now familiar trick of exporting the type constructor and the runHandleIO execution function from our module, but not the data constructor. This will prevent code running within the HandleIO monad from getting hold of the IO monad that it wraps. 
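To see what this buys us, consider a hypothetical client module (the module name and its contents are invented purely for illustration). Because only the HandleIO type constructor is exported, client code has no way to wrap an arbitrary IO action in HandleIO by itself:

-- file: ch15/Smuggle.hs  (hypothetical, for illustration only)
module Smuggle where

import HandleIO
import System.Directory (removeFile)

-- This definition is rejected by the compiler: the HandleIO data
-- constructor is not in scope, so we cannot smuggle an arbitrary IO
-- action into the restricted monad.
--
-- smuggle :: FilePath -> HandleIO ()
-- smuggle path = HandleIO (removeFile path)

The only IO actions a client can perform in HandleIO are the ones we chose to wrap and export.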
All that remains is for us to wrap each of the actions we want our monad to allow. This is a simple matter of wrapping each IO with a HandleIO data constructor. -- file: ch15/HandleIO.hs openFile :: FilePath -> IOMode -> HandleIO Handle openFile path mode = HandleIO (System.IO.openFile path mode) hClose :: Handle -> HandleIO () hClose = HandleIO . System.IO.hClose hPutStrLn :: Handle -> String -> HandleIO () hPutStrLn h s = HandleIO (System.IO.hPutStrLn h s) We can now use our restricted HandleIO monad to perform I/O. -- file: ch15/HandleIO.hs safeHello :: FilePath -> HandleIO () safeHello path = do h <- openFile path WriteMode hPutStrLn h "hello world" hClose h To run this action, we use runHandleIO. ghci> :load HandleIO [1 of 1] Compiling HandleIO ( HandleIO.hs, interpreted ) Ok, modules loaded: HandleIO. ghci> runHandleIO (safeHello "hello_world_101.txt") Loading package old-locale-1.0.0.0 ... linking ... done. Loading package old-time-1.0.0.0 ... linking ... done. Loading package filepath-1.1.0.0 ... linking ... done. Loading package directory-1.0.0.0 ... linking ... done. Loading package mtl-1.1.0.0 ... linking ... done. ghci> :m +System.Directory ghci> removeFile "hello_world_101.txt" If we try to sequence an action that runs in the HandleIO monad with one that is not permitted, the type system forbids it. ghci> runHandleIO (safeHello "goodbye" >> removeFile "goodbye") Couldn't match expected type `HandleIO a' against inferred type `IO ()' In the second argument of `(>>)', namely `removeFile "goodbye"' In the first argument of `runHandleIO', namely `(safeHello "goodbye" >> removeFile "goodbye")' In the expression: runHandleIO (safeHello "goodbye" >> removeFile "goodbye") Designing for unexpected uses There's one small, but significant, problem with our HandleIO monad: it doesn't take into account the possibility that we might occasionally need an escape hatch. If we define a monad like this, it is likely that we will occasionally need to perform an I/O action that isn't allowed for by the design of our monad. Our purpose in defining a monad like this is to make it easier for us to write solid code in the common case, not to make corner cases impossible. Let's thus give ourselves a way out. The Control.Monad.Trans module defines a “standard escape hatch”, the MonadIO typeclass. This defines a single function, liftIO, which lets us embed an IO action in another monad. ghci> :m +Control.Monad.Trans ghci> :info MonadIO class (Monad m) => MonadIO m where liftIO :: IO a -> m a -- Defined in Control.Monad.Trans instance MonadIO IO -- Defined in Control.Monad.Trans Our implementation of this typeclass is trivial: we just wrap IO with our data constructor. -- file: ch15/HandleIO.hs import Control.Monad.Trans (MonadIO(..)) instance MonadIO HandleIO where liftIO = HandleIO With judicious use of liftIO, we can escape our shackles and invoke IO actions where necessary. -- file: ch15/HandleIO.hs tidyHello :: FilePath -> HandleIO () tidyHello path = do safeHello path liftIO (removeFile path) Automatic derivation and MonadIO We could have had the compiler automatically derive an instance of MonadIO for us by adding the type class to the deriving clause of HandleIO. In fact, in production code, this would be our usual strategy. We avoided that here simply to separate the presentation of the earlier material from that of MonadIO. The disadvantage of hiding IO in another monad is that we're still tied to a concrete implementation. 
If we want to swap HandleIO for some other monad, we must change the type of every action that uses HandleIO. As an alternative, we can create a typeclass that specifies the interface we want from a monad that manipulates files. -- file: ch15/MonadHandle.hs {-# LANGUAGE FunctionalDependencies, MultiParamTypeClasses #-} module MonadHandle (MonadHandle(..)) where import System.IO (IOMode(..)) class Monad m => MonadHandle h m | m -> h where openFile :: FilePath -> IOMode -> m h hPutStr :: h -> String -> m () hClose :: h -> m () hGetContents :: h -> m String hPutStrLn :: h -> String -> m () hPutStrLn h s = hPutStr h s >> hPutStr h "\n" Here, we've chosen to abstract away both the type of the monad and the type of a file handle. To satisfy the type checker, we've added a functional dependency: for any instance of MonadHandle, there is exactly one handle type that we can use. When we make the IO monad an instance of this class, we use a regular Handle. -- file: ch15/MonadHandleIO.hs {-# LANGUAGE FunctionalDependencies, MultiParamTypeClasses #-} import MonadHandle import qualified System.IO import System.IO (IOMode(..)) import Control.Monad.Trans (MonadIO(..), MonadTrans(..)) import System.Directory (removeFile) import SafeHello instance MonadHandle System.IO.Handle IO where openFile = System.IO.openFile hPutStr = System.IO.hPutStr hClose = System.IO.hClose hGetContents = System.IO.hGetContents hPutStrLn = System.IO.hPutStrLn Because any MonadHandle must also be a Monad, we can write code that manipulates files using normal do notation, without caring what monad it will finally execute in. -- file: ch15/SafeHello.hs safeHello :: MonadHandle h m => FilePath -> m () safeHello path = do h <- openFile path WriteMode hPutStrLn h "hello world" hClose h Because we made IO an instance of this type class, we can execute this action from ghci. ghci> safeHello "hello to my fans in domestic surveillance" Loading package old-locale-1.0.0.0 ... linking ... done. Loading package old-time-1.0.0.0 ... linking ... done. Loading package filepath-1.1.0.0 ... linking ... done. Loading package directory-1.0.0.0 ... linking ... done. Loading package mtl-1.1.0.0 ... linking ... done. ghci> removeFile "hello to my fans in domestic surveillance" The beauty of the typeclass approach is that we can swap one underlying monad for another without touching much code, as most of our code doesn't know or care about the implementation. For instance, we could replace IO with a monad that compresses files as it writes them out. Defining a monad's interface through a typeclass has a further benefit. It lets other people hide our implementation in a newtype wrapper, and automatically derive instances of just the typeclasses they want to expose. In fact, because our safeHello function doesn't use the IO type, we can even use a monad that can't perform I/O. This allows us to test code that would normally have side effects in a completely pure, controlled environment. To do this, we will create a monad that doesn't perform I/O, but instead logs every file-related event for later processing. -- file: ch15/WriterIO.hs data Event = Open FilePath IOMode | Put String String | Close String | GetContents String deriving (Show) Although we already developed a Logger type in the section called “Using a new monad: show your work!”, here we'll use the standard, and more general, Writer monad. Like other mtl monads, the API provided by Writer is defined in a typeclass, in this case MonadWriter. Its most useful method is tell, which logs a value. 
ghci> :m +Control.Monad.Writer ghci> :type tell tell :: (MonadWriter w m) => w -> m () The values we log can be of any Monoid type. Since the list type is a Monoid, we'll log to a list of Event. We could make Writer [Event] an instance of MonadHandle, but it's cheap, easy, and safer to make a special-purpose monad. -- file: ch15/WriterIO.hs newtype WriterIO a = W { runW :: Writer [Event] a } deriving (Monad, MonadWriter [Event]) Our execution function simply removes the newtype wrapper we added, then calls the normal Writer monad's execution function. -- file: ch15/WriterIO.hs runWriterIO :: WriterIO a -> (a, [Event]) runWriterIO = runWriter . runW When we try this code out in ghci, it gives us a log of the function's file activities. ghci> :load WriterIO [1 of 3] Compiling MonadHandle ( MonadHandle.hs, interpreted ) [2 of 3] Compiling SafeHello ( SafeHello.hs, interpreted ) [3 of 3] Compiling WriterIO ( WriterIO.hs, interpreted ) Ok, modules loaded: SafeHello, MonadHandle, WriterIO. ghci> runWriterIO (safeHello "foo") ((),[Open "foo" WriteMode,Put "foo" "hello world",Put "foo" "\n",Close "foo"]) The writer monad and lists The writer monad uses the monoid's mappend function every time we use tell. Because mappend for lists is (++), lists are not a good practical choice for use with Writer: repeated appends are expensive. We use lists above purely for simplicity. In production code, if you want to use the Writer monad and you need list-like behaviour, use a type with better append characteristics. One such type is the difference list, which we introduced in the section called “Taking advantage of functions as data”. You don't need to roll your own difference list implementation: a well tuned library is available for download from Hackage, the Haskell package database. Alternatively, you can use the Seq type from the Data.Sequence module, which we introduced in the section called “General purpose sequences”. If we use the typeclass approach to restricting IO, we may still want to retain the ability to perform arbitrary I/O actions. We might try adding MonadIO as a constraint on our typeclass. -- file: ch15/MonadHandleIO.hs class (MonadHandle h m, MonadIO m) => MonadHandleIO h m | m -> h instance MonadHandleIO System.IO.Handle IO tidierHello :: (MonadHandleIO h m) => FilePath -> m () tidierHello path = do safeHello path liftIO (removeFile path) This approach has a problem, though: the added MonadIO constraint loses us the ability to test our code in a pure environment, because we can no longer tell whether a test might have damaging side effects. The alternative is to move this constraint from the typeclass, where it “infects” all functions, to only those functions that really need to perform I/O. -- file: ch15/MonadHandleIO.hs tidyHello :: (MonadIO m, MonadHandle h m) => FilePath -> m () tidyHello path = do safeHello path liftIO (removeFile path) We can use pure property tests on the functions that lack MonadIO constraints, and traditional unit tests on the rest. Unfortunately, we've substituted one problem for another: we can't invoke code with both MonadIO and MonadHandle constraints from code that has the MonadHandle constraint alone. If we find that somewhere deep in our MonadHandle-only code, we really need the MonadIO constraint, we must add it to all the code paths that lead to this point. Allowing arbitrary I/O is risky, and has a profound effect on how we develop and test our code. 
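To make that trade-off concrete, here is a sketch of a pure check built on the WriterIO monad above. It assumes that the WriterIO and SafeHello modules export runWriterIO, Event(..) and safeHello; the property itself is only an illustration, not a definitive test suite:

-- file: ch15/PureCheck.hs  (an illustrative sketch; module name invented)
module PureCheck where

import WriterIO (runWriterIO, Event(..))
import SafeHello (safeHello)

-- Run safeHello purely and inspect the event log: every handle that is
-- opened gets closed, and writes only go to handles we opened.
wellBehaved :: FilePath -> Bool
wellBehaved path =
    let (_, events) = runWriterIO (safeHello path)
        opened = [p | Open p _ <- events]
        closed = [p | Close p  <- events]
        puts   = [p | Put p _  <- events]
    in opened == closed && all (`elem` opened) puts

A property like this could be handed to QuickCheck over arbitrary file paths, with no risk of touching the real file system.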
When we have to choose between being permissive on the one hand, and easier reasoning and testing on the other, we usually opt for the latter. 1. Using QuickCheck, write a test for an action in the MonadHandle monad, to see if it tries to write to a file handle that is not open. Try it out on safeHello. 2. Write an action that tries to write to a file handle that it has closed. Does your test catch this bug? 3. In a form-encoded string, the same key may appear several times, with or without values, e.g. key&key=1&key=2. What type might you use to represent the values associated with a key in this sort of string? Write a parser that correctly captures all of the information.
{"url":"http://book.realworldhaskell.org/read/programming-with-monads.html","timestamp":"2014-04-19T10:08:34Z","content_type":null,"content_length":"88281","record_id":"<urn:uuid:f2dae7a7-e309-4c69-b7d9-7368ce636cd2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the sandpile torsor?

Let G be a finite undirected connected graph. A divisor on G is an element of the free abelian group Div(G) on the vertices of G (or an integer-valued function on the vertices.) Summing over all vertices gives a homomorphism from Div(G) to Z which we call degree. For each vertex v, let D(v) be the divisor $d_v v - \sum_{w \sim v} w$ where $d_v$ is the valence of v and $v \sim w$ means "$v$ and $w$ are adjacent." Note that D(v) has degree 0. The subgroup of Div(G) generated by the D(v) is called the group of principal divisors. We denote by Pic(G) the quotient of Div(G) by the group generated by the principal divisors, and by Pic^0(G) the kernel of the degree map from Pic(G) to Z.

The notation here suggests that I am really thinking about algebraic curves, not graphs; and that's in some part true! In the work of Matt Baker and his collaborators you can find a really beautiful translation of much of the foundational theory of algebraic curves (Riemann-Roch, Brill-Noether, etc.) into this language. But that's not really what this question is about.

Lots of people study this abelian group, maybe most notably statistical physicists and probabilists who study dynamical processes on graphs. In those communities, Pic^0(G) is called the sandpile group, because of its relation with the abelian sandpile model. But that's also not really what this question is about.

What this question is about is the following fact: by the matrix-tree theorem, the number of spanning trees of G is equal to |Pic^0(G)|. When one encounters a finite set that has the same cardinality as a finite group, but the set does not have any visible natural group structure, one's fancy lightly turns to thoughts of torsors. So:

QUESTION: Is the set S of spanning trees of G naturally a torsor for the sandpile group Pic^0(G)? If so, how can we describe this "sandpile torsor?"

(By "naturally" we mean "functorially" -- in particular, this torsor should be equivariant for the automorphism group of G.)

That question is rather vague, so let me make it more precise, and at the same time try to argue that in at least some cases the question is not ridiculously speculative. The paper "Chip-Firing and Rotor-Routing on Directed Graphs," by (deep breath) Alexander E. Holroyd, Lionel Levine, Karola Meszaros, Yuval Peres, James Propp and David B. Wilson, contains a very interesting construction of a "local" torsor structure for the sandpile group. Suppose G is a planar graph -- or more generally any graph endowed with a cyclic ordering of the edges incident to each vertex. Then the "rotor-router process" described in Holroyd et al gives S the structure of a Pic^0(G)-torsor! (See Def 3.11 - Cor 3.18) This would seem to answer my question; except that the torsor structure they define depends, a priori, on the choice of a vertex of G.

A better way to describe their result is as follows: for each v, let S_v be the set of oriented spanning trees of G with v as root. Then the rotor-router model realizes S_v as a torsor for the group Pic(G) / $\mathbf{Z}$ v. But S_v is naturally identified with S (just forget the orientation) and the natural map Pic^0(G) -> Pic(G) / $\mathbf{Z}$ v is an isomorphism. So for each choice of v, the rotor-router construction endows S with the structure of Pic^0(G)-torsor. Now one can ask:

QUESTION (more precise): Are the torsor structures provided by the rotor-router model in fact independent of v?
Do they in fact provide a Pic^0(G)-torsor structure on S which is functorial for maps compatible with the cyclic edge-orderings, and in particular for automorphisms of G as a planar graph? If this is false in general, is there some nice class of graphs G for which it's true?

REMARK: If you are used to thinking about algebraic curves, like me, your first instinct might be "well, surely if the set of spanning trees is a torsor for Pic^0, it must be Pic^d for some d." But I don't think this can be right. Here's an example: let G be a 4-cycle, which we think of as embedded in the plane. Now the stabilizer of a vertex v in the planar automorphism group of the graph is a group of order 2, generated by a reflection of the square across the diagonal containing v. In particular, you can see instantly that no spanning tree in S is fixed by this group; the involution acts as a double flip on the four spanning trees in S. On the other hand, Pic^d(G) is always going to have a fixed point for this action: namely, the divisor d*v.

REMARK 2: Obviously the correct thing to do is to compute a bunch of examples, which might instantly give negative answers to these questions! But it gets a bit tiring to do this by hand; I checked that everything is OK for the complete graph on 3 vertices (in which case the torsor actually is Pic^1(G)) and then I ran out of steam. But sage has built-in sandpile routines.....

I haven't made any progress on proving or disproving the more precise question --- but I sure had fun playing around with these permutations! I highly recommend it as a holiday diversion. If I haven't made any mistakes, the m.p.q. holds also for the 4-cycle C_4 and the square-with-diagonal K_4-minus-one-edge (at least with the usual cyclic ordering). – Tom Church Dec 16 '11 at 5:13

Cool -- I think I agree about C_4 (but didn't do it carefully enough to be sure) and then I didn't brave the 5-edge graph. The K_4 should be a very interesting example, since it has a big automorphism group to play with! – JSE Dec 16 '11 at 5:32

I thought it is known that the sandpile group acts f.p.f. on the set of spanning trees, thus making it into a principal homogeneous space. Are the torsors you talk about the same objects as principal homogeneous spaces? – Dima Pasechnik Mar 25 '12 at 5:37

something along these lines: each spanning tree $T$ defines a bijection $f_T$ from the set $\mathcal{T}$ of spanning trees to the sandpile group. Take the compositions $f_{T'}^{-1}\circ f_T$, for $T,T'\in\mathcal{T}$. They generate a permutation group on $\mathcal{T}$, isomorphic to the sandpile group. Maybe actually nobody proved anything like this? – Dima Pasechnik Mar 25 '12 at 5:44

Dima -- yes, "torsor" and "principal homogeneous space" are synonyms, at least in my usage. – JSE Mar 25 '12 at 20:35

1 Answer

Answer: The Pic^0(G)-torsor structure is independent of the vertex v if and only if G is a planar ribbon graph. This is the main theorem of "Rotor-routing and spanning trees on planar graphs", by Melody Chan, Thomas Church, and Joshua Grochow, which we just posted to the arXiv. Quoting from the paper:

The proof is based on three key ideas. First, the rotor-routing action of the sandpile group on spanning trees can be partially modeled via rotor-routing on unicycles ([HLMPPW, §3]). This is a related dynamical system with the property that rotor-routing becomes periodic, rather than terminating after finitely many steps.
The second main idea is that the independence of the sandpile action on spanning trees can be described in terms of reversibility of cycles. We introduce the notion of reversibility (previously considered in [HLMPPW] only for planar graphs), and prove that reversibility is a well-defined property of cycles in a ribbon graph. We also establish a relation between reversibility and basepoint-independence. Third, reversibility is closely related to whether a cycle separates the surface corresponding to the ribbon graph into two components. We prove that these conditions are almost equivalent. Moreover, although they are not equivalent for individual cycles, we prove that all cycles are reversible if and only if all cycles are separating, in which case the ribbon graph is planar.

We're grateful to Jordan for the question, which turned out to have a much more interesting answer than we expected! We're also grateful to Math Overflow for providing a venue for this question.

Beautiful answer, Tom! – Andy Putman Aug 14 '13 at 16:24

Rock and roll!! – JSE Aug 15 '13 at 3:25

and how about Eulerian digraphs? In this case the question about the torsor makes perfect sense too. – Dima Pasechnik Aug 26 '13 at 7:32
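For concreteness, the smallest example discussed in the question can be worked out by hand. Deleting one vertex from the Laplacian of the 4-cycle $C_4$ leaves the reduced Laplacian

$$\tilde L = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}, \qquad \det \tilde L = 4,$$

so by the matrix-tree theorem $C_4$ has exactly 4 spanning trees, and the Smith normal form $\mathrm{diag}(1,1,4)$ of $\tilde L$ gives $\mathrm{Pic}^0(C_4) \cong \mathbb{Z}/4\mathbb{Z}$: a cyclic group of order 4 acting on the 4 spanning trees, consistent with the counting used in the question.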
{"url":"http://mathoverflow.net/questions/83552/what-is-the-sandpile-torsor/139384","timestamp":"2014-04-19T17:31:52Z","content_type":null,"content_length":"66687","record_id":"<urn:uuid:135ed14e-b53b-4309-afc0-818c8d23c8a6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Recursion problem

10-06-2010, 12:17 AM

Recursion problem

I want to practice what I learned/didn't learn about recursion. I think I didn't learn much but I don't give up. Here I want to calculate n*n + (n-1)*(n-1) + (n-2)*(n-2) + ... etc. using recursion.

static long sum( int n ) {
    if( n == 0 )
        return 0; //base case
    else {
        return (n*n) + sum(n-1)*(n-1);
    }
}

I thought the above code was correct in order to give me the desired result but it is not. When I input for example 5 the result is 317. I tracked variable n. It does go 5-4-3-2-1-0-1-2-3-4-5 but the result is not 55. Any ideas would be greatly appreciated.

10-06-2010, 12:50 AM

return (n*n) + sum(n-1)*(n-1);

That line doesn't look right. It's saying "figure out n squared, then add that to the sum of the squares up to (n-1) multiplied by (n-1)". Why that last bit?

10-06-2010, 12:59 AM

I thought it was supposed to be "n squared + (n-1) squared + (n-2) squared etc." Thanks, pbrockway2. I thought n*n is calculated once and doesn't go into the recursion. The recursion is a little confusing to me but I'll keep practicing.

10-06-2010, 01:32 AM

I'm not sure if you've got this problem solved or not. (And I am deliberately loathe to provide code as this would defeat the whole purpose of the question.) Here is your method rewritten with a more descriptive method name:

static long sumOfSquaresUpTo(int n) {
    if(n == 0)
        return 0; //base case
    else {
        return (n*n) + sumOfSquaresUpTo(n-1) *(n-1);
    }
}

And I'm wondering if you can see why that last multiplication is wrong. After all the-sum-of-squares-up-to n is just n squared plus the-sum-of-squares-up-to (n-1).

10-06-2010, 03:02 AM

Quote: I'm not sure if you've got this problem solved or not. (And I am deliberately loathe to provide code as this would defeat the whole purpose of the question.) Here is your method rewritten with a more descriptive method name:

static long sumOfSquaresUpTo(int n) {
    if(n == 0)
        return 0; //base case
    else {
        return (n*n) + sumOfSquaresUpTo(n-1) *(n-1);
    }
}

And I'm wondering if you can see why that last multiplication is wrong. After all the-sum-of-squares-up-to n is just n squared plus the-sum-of-squares-up-to (n-1).

My code is working with no problems now (thanks to your help). I understand why *(n-1) shouldn't be there. To me recursion is a little tricky and that's why I need more practicing.

10-06-2010, 06:14 AM

Great. Glad to help.

10-06-2010, 06:35 AM
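For reference, a quick check of the numbers quoted in the thread. With the stray factor removed, the method computes

$$\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6},$$

so sum(5) = 25 + 16 + 9 + 4 + 1 = 55, the expected result. The original version instead computes sum(n) = n^2 + (n-1)*sum(n-1), which gives sum(1)=1, sum(2)=5, sum(3)=19, sum(4)=73 and finally sum(5) = 25 + 4*73 = 317, exactly the value reported in the first post.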
{"url":"http://www.java-forums.org/new-java/33186-recursion-problem-print.html","timestamp":"2014-04-20T09:12:37Z","content_type":null,"content_length":"10444","record_id":"<urn:uuid:2a3ad0c3-44d4-42f5-935e-4417491169ce>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Derivatives question

May 9th 2008, 03:55 AM #1

Derivatives question

Kindly help me with these questions.

Obtain the derivatives for the following functions using the delta method

Obtain the derivatives below using rules of differentiation

May 9th 2008, 04:10 AM #2

In both cases, you're expected to use the definition of the derivative: $f'(x)=\lim_{h\to0}\frac{f(x+h)-f(x)}{h}$. The first derivative should be quite easy to find and for the second one you'll need $(x+h)^3=x^3+3x^2h+3xh^2+h^3$ and $(x+h)^2=x^2+2xh+h^2$

Obtain the derivatives below using rules of differentiation

What's the problem with these ones?

May 9th 2008, 05:49 AM #3

Quote: In both cases, you're expected to use the definition of the derivative: $f'(x)=\lim_{h\to0}\frac{f(x+h)-f(x)}{h}$. The first derivative should be quite easy to find and for the second one you'll need $(x+h)^3=x^3+3x^2h+3xh^2+h^3$ and $(x+h)^2=x^2+2xh+h^2$

What's the problem with these ones?

Thanks for your help. I am so new to this kind of mathematics. With the second ones we need to use some rules which I don't know.

May 9th 2008, 06:21 AM #4

Well, here are some rules that might be helpful:

□ product rule : $\left(u(x)\cdot v(x)\right)'=u'(x)\cdot v(x)+v'(x)\cdot u(x)$
□ quotient rule : $\left(\frac{u(x)}{v(x)}\right)'=\frac{u'(x)\cdot v(x)-v'(x)\cdot u(x)}{v^2(x)}$
□ chain rule : $\left(u\circ v(x)\right)'=v'(x)\cdot u'\circ v(x)$

and some derivatives :

□ $(x^n)'=n\cdot x^{n-1}$
□ $(\ln x)'=\frac{1}{x}$
□ $(\exp x)'=\exp x$
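The functions attached to the original post are not reproduced in this copy, but as an illustration of the delta method using the expansion quoted above, take $f(x)=x^3$:

$$f'(x)=\lim_{h\to0}\frac{(x+h)^3-x^3}{h}=\lim_{h\to0}\frac{3x^2h+3xh^2+h^3}{h}=\lim_{h\to0}\left(3x^2+3xh+h^2\right)=3x^2.$$

The same pattern with $(x+h)^2=x^2+2xh+h^2$ gives $(x^2)'=2x$.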
{"url":"http://mathhelpforum.com/calculus/37746-derivatives-question.html","timestamp":"2014-04-18T09:22:42Z","content_type":null,"content_length":"42129","record_id":"<urn:uuid:669daacd-008c-4e1f-8c5b-f3d6703b5fb7>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Find A Number II

Source: http://www.afunzone.com/adailypuzzle/04-08-09.html

Which three-digit number, made of consecutive digits, like 567, is 2 less than a cube and 2 more than a square?

[Image: finger pointing down, from darrell94590 on 1/2/2006; drawing from Ripley's Believe It Or Not]

123 is two more than 11x11 [121] and two less than 125 [5x5x5].
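For anyone who wants to verify the answer mechanically, a short brute-force search (sketched here in Haskell, though any language would do) confirms that 123 is the only three-digit number with consecutive increasing digits that is 2 less than a cube and 2 more than a square:

-- Brute-force check of the puzzle's answer.
consecutiveDigits :: Int -> Bool
consecutiveDigits n =
    let ds = map (\c -> fromEnum c - fromEnum '0') (show n)
    in and (zipWith (\a b -> b == a + 1) ds (tail ds))

isSquare, isCube :: Int -> Bool
isSquare n = any (\k -> k * k == n) [0 .. n]
isCube   n = any (\k -> k * k * k == n) [0 .. n]

answers :: [Int]
answers = [ n | n <- [100 .. 999]
              , consecutiveDigits n
              , isCube (n + 2)      -- 2 less than a cube
              , isSquare (n - 2) ]  -- 2 more than a square
-- answers == [123]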
{"url":"http://www.jokelibrary.net/education/m2/m4cS2-find_a_number2.html","timestamp":"2014-04-20T16:18:00Z","content_type":null,"content_length":"6306","record_id":"<urn:uuid:991776ab-e486-4408-8fb8-2a64eb73c261>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tutors South San Francisco, CA 94080

Actuary/Former High School Math Teacher

...Sometimes material is taught in our schools in a way that expects the student to learn in a specific manner and speed. That's great when those expectations are met. But learning doesn't always work that way. Math tends to build on previously taught...

Offering 10+ subjects including algebra 1, algebra 2 and calculus
{"url":"http://www.wyzant.com/geo_Pacifica_Math_tutors.aspx?d=20&pagesize=5&pagenum=4","timestamp":"2014-04-19T12:29:10Z","content_type":null,"content_length":"61424","record_id":"<urn:uuid:cdff6e8e-fb43-44f1-a43c-0d6f954d07ab>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US7188069 - Method for valuing intellectual property 1. Field of the Invention The present invention relates to methods for determining the value of an intellectual property asset, and more particularly to valuation methods which rely upon the competitive advantage offered by the asset as their foundation. 2. Description of Prior Art Intellectual property assets have become an increasingly important source of corporate wealth. Estimates of the fraction of total market capitalization in the U.S. comprising intellectual property assets range from fifty percent to over ninety percent, representing trillions of dollars of corporate assets. Valuation is a core task in the management of all assets. Management competency, however, has lagged behind the growing importance of intellectual property assets. Estimates of the underutilization of intellectual property assets in the U.S. range from $100 billion to nearly $1 trillion. There are many reasons for the gap between management competency and the growing importance of intellectual property assets to corporate value. One of the most important of these reasons is the lack of adequate methods for the valuation of intellectual property assets. Marketing, finance, sales and many other management functions depend upon a knowledge of an asset's value. There are a number of well developed and widely accepted methods available for valuing tangible assets. There is no such method available for valuing intangible assets. The lack of adequate valuation methods for intellectual property assets impedes their development, use and exchange. The proper valuation of intellectual property assets is a multi-billion dollar challenge in today's economy. Intellectual property assets should be managed to maximize shareholder value in the same way as tangible assets such as land, buildings, and equipment. Intellectual property assets also need to be valued in the same way as land, buildings, and equipment in order to be properly managed. Unfortunately, the valuation methods which have been developed for tangible property cannot be used with intellectual property because of the lack of established markets for intellectual property The “market method” is the most reliable measure of tangible property value when it can be used. The market method determines the value of a given tangible asset by the price paid for comparable assets. Use of the market method is dependent on four critical conditions: there must be an active market for substantially similar assets; the transactions must be substantially similar; the parties must deal at arm's length with one another; and the prices and terms of the transactions must be available to the public in some form. Unfortunately, the conditions required by the market method do not exist in the context of intellectual property. Intellectual property assets are required by law to be dissimilar; patents must be novel and non-obvious compared to prior art and copyrights must be original works. Although exchanges of patents, copyrights and trade secrets occur every day in every industry, these exchanges do not take place through established markets, but are sporadic and specialized. Intellectual property exchanges are generally motivated by strategic advantage, not by trading opportunities, and are unique to the firms involved. There is a wide variety of terms and conditions by which intellectual property can be transferred. 
Licensing professionals craft agreements to suit the special needs of their clients and rarely are two agreements identical. The greatest disadvantage in using the market method to value intellectual property, however, is the lack of publicly available information on the terms and conditions of exchanges. A number of different methods have been proposed specifically for the valuation of intangible property. Nevertheless, each of these methods has disadvantages. One “rule of thumb” method is commonly known as the “25% rule.” The 25% rule sets the licensor's royalties at 25% of the licensee's net profits derived from the license. The disadvantage of the 25% rule is that one rule cannot value all intellectual property, for all parties, in all situations. An even more simple rule of thumb is the “$50,000 rule.” The $50,000 rule states that the average patent in the average patent portfolio has a value of $50,000. The disadvantage of the $50,000 rule is that one value cannot be attributed to all patents, in all portfolios, at all times. Another method proposed for valuation of intangible assets is the “top-down” method. The top-down method begins by calculating the market value of a firm, either from the price of its outstanding common stock, in the case of a public company, or from substitute measures such as price-earnings ratios or net cash flows, in the case of a private company. The total value of the firm's tangible assets, including land, buildings, equipment and working capital, is then subtracted from the market value of the firm and the remainder is the total value of the firm's intangible assets. The primary disadvantage of the top-down method is that it cannot be used to value individual or distinct groups of intellectual property assets. The top-down method also does not provide a means for differentiating among different types of intangible assets, or apportioning market value among distinct sets of intangible assets. A variation of the top-down method is the known as the “tech factor method.” The tech factor method associates core patents with different business divisions of the firm, and then allocates the business divisions' total net present value among the core patents based on an industry specific standard percentage. One disadvantage of the tech factor method is that it does not account for the effect of intangible assets other than patents on a business division's total net present value. Another disadvantage is that the method cannot value individual patents, or groups of patents, differently. All patents associated with the same business division will have the same value. Another approach to valuing intangible property is the “knowledge capital scorecard.” The knowledge capital scorecard first subtracts from a firm's annual normalized earnings the earnings from tangible and financial assets. The remainder of the earnings, which are generated by “knowledge assets,” is divided by a knowledge capital discount rate to calculate the value of intellectual property assets. One disadvantage of the knowledge capital scorecard is that it does not separate the different types of “knowledge assets.” Another disadvantage is that the method cannot value individual knowledge assets, or groups of knowledge assets, differently. The most recent methods which have been proposed for valuation of intangible assets use mathematical or economic models. The “Monte Carlo analysis” attempts to value intangible assets based on a probability weighted distribution of alternative possible values. 
“Black-Scholes option analysis” attempts to value intangible assets based on the value of future strategic options which a firm possesses as a result of owning the intangible asset. The disadvantages of these methods are that they are very complex, require substantial technical and computing resources, and depend on large amounts of detailed data. Finally, the Technology Risk Reward Unit (TRRU) valuation method is a modification of Black-Scholes option pricing model. The TRRU method combines data on the cost of completing technology development, the time required to bring the technology to market, the date of patent expiration, the marketplace value of the technology, the variability of the value estimate, and a risk-free interest rate to determine a suggested value for an intellectual property asset. The TRRU method depends upon an Intangible Asset Market Index (IAM) to calculate the marketplace value of a technology and the variability of a value estimate. The IAM tracks broad industry segments, such as health care and automobiles, and specific technologies within each segment, such as imaging equipment and automotive glass, to measure the average value of comparable technology. The disadvantage of the TRRU method is that it does not account for different types of intellectual property assets and does not differentiate between intellectual property assets representing minor and major advances in a field. 3. Objects and Advantages It is therefore a principal object and advantage of the present invention to provide a methodology that can be used to distinguish among different types of intellectual property, to value individual or sets of related intellectual property assets, and to value intellectual property assets consisting of minor and major advances in a field differently. A further object and advantage of the present invention is to provide a methodology that can be implemented with a minimum amount of information generally available to the public and that can also be implemented with information generated through proprietary research and analysis thus allowing the user to decide the line between cost and relative accuracy of the valuation on a case-by-case basis. An additional object and advantage of the present invention is to provide a methodology that can be used for planning development of pre-market products, negotiating licensing transactions, and selecting among research and development investments. With regard to planning the development of pre-market products, the invention has the object and advantage of being able to value different product configurations to determine which provides the greatest return on total investment. With regard negotiating licensing transactions, the invention has the object and advantage of being able to calculate the value of a license to a licensor and licensee, and of calculating a license payment which provides a licensor and licensee an equal return on their respective investments in the license. With regard to selecting among research and development investments, the invention has the object and advantage of calculating the return on investment in the creation of new intellectual property assets incorporated in existing or new products. Other objects and advantages of the present invention will in part be obvious, and in part appear hereinafter. 
In accordance with the foregoing objects and advantages, the competitive advantage valuation method of this invention comprises a series of modular associations and calculations that ultimately determine the value of an intellectual property asset. By calculating the proportional contribution of an intellectual property asset to the competitive advantage of a related product in a real market, a discrete value can be placed on the intellectual property asset. The fundamental premise of the present invention is that the value of an intellectual property asset can be calculated from the competitive advantage that it contributes to a discrete tangible asset that competes in a marketplace. The methodology of the present invention first associates the intellectual property asset with a related tangible asset that embodies the intellectual property asset. After a set of parameters that define the tangible asset are identified, the tangible asset is quantitatively compared to competing assets in the marketplace to determine its overall competitive advantage over those competing assets. The competitive advantage of the intellectual property asset relative to the total competitive advantage of the tangible asset is calculated based upon a quantitative comparison to the other intellectual property assets that are embodied in competing assets and the tangible asset. Based upon the relative competitive advantage contribution of the intellectual property asset to the overall competitive advantage of the tangible asset, a percentage of the overall value of the tangible asset is assigned to the intellectual property asset. A determination of the competitive advantage that an intellectual property asset can contribute to a tangible asset in the market place can be used to calculate more than just the value of that asset. The methodology may also be used to predict the market share that a product embodying a specific set of intellectual property assets will eventually achieve once introduced into the marketplace. The competitive advantage methodology also forms the basis for calculation of the value of a license of an intellectual property asset to both the licensor and the licensee or licensees. The calculation of competitive advantage is also integral to valuing a new intellectual property asset that is an improvement over or replacement of an existing intellectual property asset. FIG. 1 is a detailed flow chart of a first embodiment of the present invention. FIG. 2 is a high level flow chart of a first embodiment of the present invention. FIG. 3 is an intermediate level flow chart of a first embodiment of the present invention. FIG. 4 is a detailed flow chart of a second embodiment of the present invention. FIG. 5 is a high level flow chart of a second embodiment of the present invention. FIG. 6 is an intermediate level flow chart of a second embodiment of the present invention. FIG. 7 is a detailed flow chart of a third embodiment of the present invention. FIG. 8 is a high level flow chart of a third embodiment of the present invention. FIG. 9 is an intermediate level flow chart of a third embodiment of the present invention. FIG. 10 is a detailed flow chart of a fourth embodiment of the present invention. FIG. 11 is a high level flow chart of a fourth embodiment of the present invention. FIG. 12 is an intermediate level flow chart of a fourth embodiment of the present invention. 
The method of this invention comprises a series of associations and calculations that determine the value of an intellectual property asset. Specifically, this invention determines the value of an intellectual property asset as a function of the competitive advantage which it contributes to a tangible asset. Tangible assets (i.e. products, processes, and systems) are viewed as an aggregation of intangible intellectual property assets (i.e. patents, trade secrets, copyrights, trademarks, and business methods). Thus, an intellectual property asset is synonymous for an intangible asset and a tangible asset refers to a concrete product, process, or system. For convenience, the description will often refer to a product as an example of a tangible asset. As many of the associations and calculations are independent or form a basis for later calculations, they do not necessarily need to be performed in any particular order unless noted. Referring now to the drawing Figures, wherein like reference numerals refer to like parts throughout, there is seen in FIG. 1 an illustration of the detailed methodology with all requisite steps required for calculating the value of an intellectual property asset (IPA) 5. FIG. 2 illustrates that the basis of the methodology for calculating the value of an intellectual property asset 5 involves the combination of the present value of an associated product 40 (i.e. a tangible asset incorporating the intellectual property asset) and the competitive advantage of the associated product due to the intellectual property asset 95′. From these modules, a value of the intellectual property asset 105 can be determined. FIG. 3 illustrates an intermediate level of the methodology and provides more detail for the steps to be performed in the valuation of an intellectual property asset 5. As illustrated, the competitive advantage of an associated product 95′ is a function of: (1) a comparison to competing products 70′ based upon parameters relevant to success of associated product in the marketplace, and (2) assocating the intellectual property asset 5 with a parameter group 45′ to allocate a portion of the competitive advantage. The determination of the competitive advantage of the associated product 95′ due to the intellectual property asset 5 allows the value 105 to be determined as a percentage of the present value of the associated product 40. The underlying calculations for these modules are illustrated in FIG. 1. According to FIG. 1, the intellectual property asset 5 to be valued is associated with a product (P) 10, i.e. a tangible asset incorporating the intellectual property asset 5, and the present value (PV) 40 of the associated product is calculated. The calculation requires information about the product's annual gross sales (P:GS) 15, the yearly growth of the market (P:MG) 20 as a percent, the length in years of the product's remaining life (P:RL) 23, the product's profit margin (P:PM) 25 as a percent of gross sales, and the applicable present value discount (PVD) 30. This data can be gleaned from public or private data sources and entered into a spreadsheet for easy calculation. The formula for calculating the present value from this data is: P:PV [1 . . . PRL] =P:GS×(1+P:MG [1 . . . PRL])×P:PM [1 . P:RL] ×PVD [1. P.RL] P:PV=P:PV [1] +P:PV [2] +P:PV [3] +. . . P:PV [P.RL] The intellectual property asset 5 is also associated with one of the three primary parameter groups 45 based on the type of intellectual property that is being valued. 
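Before turning to the parameter groups, a numeric illustration of the present value step may help. All figures below are hypothetical, and the formula above is read as compounding the market growth and discounting each year of the remaining life:

$$P{:}PV_t = P{:}GS \times (1+P{:}MG)^{t} \times P{:}PM \times PVD_t, \qquad P{:}PV=\sum_{t=1}^{P{:}RL} P{:}PV_t, \qquad PVD_t=\frac{1}{(1+r)^{t}}.$$

For example, with gross sales of $10 million, 5% market growth, a 20% profit margin, a 10% discount rate and a three-year remaining life, the yearly terms come to roughly $1.91 million, $1.82 million and $1.74 million, giving a present value of about $5.47 million.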
The three parameter groups are technical, reputational and operational, and each includes a distinct group of intellectual property assets, the technical parameter group, the reputational parameter group. The operational parameter group representing the various and distinct forms of intellectual property. The technical group includes utility patents (not including method of business patents), functional software copyrights, and technical trade secrets. The reputational group includes trademarks, trade names and brand names. The operational group includes business method patents and proprietary business processes. If the intellectual property asset to be valued is a utility patent, then the technical parameter group should be evaluated. Similarly, if the asset is a trademark or business method, then the reputational or operational groups should be evaluated, respectively. Once the association is made, a parameter group weight, (TP′G:W), (RP′G:W), or (OP′G:W) 50, i.e. technical, reputational, or operational parameter group weight, respectively, is calculated from data obtained about expenditures on research and development (R&D$), advertising (AD$), and business innovation (BI$). The default formulas for determining the technical parameter group weight (TP′G:W), reputational parameter group weight (RP′G:W), and operational parameter group weight (OP′G:W) are: The parameter group weights 50 allow a portion of the present value of the associated product 40 to be allocated to one of the three distinct groups of intellectual property assets. The value of a given intellectual property asset is thus calculated relative to the value of its related intellectual property group at the exclusion of the other groups. For example, a patent, i.e. a technical group asset, that is associated with a product can be valued despite the presence of a strong trademark, i.e. a reputational group asset, that would otherwise inflate or deflate a valuation. Within each parameter group there is a prime parameter (PP′) and a parameter set (P′S) comprised of sub-parameters (SubP′). In the technical parameter group, the prime parameter is price (PrP′) and the parameter set is a set of performance parameters (PfP's). Sub-parameters may include any number of relevant or discrete performance capabilities. In the reputation parameter group, the prime parameter is customer recognition (CrP′) and the parameter set is a set of customer impression parameters (CiP's). In the operational parameter group, the prime parameter is the operation cost (OcP′) and the parameter set is a set of operational efficiency parameters (OeP's). According to the methodology, the prime parameter and parameter set must be weighted 60 based on the number of parameters (NP′) contained in the parameter group. The prime parameter weight equals the parameter set weight when the total number of parameters is two. The prime parameter weight decreases and the parameter set weight increases as the number of sub-parameters increases. This adjustment is necessary to limit the increase in the prime parameter weight due solely to the increase in the number of sub-parameters in the parameter set. The percentage decrease in prime parameter weight and percentage increase in parameter set weight is based on a multiple (M) of the number of parameters. The default value for M is five. 
The default formulas for calculating the prime parameter weight and the parameter set weight are:

If sub-parameters exist, their respective weights can be determined as fractions of the parameter set weight. Once the prime parameter weight and parameter set weight have been determined 60, their relative weights 65 can then be calculated using the parameter group weight 50 determined earlier. The formulas are:

SubP′:RW = (P′S:W × P′G:W)/(NP′ − 1)   [if SubP′:Ws are equal]

SubP′:RW = (SubP′:W/P′S:W) × P′G:W × P′S:W   [if SubP′:Ws are not equal]

Another step of the methodology is to calculate the relative competitive advantage 95 of the intellectual property asset 5 as compared to related intellectual property assets 70. There are two types of related intellectual property assets: substitute intellectual property assets (SIPA) and complementary intellectual property assets (CIPA). Substitute intellectual property assets are intellectual property assets incorporated in competing products which are associated with the same parameter as the intellectual property asset 5 to be valued. Complementary intellectual property assets are intellectual property assets incorporated in the associated product 10 which are associated with the same parameter group as the intellectual property asset 5 to be valued. These assets are compared based upon quantifiable parameter measures that define the asset and are relevant to product sales in the marketplace. There are three types of parameter measures: physical measures, psychological measures and estimation measures. Physical measures provide the most objective parameter comparisons and should be used whenever possible. Some parameters, such as design aesthetics, cannot be physically measured. For these parameters, psychological measures should be used and can be based on consumer focus groups. When it is not possible or too costly to obtain physical or psychological measures, estimation measures can be used. Estimation measures are generally based on a numerical scale. Some parameters might be interdependent, for example size and weight, and the combination of these parameters might produce different values than if the parameters were valued separately, as the methodology does by default. Regression analysis or neural network software can be used to analyze the independence or interdependence of parameters. Regardless, a spreadsheet can be used to organize all of the parameter data for the sets of substitute intellectual property assets and complementary intellectual property assets and the formulas can be entered for easy calculation. The methodology generally calculates the relative competitive advantage 95 of the intellectual property asset 5 in three steps. First, base competitive advantages 75, 80 are calculated for the intellectual property asset 5 and each complementary intellectual property asset by comparing these assets to competing assets. Second, weighted competitive advantages 85, 90 are calculated for the intellectual property asset to be valued and each complementary intellectual property asset by factoring the base competitive advantages 75, 80 by the corresponding relative parameter weight 65. Third, relative competitive advantages 95, 100 are calculated for the intellectual property asset to be valued and each complementary intellectual property asset by dividing the weighted competitive advantages 85, 90 of each by the sum of the weighted competitive advantages.
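Taking the prime parameter weight and parameter set weight as given inputs (their default formulas are not reproduced in the text above), the relative weights follow the two stated sub-parameter formulas. The helper below is a sketch of that step; treating the prime parameter relative weight as the prime parameter weight multiplied by the parameter group weight is an assumption.

    def relative_weights(prime_weight, set_weight, group_weight, n_params,
                         sub_weights=None):
        # Assumption: PP':RW = PP':W x P'G:W.
        prime_rw = prime_weight * group_weight
        if sub_weights is None:
            # Equal sub-parameter weights: SubP':RW = (P'S:W x P'G:W) / (NP' - 1).
            sub_rws = [(set_weight * group_weight) / (n_params - 1)] * (n_params - 1)
        else:
            # Unequal weights: SubP':RW = (SubP':W / P'S:W) x P'G:W x P'S:W.
            sub_rws = [(w / set_weight) * group_weight * set_weight for w in sub_weights]
        return prime_rw, sub_rws

    # Example: a technical group weight of 0.6, a prime parameter weight of 0.4,
    # a parameter set weight of 0.6 and three equally weighted sub-parameters.
    print(relative_weights(0.4, 0.6, 0.6, 4))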
The detailed methodology for calculating the base competitive advantages 75, 80 first calculates an average value (AvV) 72 for the substitute intellectual property assets and for the substitute complementary intellectual property assets (SCIPA) 73, and then compares these average values 72, 73 to the actual values (AcVs) 71, 74 of the intellectual property asset and complementary intellectual property assets to determine the base competitive advantages 75, 80 as a percentage variation. The substitutes for the complementary intellectual property assets are intellectual property assets incorporated in competing products which are associated with the same parameters as the complementary intellectual property assets. The values represent quantitative measurements of the characteristics of the product, such as size, weight, speed, etc., if relevant in the marketplace. The formulas are as follows:

The next step in determining the relative competitive advantages 95, 100 is to calculate weighted competitive advantages (WCA) 85, 90 for the intellectual property asset to be valued (IPA:WCA) and the complementary intellectual property assets (CIPA:WCA) using the relative parameter weights calculated earlier 65. The formulas are as follows:

The formulas for calculating the intellectual property asset's relative competitive advantage (IPA:RCA) 95 and complementary intellectual property assets' relative competitive advantage (CIPA:RCA) 100 first calculate a total weighted competitive advantage (T:WCA) for the parameter group. The total weighted competitive advantage for the parameter group is the sum of all of the weighted competitive advantages of the intellectual property asset to be valued and the complementary intellectual property assets. The relative competitive advantages 95, 100 are calculated by dividing the weighted competitive advantage 85, 90 by the total weighted competitive advantage and then multiplying the quotient by the associated parameter group weight (P′G:W) 50 determined earlier. The formulas for calculating an intellectual property asset's relative competitive advantage 95 and a single complementary intellectual property asset's relative competitive advantage 100 are:

The final step in the methodology is to calculate the value of the intellectual property asset (IPA:V) 105 from the product's present value (P:PV) 40 and the intellectual property asset's relative competitive advantage (IPA:RCA) 95 compared to related intellectual property assets. The value of complementary intellectual property assets (CIPA:V) can also be determined from the product's present value 40 and the complementary intellectual property assets' relative competitive advantages (CIPA:RCA) 100. The formulas are as follows:

If the intellectual property asset 5 is associated with multiple products, an intellectual property asset value 105 can be calculated for each product and the results summed to calculate a total intellectual property asset value. If the intellectual property asset 5 is associated with multiple parameters, the intellectual property asset's relative competitive advantage 95 is calculated for each parameter and the results are summed to calculate total value. If multiple intellectual property assets are associated with a single parameter, a relative competitive advantage 95 is calculated for that parameter and divided among the intellectual property assets to calculate their individual values.
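Since the individual formulas are not reproduced in the text above, the sketch below chains the three steps under two assumptions: the base competitive advantage is taken as the percentage variation of an asset's actual value from the substitutes' average value, and larger parameter values are treated as favorable (a parameter such as price, where smaller is better, would need its sign flipped). Everything else follows the description: weighted competitive advantage is the base competitive advantage times the relative parameter weight, relative competitive advantage is the weighted competitive advantage divided by the group total and multiplied by the parameter group weight, and the asset value is that fraction of the product's present value.

    def base_ca(actual_value, substitute_average):
        # Assumed form of the base competitive advantage: percentage variation
        # of the asset's actual value from the substitutes' average value.
        return (actual_value - substitute_average) / substitute_average

    def asset_values(product_pv, group_weight, assets):
        # assets: list of (actual_value, substitute_average, relative_param_weight),
        # with the asset to be valued first and the complementary assets after it.
        weighted = [base_ca(av, sub) * rw for av, sub, rw in assets]
        total_wca = sum(weighted)
        relative = [w / total_wca * group_weight for w in weighted]
        return [r * product_pv for r in relative]   # IPA:V followed by CIPA:Vs

    # Example: a $20M product, technical group weight 0.6, the valued asset plus
    # two complementary assets (all parameter measurements are hypothetical).
    print(asset_values(20e6, 0.6, [(120, 100, 0.24), (11, 10, 0.12), (10.5, 10, 0.12)]))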
The value of the intellectual property asset 5 can also be adjusted to account for the degree of security associated with title to the asset. This title security adjustment will decrease the value of the intellectual property asset by less than 5% for intellectual property rights which are highly secure and by over 95% for intellectual property rights which are highly insecure. The title security adjustment is calculated by first assigning a value between 1 (low) and 10 (high) to five variables that generally determine the strength of title in intellectual property assets. These five variables are: Scope of Rights; Duration of Rights; Infringement Impunity; Infringement Detection; and Infringement Enforcement. Scope of rights measures the breadth of protection provided to the intellectual property asset. Duration of rights measures the remaining period of intellectual property asset protection. Infringement impunity measures the impunity of the intellectual property asset from reverse engineering and unauthorized use or reproduction. Infringement detection measures the ability to detect infringement of the intellectual property asset if it occurs. Finally, infringement enforcement measures the ability to enforce rights in the intellectual property asset through legal and non-legal means. The title security adjustment equals the sum of these values divided by 50. Title security adjustment is an estimation measure which can distort other, more objective, measures. In addition, to properly account for the effect of the title security adjustment on the value of an intellectual property asset, the title security adjustment must also be calculated for substitute and complementary intellectual property assets. For these reasons, the title security adjustment should only be used when there is significant concern over the title security of the intellectual property asset.

FIG. 4 illustrates in detail how the methodology can be used to determine the value of a pre-market product (PMP) 200. FIG. 5 illustrates that the primary modules necessary for computing the pre-market product predicted value 310 are: (1) the present value of an intended market 295 for the pre-market product, (2) the competitive advantage 305 of the pre-market product in the intended market, and (3) the predicted market share 300 of the pre-market product 200 in its intended market. According to FIG. 6, the intended market present value 295 is calculated as a function of the intended market annual gross sales 220, the intended market growth 275 annually as a percentage, the pre-market product life cycle 280, the pre-market product profit margin 285 as a percent of gross sales, and an applicable present value discount 290. The competitive advantage 305 of the pre-market product is computed by comparing the pre-market product to competing products 215 on each of the identified relevant parameters. For each parameter, the target value for the pre-market product is compared to the average value for competing products and the results are weighted and averaged 216 to determine the competitive advantage of the pre-market product in the intended market 305′. Finally, the pre-market product's predicted market share 300 is calculated by comparing the competitive advantage of the pre-market product in the intended market 305′ and the average market share of competing products 225. The pre-market product predicted value 310 is computed by multiplying the pre-market product predicted market share 300 by the intended market present value 295.
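Returning briefly to the title security adjustment described above, the computation is simple enough to state directly. The only assumption in the sketch below is that the adjustment factor multiplies the asset value, so that strong title (scores near 10) leaves the value nearly intact while weak title sharply reduces it.

    def title_security_adjustment(scope, duration, impunity, detection, enforcement):
        # Each score runs from 1 (low security) to 10 (high security);
        # the adjustment is the sum of the five scores divided by 50.
        return (scope + duration + impunity + detection + enforcement) / 50.0

    def title_adjusted_value(asset_value, *scores):
        # Assumed application of the adjustment: multiply it into the asset value.
        return asset_value * title_security_adjustment(*scores)

    # Example: scores of 9, 8, 7, 8, 9 give an adjustment of 0.82,
    # so a $1M asset would be carried at $820,000.
    print(title_adjusted_value(1_000_000, 9, 8, 7, 8, 9))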
FIG. 4 describes in detail the steps and calculations required for each of the modules in determining the predicted value of the pre-market product 310′. One step is to associate the pre-market product 200 with an intended market (PM) 205 where the pre-market product will eventually be placed. After the association with an intended market 205, it is possible to determine the intended market present value (PM:PV) 295 from information about the market obtained from outside sources. The information that is needed is the current annual gross sales (PM:GS) 220 in the market in dollars; the market growth (PM:MG) 275 as a percent each year; the estimated pre-market product life cycle (PMP:LC) 280; the pre-market product profit margin (PMP:PM) 285 as a percentage of sales; and the appropriate present value discount (PVD) 290. The formula for calculating the product market present value 295 is:

PM:PV[1 . . . PMP:LC] = PM:GS × (1 + PM:MG[1 . . . PMP:LC]) × PMP:PM[1 . . . PMP:LC] × PVD[1 . . . PMP:LC]

PM:PV = PM:PV[1] + PM:PV[2] + PM:PV[3] + . . . + PM:PV[PMP:LC]

The product market present value 295 is a summation of the product market's present value for each year of the pre-market product life cycle 280. Another step is to associate the pre-market product 200 with the three parameter groups 210. These parameter groups are the reputational, technical, and operational parameter groups discussed above. From this association 210, it is possible to calculate the reputational parameter group weight (RP′G:W) 230, technical parameter group weight (TP′G:W) 245, and operational parameter group weight (OP′G:W) 260. The parameter group weights 230, 245, 260 are calculated from information obtained about expenditures on research and development (R&D$), advertising (AD$), and business innovation (BI$). The formulas are:

The next step is to calculate the base competitive advantages for each parameter group, i.e. the reputational base competitive advantage 240, the technical base competitive advantage 255, and the operational base competitive advantage 270. To calculate these base competitive advantages 240, 255, 270, the pre-market product must be associated and compared with competing products (CP) 215. Competing products are products with which the pre-market product will compete in the product market 205. Next, the competing products are compared based upon relevant parameters in the three parameter groups. This comparison can be performed in a spreadsheet, making the calculation of the base competitive advantages 240, 255, 270 simple. The base competitive advantage calculations 240, 255, 270 require that an average value for the competing products (CP:AvV) be calculated for each parameter in the three parameter groups. The average value for each parameter of the competing products is then compared to the target value for the pre-market product (PMP:TV) to calculate the parameter base competitive advantage (P′:BCA). The target values are quantitative representations of the parameters, such as a product's weight, size, speed, efficiency, etc., if relevant in the marketplace. The formula for calculating each parameter base competitive advantage is:

Once the parameter base competitive advantages 240, 255, 270 have been calculated, each parameter group average competitive advantage (P′G:ACA) 235, 250, 265 can then be calculated. The average competitive advantage 235, 250, 265 is the sum of all of the parameter base competitive advantages 240, 255, 270 divided by the number of parameters (NP′) in that group.
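The parameter base competitive advantage formula is not reproduced above; the sketch below assumes it is the percentage by which the pre-market product's target value exceeds (or falls short of) the competing products' average value for that parameter, again treating larger values as favorable. Both the form of the formula and the example numbers are assumptions.

    def parameter_base_ca(target_value, competing_values):
        # Assumed P':BCA = (PMP:TV - CP:AvV) / CP:AvV.
        avg = sum(competing_values) / len(competing_values)
        return (target_value - avg) / avg

    # Example: a target speed of 120 against competing products at 100, 95 and 105
    # gives a base competitive advantage of +0.20 on that parameter.
    print(round(parameter_base_ca(120, [100, 95, 105]), 2))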
The formula for calculating a parameter group average competitive advantage is:

P′G:ACA = (P′[1]:BCA + P′[2]:BCA + P′[3]:BCA + . . . + P′[n]:BCA)/NP′

Using the average competitive advantages 235, 250, 265, a weighted average competitive advantage (WACA) 305 can be calculated from the reputational, technical, and operational parameter group weights 230, 245, 260 determined earlier. The formula is as follows:

Before determining the pre-market product predicted value 310, the predicted market share must be calculated 300. The calculation of the pre-market product predicted market share (PMP:PredMS) 300 begins with the calculation of the product market average market share (PM:AvMS) 225. This calculation is simply one hundred (100) divided by one plus the number of substitute products (NSP), or graphically, PM:AvMS = 100/(NSP + 1). To calculate the predicted market share 300, the average market share 225 is multiplied by the weighted average competitive advantage 305 plus one (1) as follows:

The final calculation for determining the pre-market product predicted value (PMP:PredV) 310 consists of multiplying the product market present value 295 times the predicted market share 300, or graphically, PMP:PredV = PM:PV × PMP:PredMS.

The valuation of a pre-market product can be used in four related ways. First, the methodology can be used to value different parameter configurations of a pre-market product to determine which configuration provides the greatest return on total investment in the pre-market product. When the method is used in this way, each parameter configuration of the pre-market product is viewed as a distinct aggregation of parameter values. Second, the methodology can be used to select among alternative investments in the creation of new intellectual property assets to be incorporated in pre-market products. When the invention is used in this way, each investment is viewed as a trade among alternative inchoate intellectual property assets. Third, the methodology can be used to position a pre-market product in the product market. When the method is used in this way, each market position is viewed as an alternate set of parameter values. Fourth, the methodology can be used to price a pre-market product in the product market according to the competitive advantage value which the PMP provides customers. When the method is used in this way, each market price is viewed as a trade among levels of competitive advantage.

The default calculations in the method assume a correlation between a product's average competitive advantage and its market share. If a product enjoys a positive 25% average competitive advantage, the assumption is that the product will have a market share 25% greater than the average market share. If a product has a negative 25% average competitive advantage, the assumption is that the product will have a market share 25% less than the average market share. Regression analysis or neural network software can be used to test and refine the correlation between a product's average competitive advantage and its market share. If these tools are not available, or too costly to use, the default one-to-one correspondence between competitive advantage and market share can be applied. The pre-market product predicted value can be adjusted for risks such as development, fabrication, marketing and sales uncertainty.
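Before the risk adjustment, the pre-market calculation itself can be summarized in a short sketch. It assumes the weighted average competitive advantage (whose formula is not reproduced above) is the sum of the three parameter group average competitive advantages, each multiplied by its group weight, and it treats the market share as a fraction rather than a percentage so the final multiplication is dimensionally clean. The example figures are hypothetical.

    def premarket_predicted_value(market_pv, n_substitutes, group_acas, group_weights):
        # Assumed WACA: group-weighted sum of the three average competitive advantages.
        waca = sum(group_acas[g] * group_weights[g] for g in group_acas)
        avg_share = 1.0 / (n_substitutes + 1)        # PM:AvMS as a fraction
        predicted_share = avg_share * (1.0 + waca)   # default one-to-one assumption
        return market_pv * predicted_share           # PMP:PredV

    # Example: $50M market present value, four substitute products, and a small
    # net competitive advantage concentrated in the technical group.
    acas = {"technical": 0.25, "reputational": 0.0, "operational": -0.10}
    weights = {"technical": 0.6, "reputational": 0.3, "operational": 0.1}
    print(round(premarket_predicted_value(50e6, 4, acas, weights)))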
To adjust the pre-market product predicted value for risk, a risk factor is determined for each type of risk and the cumulative or average risk factor is used to reduce the pre-market product predicted value. FIG. 7 illustrates how the methodology can be used to value an intellectual property license (IPL) 350 between a licensor 355 and licensee 360. FIG. 8 illustrates the high level methodology with the modules required for computing the net value of a license 400 as a function of the minimum value to licensor 385 and maximum value to licensee 395 as determined from an analysis of the change in value of a tangible asset 363, 378 (i.e. a product) associated with an intellectual property asset subject to the license 350. According to FIG. 9, the change in value of an associated product 363, 378 due to the intellectual property asset that is the subject of the license is used to determine the value of the license. The change in value of an associated product 363, 378 is a function of (1) the product market present value 370, (2) the average market share 375, and (3) the changes 380, 365 in the competitive advantage of the tangible asset as a result of the license. The net value of the license 400 is used to determine an equal return payment 405 that provides an equal return on investment to both parties to the license 350 and is used to establish the value to the licensor 425 and the value to the licensee 430. According to FIG. 7, the method determines the license net value 400 to both a given licensor (LR) 415, 425 and a given licensee (LE) 420, 430 depending on presence of guaranteed payments. First, the change in competitive advantage of an associated product 365 of the licensor and the change in competitive advantage of an associated product 380 of the licensee must be calculated. This can be accomplished by using the preceding embodiments of the methodology to determine the necessary elements. For example, the licensor's change in competitive advantage (LR:CAΔ) 365 is calculated by subtracting the licensor's average competitive advantage (ACA) with the intellectual property license (+IPL) from the licensor's average competitive advantage without the intellectual property license (˜IPL) and multiplying the difference by the relevant parameter group weight (P′G:W). The formula is as follows: The licensee's change in competitive advantage (LE:CAΔ) 380 is calculated by subtracting the licensee's average competitive advantage (ACA) without the IPL (˜IPL) from the licensee's average competitive advantage with the IPL (+IPL) and multiplying the difference by the relevant parameter group weight (P′G:W). The formula is as follows: It is also necessary to calculate the average product present value (AvP:PV) 390 by multiplying the product market present value (PM:PV) 370 by the average market share (AvMS) 375. The formula is Once the preceding calculations have been made, the minimum value (LR:MinV) 385 to the licensor and the maximum value (LE:MaxV) 395 to the licensee can be determined. The licensor's minimum value 385 is the amount which the licensor must earn to compensate for its loss in competitive advantage due to the license. In other words, the licensor minimum value 385 equals the licensor's loss of product present value due to the license. The licensor minimum value 385 is calculated only if the licensor is a for-profit organization and the licensee is a competitor firm. 
The licensor minimum value 385 is equal to zero if the licensor is a not-for-profit organization or the licensee is not a competitor firm. The licensor minimum value 385 is calculated by multiplying the licensor's change in competitive advantage 365 by the average product present value 390, or LR:MinV=LR:CAΔ×AvP:PV. The licensee's maximum value 395 is the maximum amount which the licensee 360 can earn from its increase in competitive advantage due to the license 350. The licensee maximum value 395 must be calculated regardless of licensor 355 status or licensor 355 competition with licensee 360. The licensee maximum value 395 equals licensee's maximum increase in product present value due to the intellectual property license 350. The formula for calculating licensee maximum value 395 is the product of the licensee change in competitive advantage 380 and the average product present value 390, or LE:MaxV=LE:CAΔ ×AvP:PV. The next step in valuing the intellectual property license 350 is to calculate an equal return payment (ERP) 405. The equal return payment 405 is a payment by the licensee 360 to the licensor 355 which will provide both the licensee 360 and licensor 355 an equal return on their respective investments in the license 350. The equal return payment 405 assumes that licensor 355 and licensee 360 share an equal risk in the license 350. An equal return payment 405 based solely on licensee sales (i.e. “running royalties”) divides risk equally between the licensor 355 and licensee 360. An equal return payment 405 which includes guaranteed payments shifts risk from the licensor 355 to the licensee 360. The first step in the calculation of an equal return payment 405 is to calculate the intellectual property license net value (NV) 400. The net value 400 is the value of the license 350 after deducting the licensor minimum value 385 from the licensee maximum value 395. The net value 400 assumes that the licensor 355 receives a payment equal to the licensor minimum value 385 before licensor return is calculated. The formula is thus IPL:NV=LE:MaxV−LR:MinV. The next step necessary for determining the final license values is to calculate the licensor investment 435. The licensor investment 435 is the percentage amount of licensor total investment (TI) attributable to the intellectual property asset that is the subject of the license. If the intellectual property asset is currently used by the licensor 355, the licensor total investment 435 is allocated between the licensor's current applications (CA) and the intellectual property license 350. The default allocation divides the licensor total investment 435 equally between the number of current applications (NCA) and the intellectual property license 350. If the intellectual property asset is not currently used by the licensor 355, the full amount of the licensor total investment is allocated to the intellectual property asset. However, if the intellectual property asset is not currently used by the licensor 355, the licensor minimum value will equal zero. Licensing an unused intellectual property asset to a competitor is the same as licensing a used intellectual property asset to a non-competitor. The formula for calculating licensor investment (LR:I) 435 is: Using the licensor investment 435 and the license net value 400 calculations, an equal return payment can be determined. 
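Before turning to the equal return payment, the quantities assembled so far can be put into a short sketch. The minimum value, maximum value and net value follow the quoted formulas; the licensor investment split reflects the described default allocation (total investment divided equally among the current applications and the license), which is a reading of the description rather than a quoted formula, and the example numbers are hypothetical.

    def license_net_value(avg_product_pv, licensor_ca_change, licensee_ca_change,
                          licensor_is_for_profit_competitor=True):
        # LR:MinV = LR:CAdelta x AvP:PV, but only for a for-profit licensor
        # licensing to a competitor; otherwise it is zero.
        lr_min = licensor_ca_change * avg_product_pv if licensor_is_for_profit_competitor else 0.0
        le_max = licensee_ca_change * avg_product_pv    # LE:MaxV = LE:CAdelta x AvP:PV
        return le_max - lr_min, lr_min, le_max          # IPL:NV, LR:MinV, LE:MaxV

    def licensor_investment(total_investment, n_current_applications):
        # Assumed default allocation: LR:I = TI / (NCA + 1).
        return total_investment / (n_current_applications + 1)

    # Example: a $40M average product, a 5% loss to the licensor and a 25% gain
    # to the licensee yields an $8M net license value.
    print(license_net_value(40e6, 0.05, 0.25))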
The equal return payment 405 represents a payment by the licensee 360 to the licensor 355 which provides both parties an equal return on investment in the license 350. The licensor return on investment equals the equal return payment 405 divided by licensor investment 435. The licensee return on investment equals the difference between license net value 400 and the equal return payment 405 divided by the equal return payment 405. For the licensee 360, the equal return payment 405 equals the licensee investment in the license 350. The formulas are as follows:

The equal return payment 405 is a measure of the point at which the return on investment to the licensor 355 equals the return on investment to the licensee 360. As the licensor and licensee return on investment are equal, the individual equations can be substituted to solve for the equal return payment 405 as follows:

ERP^2 = LR:I × (IPL:NV − ERP)

Solving for ERP, the formula becomes:

Once the equal return payment 405 has been calculated, the licensor 415, 425 and licensee 420, 430 values can be determined. The methodology takes two separate tracks depending on whether there are guaranteed payments or not 410. If there are no guaranteed payments included in the equal return payment 405, licensor value 425 equals the equal return payment 405 and licensee value 430 equals the license net value 400 minus the equal return payment 405. Thus, LR:V = ERP and LE:V = IPL:NV − ERP. If guaranteed payments are included in the equal return payment 410, further calculation is necessary. To calculate licensor and licensee value 415, 420 when there are guaranteed payments, the actual monetary amount of guaranteed payments (GP$) must be determined. The guaranteed payments are the sum of all required payments under the license to the licensor 355 including license fees, mandatory license payments, and minimum royalties. Next, a guaranteed payment discount factor (DF) must be determined. The discount factor discounts the guaranteed payment to the licensor and is set at a default value of 0.5 (0.5 × GP$). The licensor and licensee values 415, 420 can be calculated from the equal return payment 405, the amount of the guaranteed payments, and the guaranteed payment discount factor by the following formulas:

The basic formula calculates the value of an exclusive license to a given licensor 355 and a given licensee 360. The methodology can also be used to calculate the value of a non-exclusive license to a given licensor 355 and multiple licensees 360. When used to calculate the value of a non-exclusive license, the licensor minimum value 385 and licensees' maximum values 395 must take into account the use of the technology by the multiple licensees. The greater the number of licensees, the higher the licensor minimum value 385 will be and the lower the licensees' maximum values 395 will be. The equal return payment 405 can be calculated based on the licensees' average maximum value or the equal return payment 405 can be calculated for each licensee based on that licensee's competitive advantage change 380. The methodology can also be used to calculate the value of a license 350 when the licensee 360 grants the licensor 355 a cross license. Under these circumstances, the licensor's minimum value 385 would be reduced by the competitive advantage gain to the licensor from the cross license, and the licensee's maximum value 395 would be reduced by the licensee's loss of competitive advantage due to the cross license.
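The closed form for the equal return payment is not reproduced above, but it follows from the quadratic that is quoted: taking the positive root of ERP^2 + LR:I × ERP − LR:I × IPL:NV = 0. The sketch below solves that equation and applies the no-guaranteed-payment split of value; the guaranteed-payment track is omitted because its exact formulas are not given in the text above, and the example amounts are hypothetical.

    import math

    def equal_return_payment(licensor_investment, net_value):
        # Positive root of ERP^2 + LR:I*ERP - LR:I*IPL:NV = 0.
        b = licensor_investment
        c = -licensor_investment * net_value
        return (-b + math.sqrt(b * b - 4.0 * c)) / 2.0

    def license_values_without_guarantees(licensor_investment, net_value):
        erp = equal_return_payment(licensor_investment, net_value)
        return {"licensor": erp, "licensee": net_value - erp}   # LR:V = ERP, LE:V = IPL:NV - ERP

    # Example: a $2M licensor investment against an $8M net license value gives an
    # equal return payment of about $3.12M, and the two returns on investment match.
    values = license_values_without_guarantees(2e6, 8e6)
    print(values, values["licensor"] / 2e6, values["licensee"] / values["licensor"])

The non-exclusive, cross-license and grant-back variations discussed above and below adjust the licensor minimum value and licensee maximum value before this same calculation is applied.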
The formula can be similarly adjusted to account for the licensee's grant back to the licensor of rights in improvements which the licensee makes to the intangible asset licensed. Here again, the licensor's minimum value 385 would be reduced by the competitive advantage gain to the licensor from the grant back license and the licensee's maximum value 395 would be reduced by the licensee's loss of competitive advantage due to the grant back license. Under the methodology, a licensor 355 adds greater value to an intellectual property license 350 if the licensor 355 is a non-profit entity, if the license 350 is to a non-competitor, or if the subject of the license is an intellectual property asset which the licensor 355 does not use. If the licensor 355 is a non-profit entity, the licensor 355 will have no minimum value and might have a low investment value due to subsidization of the investment. If the licensee 360 is a non-competitor, or the subject of the license 350 is an intellectual property asset which the licensor 355 does not use, the licensor 355 will also have no minimum value. Although intellectual property licenses between competitors add less potential value to the licensor 355 and licensee 360, the subject matter of a competitor license is likely to provide a licensee 360 with a more direct and immediate competitive advantage. Although the methodology seeks to minimize the amount of information necessary to value an intellectual property license, it does require information on each competing product. If this information is not available, or too costly to obtain, the methodology can be based on a hypothetical average competing product. When based on a hypothetical average product, the competitive advantage changes can be calculated in terms of the hypothetical average product to approximate the licensor and licensee values. FIG. 11 illustrates the basic modules required to reach the value 535 of a new intellectual property asset 450. These modules involve a calculation of (1) the change in competitive advantage of an associated product 515, (2) the resultant change in present value of the associated product 530, and (3) the relative competitive advantage 525′ or the percent contribution of the new intellectual property asset to the change in present value. FIG. 12 further details the components of the modules and shows that the change in competitive advantage of an associated product 515 is determined after identifying parameters relevant to success in the marketplace 464, comparing those parameters to competing products 455, and calculating the competitive advantage with and without the new intellectual property asset 460. The change in associated product present value 530 is calculated as in the previous embodiments from the product market present value 500 and average market share 505 . The relative competitive advantage of the associated product due to the new intellectual property asset 525′ is also calculated as previously by determining the weighted competitive advantage 480 from the parameter weight 465 after identifying the parameters associated with the new intellectual property asset 453. FIG. 10 demonstrates in detail how the methodology may be used to value 535 a new intellectual property asset 450. 
This embodiment of the methodology is based on the change in competitive advantage (CAΔ) 515 due to the difference between the average competitive advantage (ACA) of a product with an existing intellectual property asset (EIPA) and the average competitive advantage of a product with the new intellectual property asset (NIPA). To determine the average competitive advantage of the existing intellectual property asset, it is necessary to first calculate an average value (AvV) for substitute intellectual property assets (SIPA). This is accomplished in the same manner as the embodiments described above by using a spreadsheet to compare parameters of the existing intellectual property asset against competing assets in the marketplace 455. The comparison should include the product with the existing intellectual property asset and with the new intellectual property asset. From these comparisons, the average values for competing assets can be determined as explained previously. Next, the existing intellectual property asset base competitive advantage (EIPA:BCA) 470 is calculated from the substitute intellectual property asset average value (SIPA:AvV) and the existing intellectual property asset actual values (EIPA:AcV) for each relevant parameter as determined in the comparison 455. The formula is as follows:

From these calculations, the existing intellectual property asset average competitive advantage (EIPA:ACA) 485 can be determined by totaling the existing intellectual property asset base competitive advantages 470 for each parameter and dividing by the number of parameters (NP′). The formula is as follows:

EIPA:ACA = (EIPA[1]:BCA + EIPA[2]:BCA + EIPA[3]:BCA + . . . + EIPA[n]:BCA)/NP′

The calculation of the new intellectual property asset average competitive advantage (NIPA:ACA) 495 begins with a determination of the base competitive advantage 475 for all of the parameters through the comparison 460 to competing products. This is accomplished by comparing the competing average value for each parameter, as done previously for the existing intellectual property asset, with the target values (TV) for each corresponding parameter that the new intellectual property asset is expected to achieve. The formula for calculating the base competitive advantage 475 is:

The average competitive advantage of the new intellectual property asset 495 is the sum of the base competitive advantages 475 of all of the parameters divided by the number of parameters, as

NIPA:ACA = (NIPA[1]:BCA + NIPA[2]:BCA + NIPA[3]:BCA + . . . + NIPA[n]:BCA)/NP′

Before calculating the change in competitive advantage 515 due to the new intellectual property asset 450, the parameter group weight 490 must be determined. The parameter group weight 490 is calculated as a ratio between the firm's R&D$, AD$ and BI$, just as in the previous embodiments of the methodology. The formulas for the technical, reputational, and operational parameter group weights are, respectively:

The change in competitive advantage of the associated product 515 can now be calculated according to the following formula:

The competitive advantage change 515 can be used to determine the change in product present value 530 due to the new intellectual property asset. The calculation first requires a determination of the average product present value 520. The average product present value 520 is the present value of an average product in the market as calculated from the product market present value (PM:PV) 500 and the average market share (AvMS) 505.
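The change-in-competitive-advantage formula itself is not reproduced above. By analogy with the licensing embodiment, the sketch below assumes it is the difference between the new and existing average competitive advantages, multiplied by the relevant parameter group weight; that form, and the example figures, are assumptions.

    def average_ca(base_cas):
        # EIPA:ACA or NIPA:ACA: mean of the per-parameter base competitive advantages.
        return sum(base_cas) / len(base_cas)

    def competitive_advantage_change(existing_base_cas, new_base_cas, group_weight):
        # Assumed CAdelta = (NIPA:ACA - EIPA:ACA) x P'G:W.
        return (average_ca(new_base_cas) - average_ca(existing_base_cas)) * group_weight

    # Example: three parameters and a technical group weight of 0.6.
    print(round(competitive_advantage_change([0.05, 0.00, -0.05], [0.20, 0.10, 0.05], 0.6), 3))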
The product market present value can be calculated, as in the second embodiment illustrated by FIG. 4, from information about the market obtained from outside sources. As in FIG. 4, the information that is needed is the current annual gross sales (PM:GS) 220 in the market in dollars; the market growth (PM:MG) 275 as a percent each year; the product estimated life cycle (P:LC) 280; the product profit margin as a percentage of sales (P:PM) 285; and the appropriate present value discount (PVD) 290. The formula for calculating the product market's present value 295 is:

PM:PV[1 . . . PM:LC] = PM:GS × (1 + PM:MG[1 . . . PM:LC]) × PM:PM[1 . . . PM:LC] × PVD[1 . . . PM:LC]

PM:PV = PM:PV[1] + PM:PV[2] + PM:PV[3] + . . . + PM:PV[PM:LC]

The product market's present value is calculated for each year of the life cycle and the results are summed to determine the total product market present value 500. The average market share 505 is a percentage calculated from the number of substitute products (NSP) in the product market. The formula is AvMS = 100/(NSP + 1). From the average market share 505 and the product market present value 500, the average product present value 520 can be calculated by the formula AvP:PV = PM:PV × AvMS. The change in product present value (P:PVΔ) 530 attributable to the new intellectual property asset 450 can now be calculated by multiplying the change in competitive advantage 515 and the average product present value 520, or P:PVΔ = CAΔ × AvP:PV. Before calculating the value of the new intellectual property asset 535, the relative competitive advantage 525 must also be determined. The calculation of the new intellectual property asset's relative competitive advantage 525 begins with the calculation of the parameter weight (P′:W) 465. The formula for calculating the parameter weight depends on whether the new intellectual property asset is associated with a prime parameter (PP′) or a parameter set (P′S) in the comparisons 455, 460. The formula if associated with a prime parameter is:

The formula if associated with a parameter set is:

Once the parameter weight 465 has been determined, the new intellectual property asset weighted competitive advantage (NIPA:WCA) 480 can be calculated from its base competitive advantage 460 as follows:

This calculation is performed for each parameter and the results are added together to constitute the total weighted competitive advantage (TWCA) 510 of the parameter group. Thus,

P′G:TWCA = P′[1]:WCA + P′[2]:WCA + P′[3]:WCA + . . . + P′[n]:WCA

The formula for calculating the new intellectual property asset's relative competitive advantage (NIPA:RCA) 525 is as follows:

The value of the new intellectual property asset (NIPA:V) can now be determined by multiplying the new intellectual property asset's relative competitive advantage 525 and the change in product present value 530. The formula is as follows:

The methodology values a new intellectual property asset based on its competitive advantage change 515, and the resulting product present value change 530, that is attributable to the new intellectual property asset. This value alone, however, does not determine whether the new intellectual property asset is a good investment of a firm's resources. To determine the relative worth of an investment in a new intellectual property asset, the value must be compared to the cost of investment and a rate of return calculated.
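The remaining multiplications are stated in the text, so the sketch below simply chains them: the average product present value from the market present value and average market share, the change in product present value from the competitive advantage change, and the new asset's value as its relative competitive advantage applied to that change. The relative competitive advantage is taken as a given input since its formula is not reproduced above; market share is again treated as a fraction, and the example numbers are hypothetical.

    def new_asset_value(market_pv, n_substitutes, ca_change, nipa_rca):
        avg_share = 1.0 / (n_substitutes + 1)    # AvMS as a fraction
        avg_product_pv = market_pv * avg_share   # AvP:PV = PM:PV x AvMS
        pv_change = ca_change * avg_product_pv   # P:PVdelta = CAdelta x AvP:PV
        return nipa_rca * pv_change              # NIPA:V = NIPA:RCA x P:PVdelta

    # Example: $50M market PV, four substitutes, a 7% competitive advantage change,
    # and a relative competitive advantage of 0.5 attributed to the new asset.
    print(round(new_asset_value(50e6, 4, 0.07, 0.5)))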
The rate of return on an investment in any new intellectual property asset can be compared to the rate of return on alternative investments in other new intellectual property assets to determine which combination of investments produces the highest overall rate of return. The general formula for calculating the rate of return on an investment in a new intellectual property asset (NIPA) is: Rate of Return=NIPA Value/Cost of Investment in NIPA The description of the methodology is based on use by a single firm or entity. The method may also be used by two or more firms engaged in a research and development joint venture. The methodology can be used to value intellectual property assets created in a research and development joint venture, to divide these assets among joint venture partners according to their highest valued uses, and to calculate rates of return on investment to joint venture partners from newly created intellectual property assets.
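The rate of return comparison itself is a single division; the sketch below simply applies the stated formula to two hypothetical alternative investments so they can be ranked.

    def rate_of_return(nipa_value, investment_cost):
        # Rate of Return = NIPA Value / Cost of Investment in NIPA.
        return nipa_value / investment_cost

    # Example: the first hypothetical alternative (1.75) beats the second (1.25).
    print(rate_of_return(350_000, 200_000), rate_of_return(500_000, 400_000))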
{"url":"http://www.google.ca/patents/US7188069","timestamp":"2014-04-18T23:55:06Z","content_type":null,"content_length":"158433","record_id":"<urn:uuid:ec876ed5-8b74-4059-b098-a673f2031368>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project

Linear State Estimation Using a Weighted Least-Squares Method

Vary the variances of three measurements performed on a linear system to observe the changes in the estimation of the single state variable. The state variable is estimated using a weighted least-squares method. The system is linear simply because linear functions describe the variation of the measured quantities with changes in the state variable. Optionally, you can see the true values of the state variable and the measured quantities, as well as the state variable estimate and the recalculated values for the three measurements. You can zoom to see details on the axes.

State estimation consists of using a statistical criterion to assign values to a set of (unknown) state variables, based on a set of measurements performed on the system. Since most state variables are unknown in a real-life system, they have to be estimated from the measurements that can actually be performed. The values assigned to the state variables by means of a state estimation technique are called state variable estimates. State estimation can be performed on either linear or nonlinear systems. This Demonstration represents a linear system with a single state variable (x axis) and three different quantities (y axis) that are measured. Assume that the system's model is known in full; then the graph shows the linear relationships describing the values of each of the three measured quantities as functions of the state variable. The measurements include normal distribution errors, which describe the usual fact that measuring a given quantity never yields its true value but a value resulting from adding a zero-mean Gaussian error to the true value. From these deliberately inaccurate measurements, an estimate for the state variable is calculated using a weighted least-squares method. To this effect, variance-based measurement weight factors are applied. Namely, small-variance measurements, obtained from the best measuring instruments, are preponderant when it comes to calculating the estimate; on the other hand, measurements with larger variances (coming from worse instruments) have smaller weighting factors and a lower impact on the determination of the state variable estimate.
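A compact numerical illustration of the estimator described above: with a linear measurement model z_i = a_i·x + b_i plus zero-mean Gaussian noise of variance sigma_i^2, the weighted least-squares estimate of the single state variable weights each measurement by the inverse of its variance. The slopes, intercepts and variances below are made-up values chosen only to mirror the setup of three measured quantities.

    import numpy as np

    # Hypothetical linear measurement functions z_i = a_i * x + b_i (+ noise).
    a = np.array([1.0, 0.5, 2.0])        # slopes
    b = np.array([0.0, 1.0, -0.5])       # intercepts
    sigma2 = np.array([0.1, 0.5, 0.2])   # measurement variances

    x_true = 3.0
    rng = np.random.default_rng(0)
    z = a * x_true + b + rng.normal(0.0, np.sqrt(sigma2))

    # Weighted least squares: minimize sum_i (z_i - a_i*x - b_i)^2 / sigma2_i,
    # which for a scalar state has the closed form below.
    w = 1.0 / sigma2
    x_hat = np.sum(w * a * (z - b)) / np.sum(w * a * a)
    z_recalc = a * x_hat + b             # recalculated measurement values
    print(x_hat, z_recalc)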
{"url":"http://demonstrations.wolfram.com/LinearStateEstimationUsingAWeightedLeastSquaresMethod/","timestamp":"2014-04-18T18:13:52Z","content_type":null,"content_length":"44873","record_id":"<urn:uuid:59a16465-693e-4062-a27c-27d9048bf6c9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
2.2: Round and Round She Goes

Created by: CK-12

This activity is intended to supplement Trigonometry, Chapter 1, Lesson 7.

ID: 12385

Time Required: 20 minutes

Activity Overview

In this activity, students will explore relationships on the unit circle. Students will identify coordinates of points given an angle measure in degrees.

Topic: Unit Circle

• Right triangle trigonometry and the unit circle

• Special right triangles

• Cosine and sine on the unit circle

Teacher Preparation and Notes

• The first problem and second problem engage students in the exploration of the connection between angle measure and the coordinates of points in the first quadrant.

Associated Materials

Problem 1 – Introduction to the Unit Circle

Students are introduced to the concept of the unit circle. Right triangle relationships are explored to develop an understanding of the patterns involved. Special right triangles are addressed to help students understand the exact values they will likely be expected to know.

Students often have difficulty with remembering some of these special values. Ask students if they can see a pattern that might help them remember that $\frac{\sqrt2}{2}$ comes from the $45-45-90$ triangle. Similarly, ask them how they might remember that $\frac{\sqrt3}{2}$ comes from the $30-60-90$ triangle, in which $\frac{\sqrt3}{2}$ is paired with $\frac{1}{2}$: for a $60^\circ$ angle the $x-$coordinate is $\frac{1}{2}$, while for a $30^\circ$ angle the $y-$coordinate is $\frac{1}{2}$.

Problem 2 – Extending the Pattern

Students use a visual model to extend what they established in Quadrant I to Quadrants II, III, and IV. It is very helpful for students to think about symmetry as they move on to these other quadrants. Construction of rectangles in the unit circle is helpful for many students to make this extension.

1. $x = \cos \theta$
2. $x = \sin \theta$
3. $\frac{1}{2}$
4. $\frac{\sqrt2}{2}$
5. $\left(\frac{\sqrt3}{2},\frac{1}{2}\right)$
6. $\left(\frac{1}{2},\frac{\sqrt3}{2}\right)$
7. $\frac{\sqrt3}{2}$
8. $\frac{1}{2}$
9. $\frac{1}{2}$
10. $\frac{\sqrt3}{2}$
11. $\left(\frac{\sqrt2}{2},\frac{\sqrt2}{2}\right)$
12. $\frac{\sqrt2}{2}$
13. $\frac{\sqrt2}{2}$
14. $(-a, b)$
15. $(-a, -b)$
16. $(a, -b)$
17. $120^\circ$
18. $240^\circ$
19. $300^\circ$
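For teachers or students who want to check the special values and the Quadrant II–IV sign patterns numerically, the short script below evaluates the unit circle coordinates $(\cos\theta, \sin\theta)$ for the angles used in the activity; the choice of Python here is simply for illustration.

    import math

    # Coordinates on the unit circle: the point at angle theta (degrees) is
    # (cos theta, sin theta); compare with exact values such as (sqrt(3)/2, 1/2).
    for deg in [30, 45, 60, 120, 240, 300]:
        rad = math.radians(deg)
        print(deg, round(math.cos(rad), 4), round(math.sin(rad), 4))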
{"url":"http://www.ck12.org/book/Texas-Instruments-Trigonometry-Student-Edition/r3/section/2.2/anchor-content","timestamp":"2014-04-24T00:28:21Z","content_type":null,"content_length":"109087","record_id":"<urn:uuid:095e4e32-0d73-450a-9530-6827f6425309>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2011 [00337] [Date Index] [Thread Index] [Author Index] Re: changing variable in an equation • To: mathgroup at smc.vnet.net • Subject: [mg116393] Re: changing variable in an equation • From: "J and B" <noslowski at comcast.net> • Date: Sun, 13 Feb 2011 03:06:12 -0500 (EST) • References: <201102110917.EAA07980@smc.vnet.net> <201102121020.FAA20150@smc.vnet.net> Updating my question. Those that have responded have requested further information. I hope this will help. First of all, thanks to those that have helped me with this problem. Please let me explain what is going on. I am the same person that asked about phase portraits. I am writing a text book using Mathematica to describe how neurons work. I ran across the book that I describe below. I knew that the way I was describing how neurons are working was wrong, but I had no modern books and so on. I am disabled and home bound. I thought the book that I found is a gold mine for me, but it is very hard for me to follow, due to his manner of writing which is more up to date than anything that I am use to. So I am like a kid in a class room again, trying to understand what the author is talking about. I need to be able to do what he does with another system but with Mathematica. Due to medical problems, I know I don't get to spend enough time working on Mathematica. It makes it a lot rougher when you can't work with the program like you should. I am lucky to get 3 hours a day at the computer. I am using the book Dynamical Systems in Neuroscience, by Dr. E. M. Izhikevich. I am working on chapter 3, one dimensional systems. What he does is break up a hard problem into small steps to get you use to his style. And what he has done is to take the equation c*V'= -gsubL(V - Vsubnl) Where c = 10, gsubL = 19, Vsubnl = -67. His solution comes out as V'(t)= Vsubnl + (V0 - VsubnL)*e^-gsubL * t/c. And V0 is the initial voltage in the membrane which he changes in order to get each plot. He starts out with an initial voltage of 0 and works to -100. He got all of these numbers in an experiment. He does not supply the numbers he used. Its like he is skimming over everything and thinking that you already know it what he is He then uses another system to solve it and he shows the plots. What you end up with is very similar to what several people have sent me, where you have many plots that are similar to e^-x. His vertical axes is V or the membrane voltage from -100 to 0, and the horizontal axes is time going from 0 to 5. His phase portrait for this is a straight line. The vertical axes is F(V)= V', and the horizontal axes is the membrane voltage going from -100 to 0. The line starts on the vertical axes which is "graph of F(V) = V'" at (V'=-67 & V = -100) and ends at the horizontal axes at (V = 20 & V' = 0), where the horizontal axes is the membrane potential V. The plots in his book are very small so I had to guess at the numbers. What I tried to do was to just plot his solution which is V(t)= Vsubnl + (V0 - Vsubnl)times e to the -gsubL * t divided by c. What I did was make up different equations such as vt1 = -67+(0-(-67))*e^-19t/c and substitute values for t. And then make another equation with a different V0 and do the same thing. Then finally combine them using Plot{{vt1,vt2 and so on}, These results mimic his plots so I am on the right track. It's just that my way takes an enormous amount of time. I know that there must be a better way to do this with Mathematica. 
That's the whole point of my request for Thank You noslowski at comcast.net -----Original Message----- From: DrMajorBob [mailto:btreat1 at austin.rr.com] Sent: Saturday, February 12, 2011 5:20 AM To: mathgroup at smc.vnet.net Subject: [mg116393] [mg116378] Re: changing variable in an equation You have a missing parenthesis, so I'll have to guess where you wanted it. Your three examples don't seem to match the phrase "working from -100 to And there's a third variable -- x -- that you haven't explained. I'll give it a fixed value. So... working from -10 to t+10 to save time, and t varying from 0 to .1 so that everything isn't obscured by the scale: v[i_, x_, t_] = -67 + (10 (i - 1) + 67) x E^(-19 x t)/10; x = 3; Plot[Evaluate@Table[v[i, x, t], {i, -10, 10}], {t, 0, .1}, PlotRange -> All] In case it's x that should vary from -100 to 100, then I don't know what values i should take on. But... if you want to vary both, here's an example: Plot[Evaluate@Table[v[i, x, t], {i, -10, 10}, {x, 0, 3}], {t, 0, .1}, PlotRange -> All] Notice that this is colored differently: Plot[Table[v[i, x, t], {i, -10, 10}, {x, 0, 3}], {t, 0, .1}, PlotRange -> All] That's what Evaluate is about, in the other plots. On Fri, 11 Feb 2011 03:17:10 -0600, J and B <noslowski at comcast.net> wrote: > Below is an equation that I am working on. I know there is some way to > work it out better than what I am doing. I would like the variable a > to change in increments of 10, from -100 to 100. > Thanks > my main equation is: v = -67+(a-(-67) x E ^ (-19 x t)/10 what I am > doing: > v1= -67+(0-(-67) x E ^ (-19 x t)/10; > v2= -67+(-10-(-67) x E ^ (-19 x t)/10; v3= -67+(-20-(-67) x E ^ (-19 x > t)/10; and so on and working from -100 to 100 Then I use Plot [ { > v1,v2, v3 ........},{t,0,5}, PlotRange -> All] Please note that I have > added some spaces in to make it more readable. > thanks > Jake DrMajorBob at yahoo.com • References:
{"url":"http://forums.wolfram.com/mathgroup/archive/2011/Feb/msg00337.html","timestamp":"2014-04-18T13:28:45Z","content_type":null,"content_length":"30917","record_id":"<urn:uuid:7f6fdff8-2fbb-4f43-9a98-4138f71590c7>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/40/005.html","timestamp":"2014-04-16T19:36:17Z","content_type":null,"content_length":"26256","record_id":"<urn:uuid:1ddfbdcc-e0d4-470a-952b-72ba9c939b22>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Lakewood, WA Precalculus Tutor
Find a Lakewood, WA Precalculus Tutor
...During that time, I was responsible for creating math materials for several classrooms and I also worked one-on-one with students to help them with math assignments. I worked in the NYC School system for 2.5 years. During that time, much of my time was spent helping students prepare for the Regents Math A exam.
16 Subjects: including precalculus, geometry, ASVAB, algebra 1
...With sufficient 'vocabulary', the individual topics blend into a mosaic that makes sense, is informative, and useful in many fields such as nursing, bioengineering, biology and other related fields. I look forward to meeting you and assisting you in your efforts to understand and appreciate this...
12 Subjects: including precalculus, chemistry, geometry, ASVAB
...I have been learning French for more than 6 years. I can also help with programming. I earned a Bachelor of Science in Computer Science and in Computer Engineering at UW in Tacoma.
16 Subjects: including precalculus, chemistry, calculus, algebra 2
I have worked at Tacoma Community College for six years; when asked to describe what I tutor by a student, he responded by saying, "he DOESN'T tutor higher level biology or business classes if he can help it, but he does just about everything else." I have held a Master Tutor Certification with the ...
27 Subjects: including precalculus, chemistry, geometry, statistics
...I recently was a volunteer tutor at the Kent and Covington libraries where I tutored children K-12th grade in many subjects. I also volunteered with the Pullman, WA YMCA after school tutoring for over a year while earning my degree at WSU. During this time I also volunteered with the YMCA Speci...
25 Subjects: including precalculus, chemistry, algebra 1, physics
{"url":"http://www.purplemath.com/Lakewood_WA_Precalculus_tutors.php","timestamp":"2014-04-18T11:38:20Z","content_type":null,"content_length":"24077","record_id":"<urn:uuid:15682fcd-cb71-42c0-bd6d-2ef8fad82c04>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Regression analysis From Encyclopedia of Mathematics A branch of mathematical statistics that unifies various practical methods for investigating dependence between variables using statistical data (see Regression). The problem of regression in mathematical statistics is characterized by the fact that there is insufficient information about the distributions of the variables under consideration. Suppose, for example, that there are reasons for assuming that a random variable where the variables correlation (in statistics)). The study of regression for experimental data is carried out using methods based on the principles of mean-square regression. Regression analysis solves the following fundamental problems: 1) the choice of a regression model, which implies assumptions about the dependence of the regression function on From the point of view of a single method for estimating unknown parameters, the most natural one is a regression model that is linear in these parameters: The choice of the functions or, in the simplest case, of a linear function (linear regression) There are criteria for testing linearity and for choosing the degree of the approximating polynomial. According to the principles of mean-square regression, the estimation of the unknown regression coefficients Regression coefficient) and the variance Dispersion) is realized by the method of least squares. Thus, as statistical estimators of The polynomial The random variables unbiased estimator of the parameter If the variance depends on If one studies the dependence of a random variable for some known matrix and an unbiased estimator for Model (*) is the most general linear model, in that it is applicable to various regression situations and encompasses all forms of polynomial regression of The above method for constructing an empirical regression assuming a normal distribution of the results of the observations leads to estimators for In the given matrix form, the general linear regression model (*) admits a simple extension to the case when the observed variables Regression matrix). The problems of regression analysis are not restricted to the construction of point estimators of the parameters "chi-squared" distribution with has the Student distribution with Regression analysis is one of the most widely used methods for processing experimental data when investigating relations in physics, biology, economics, technology, and other fields. Such branches of mathematical statistics as dispersion analysis and the design of experiments are based on models of regression analysis, and these models are widely used in multi-dimensional statistical analysis. [1] M.G. Kendall, A. Stuart, "The advanced theory of statistics" , 2. Inference and relationship , Griffin (1979) [2] N.V. Smirnov, I.V. Dunin-Barkovskii, "Mathematische Statistik in der Technik" , Deutsch. Verlag Wissenschaft. (1969) (Translated from Russian) [3] S.A. Aivazyan, "Statistical research on dependence" , Moscow (1968) (In Russian) [4] C.R. Rao, "Linear statistical inference and its applications" , Wiley (1965) [5] N.R. Draper, H. Smith, "Applied regression analysis" , Wiley (1981) Modern research — inspired by modern computational facilities — is aimed at developing methods for regression analysis when the classical assumptions of regression analysis do not hold. For instance, one can estimate the function Robust statistics) in the linear regression model by minimizing the sum of absolute deviations from the regression line instead of the sum of their squares. [a1] W. 
Härdle, "Applied nonparametric regression" , Cambridge Univ. Press (1990) How to Cite This Entry: Regression analysis. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Regression_analysis&oldid=28558 This article was adapted from an original article by A.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
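The least-squares machinery described in the article is easy to exercise numerically. The following is a small illustrative sketch, not part of the encyclopedia entry: it fits a straight-line regression to invented data with NumPy and forms the unbiased residual-variance estimate by dividing the residual sum of squares by n - p. All names and numbers below are made up for the example.

import numpy as np

# Sketch only: ordinary least squares for y = b0 + b1*x + error on synthetic data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 25)
y = 2.0 + 0.7 * x + rng.normal(scale=0.5, size=x.size)   # invented "observations"

X = np.column_stack([np.ones_like(x), x])          # design matrix with an intercept column
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares coefficient estimates

residuals = y - X @ beta_hat
n, p = X.shape
sigma2_hat = residuals @ residuals / (n - p)       # unbiased estimate of the error variance

print("estimated intercept, slope:", beta_hat)
print("estimated error variance:", sigma2_hat)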
{"url":"http://www.encyclopediaofmath.org/index.php/Regression_analysis","timestamp":"2014-04-19T14:30:11Z","content_type":null,"content_length":"35058","record_id":"<urn:uuid:9a438699-7cd2-4089-bf34-b43f4f440843>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Studio City Algebra 1 Tutor Find a Studio City Algebra 1 Tutor ...I have worked with many students over a number of years. Whether at the elementary, high school, or college level, my tutoring has frequently involved helping students to make better use of their time and to improve their study habits. Doing so aids them in every subject. 23 Subjects: including algebra 1, reading, English, writing ...These are academic areas that I am personally most confident in, but my ability to aid others in their work is not limited to those. I can help out in math up to calculus, chemistry, and basic Japanese, just to name a few. Considering that I am just a college student, my youth may seem to be a ... 24 Subjects: including algebra 1, English, chemistry, reading ...Tutoring is my passion and I always look for an opportunity to aid a student, to improve his or her skills, and to bring out his or her talent. Albert Einstein said once: "It is the supreme art of the teacher to awaken joy in creative expression and knowledge."I have been tutoring Chemistry at ... 11 Subjects: including algebra 1, chemistry, algebra 2, geometry ...They measure your ability to find the correct answer quickly, again and again. I can teach you to do that. I have helped many junior high and high school students understand math and science, and I can help students in earlier grades as well. 63 Subjects: including algebra 1, chemistry, English, physics ...In addition, all elementary subjects including English (reading, writing, spelling and more)as well as study skills. Furthermore, I am an expert in test preparation for the California High School Proficiency Exam (CHSPE), CBEST, CSET, and GED Examinations. I have tutored students of all ages and can help you or your child achieve outstanding grades. 12 Subjects: including algebra 1, reading, writing, algebra 2
{"url":"http://www.purplemath.com/studio_city_algebra_1_tutors.php","timestamp":"2014-04-18T13:39:58Z","content_type":null,"content_length":"24119","record_id":"<urn:uuid:12342dd0-85cb-4fff-99ee-eff958a3f576>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Stanley Deser Professor Emeritus of Physics Harvard University, Ph.D. Harvard University, M.A. CUNY Brooklyn College, B.S. Quantum theory of fields. Elementary particles. Gravitation. Supergravity. Strings. Awards and Honors Deliver annual Division lecture, reed College. (2007) Festschrift volume in his honor (2005) Lifetime Achievement Award, Brooklyn College (2004) Honorary D. Tech, Chalmers University of Technology, Gothenberg, Sweden (2001) Honorary Foreign Member, Italian Academy of Science, Turin (2001) Erwin Schrodinger Visiting Professor of Theoretical Physics, University of Vienna (1996) Dannie Heineman Prize in Mathematical Physics, American Physical Society (1994) Member, National Academy of Sciences (1994) Scientific Advisory Committee, Institute for Theoretical Physics (Chair 1993-4) (1991 - 1995) Editorial Boards: The Physical Review, Journal Mathematical Physics, Annals of Physics, Journal of High Energy Physics, Classical and Quantum Gravity, Journal of Geometry and Physics (1989) Bude' Medal, College de France (1984) Investigador Titular ad Honorem, CIDA Venezuela (1983) SRC Senior Research Fellow, Imperial College, London (1980) Fellow, American Academy of Arts and Sciences (1979) Honorary D. Phil., University of Stockholm Centennial (1978) Fellow, All Souls College, Oxford University (1977) Bude' Medal, College de France (1976) Morris Loeb Lecturer, Harvard University (1975) North Atlantic Treaty Organzation (NATO) Senior Science Fellow (1975) Fellow, American Physical Society (1972) Distinguished Alumnus Award of Honor, Brooklyn College (1971) Fulbright Fellow, University of Paris (1971 - 1972) Fulbright Fellow, University of the Republic of Uruguay (1970) Fulbright Grant (1970) Guggenheim Memorial Foundation Fellow (1966) Atomic Energy Commission, NSF, Jewett Postdoctoral Fellow (1953 - 1957) Deser,Stanley, D. Seminara. "Duality invariance of all free bosonic and fermionic gauge fields." Physics Letters (2005): 317. Deser,Stanley. "Cotton Blen Gravity pp Waves." Acta Physica Polonica (2005). Deser,Stanley. "Duality invariance of all free bosonic and fermionic gauge field." Physics Letters (2005). Deser,Stanley. "Free Spin 2 Duality Invariance Cannot be Extended to GR." Physical Review (2005). Deser,Stanley. "How Special Relativity Determines the Signs of the Nonrelaivistic, Coulomb and Newtonian, Forces." American Journal of Physics (2005). Deser,Stanley. "Schwarzschild and Birkhoff a la Wey." American Journal of Physics (2005). Deser,Stanley, A. Waldron. "Conformal Invariance of Partially Massless Higher Spins." Physics Letters (2004): 30-34. Deser,Stanley. "Conformal Invariance of Partially Massless Hgher Spins." Physics Letters (2004). Deser,Stanley. "Shortcuts to Spherically Symmetric Solutions: a Cautionary Note." Classical Quantum Gravity (2004). Deser,Stanley, A. Waldron. "Null Propagation of Partially Massless Higher Spins in (A)dS and Cosmological Constant Speculations." Physics Letters (2001): 137. Deser,Stanley. A Century of Gravity: 1901-2000 (plus some 2001) in '2001: A Spacetime Odyssey'. Proc. of Proceedings of the Inaugural Conference of the Michigan Center for Theoretical Physics. Ann Arbor, Michigan: World Scientific, 2001. Deser,Stanley. "Closed Form Effective Conformal Anomaly Actions in D>4." Physics Letters (2000): 315-320. Deser,Stanley, K. Bautier, M. Henneaux, and D. Seminara. "No Cosmological D=11 Supergravity." Physics Letters (1997): 49. Deser,Stanley, L. Griguolo and D. Seminara. 
"Large Gauge Invariance of Finite Temperature Gauge Theories." Physical Review Letters (1997): 1976. Deser,Stanley, A. Schwimmer. "Geometric Classification of Conformal Anomalies in Arbitrary Dimensions." Physics Letters (1993): 279. Deser,Stanley, R. Jackiw and G. 't Hooft. "Physical Cosmic Strings Do Not Generate Closed Time-like Curves." Physical Review Letters (1992): 267. Deser,Stanley. "Gravitational Anyons." Physical Review Letters (1990): 611. Deser,Stanley, L.F. Abbott. "Stability of Gravity with a Cosmological Constant." Nuclear Physics (1982): 76. Deser,Stanley, R. Jackiw and S. Templeton. "Three-dimensional Massive Gauge Theories." Physical Review Letters (1982): 975. Deser,Stanley, R. Jackiw and S. Templeton. "Topologically Massive Gauge Theories." Annals of Physics (1982): 372. Deser,Stanley, C. Teitelboim. "Supergravity Has Positive Energy." Physical Review Letters (1977): 249. Deser,Stanley, J. Kay and K. Stelle. "Renormalizability Properties of Supergravity." Physical Review Letters (1977): 527. Deser,Stanley, B. Zumino. "A Complete Action for the Spinning String." Physics Letters (1976): 369. Deser,Stanley, B. Zumino. "Consistent Supergravity." Physics Letters (1976): 335. Deser,Stanley, C. Teitelboim. "Duality Transformations of Abelian and Non-Abelian Gauge Fields." Physical Review (1976): 1592. Deser,Stanley, M. Duff and C.J. Isham. "Non-Local Conformal Anomalies." Nuclear Physics (1976): 45. Deser,Stanley. "Absence of Static Solutions in Source-Free Yang-Mills Theories." Physics Letters (1976): 463. Deser,Stanley. "Self-Interaction and Gauge Invariance." J. Gravitation and Relativity (1970): 9. Deser,Stanley, R. Arnowitt and C.W. Misner. "The Dynamics of General Relativity in 'Gravitation: An Introduction to Current Research'." (1962). Deser,Stanley, M.L. Goldberger, W. Thirring, and Baumann. "Energy Level Displacement in pi-mesic Atoms (1954)." Physical Review (1960): 774.
{"url":"http://www.brandeis.edu/facultyguide/person.html?emplid=615b5ae384af006d0939e2f5ad09bbc648c6e3c0","timestamp":"2014-04-21T14:56:04Z","content_type":null,"content_length":"21110","record_id":"<urn:uuid:0bdbc160-e62d-4277-90e0-99f5e594b36c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
[Python-ideas] Proposal: add a calculator statistics module
Nick Coghlan ncoghlan at gmail.com
Tue Sep 13 06:06:45 CEST 2011
On Tue, Sep 13, 2011 at 11:00 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> I propose adding a basic calculator statistics module to the standard
> library, similar to the sorts of functions you would get on a scientific
> calculator:
> mean (average)
> variance (population and sample)
> standard deviation (population and sample)
> correlation coefficient
> and similar. I am volunteering to provide, and support, this module, written
> in pure Python so other implementations will be able to use it.
> Simple calculator-style statistics seem to me to be a fairly obvious
> "battery" to be included, more useful in practice than some functions
> already available such as factorial and the hyperbolic functions.
And since some folks may not have seen it, Steven's proposal here is
following up on a suggestion Raymond Hettinger posted to this list last
From my point of view, I'd make the following suggestions:
1. We should start very small (similar to the way itertools grew over time)
To me that means:
mean, median, mode
standard deviation
Anything beyond that (including coroutine-style running calculations)
is probably better left until 3.4. In the specific case of running
calculations, this is to give us a chance to see how coroutine APIs
are best written in a world where generators can return values as
well as yielding them.
Any APIs that would benefit from having access to running variants
(such as being able to collect multiple statistics in a single pass)
should also be postponed.
Some more advanced algorithms could be included as recipes in the
initial docs. The docs should also include pointers to more
full-featured stats modules for reference when users' needs outgrow the
included batteries.
2. The 'math' module is not the place for this, a new, dedicated
module is more appropriate.
This is mainly due to the fact that the math module is focused
primarily on binary floating point, while these algorithms should be
neutral with regard to the specific numeric type involved. However,
the practical issues with math being a builtin module are also a
factor.
There are many colours the naming bikeshed could be painted, but I'd
be inclined to just call it 'statistics' ('statstools' is unwieldy,
and other variants like 'stats', 'simplestats', 'statlib' and
'stats-tools' all exist on PyPI). Since the opportunity to just use
the full word is there, we may as well take it.
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
More information about the Python-ideas mailing list
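To make the "start very small" suggestion concrete, here is one possible pure-Python sketch of the four functions Nick lists (mean, median, mode and a population standard deviation). This is only an illustration, not Steven's proposed implementation, and details such as the exact function names and the errors raised for an empty sequence or a tied mode are assumptions.

from collections import Counter
from math import sqrt

def mean(data):
    """Arithmetic mean of a non-empty sequence."""
    data = list(data)
    if not data:
        raise ValueError("mean requires at least one data point")
    return sum(data) / len(data)

def median(data):
    """Middle value (average of the two middle values for an even count)."""
    data = sorted(data)
    if not data:
        raise ValueError("median requires at least one data point")
    n = len(data)
    mid = n // 2
    return data[mid] if n % 2 else (data[mid - 1] + data[mid]) / 2

def mode(data):
    """Most common value; raises if there is no single most common value."""
    counts = Counter(data)
    if not counts:
        raise ValueError("mode requires at least one data point")
    (value1, count1), *rest = counts.most_common(2)
    if rest and rest[0][1] == count1:
        raise ValueError("no unique mode")
    return value1

def pstdev(data):
    """Population standard deviation."""
    data = list(data)
    mu = mean(data)
    return sqrt(sum((x - mu) ** 2 for x in data) / len(data))

A real module would also want a sample standard deviation and careful handling of Fraction and Decimal inputs (staying neutral about the numeric type, as argued above), which is exactly why keeping the initial scope small is attractive.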
{"url":"https://mail.python.org/pipermail/python-ideas/2011-September/011538.html","timestamp":"2014-04-20T22:59:53Z","content_type":null,"content_length":"5985","record_id":"<urn:uuid:b467fbf6-b33c-4b19-ad38-8f4eb1e56c1a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Theory Results 11 - 20 of 65 - GeoInformatica , 2003 "... Spatial outliers represent locations which are significantly different from their neighborhoods even though they may not be significantly different from the entire population. Identification of spatial outliers can lead to the discovery of unexpected, interesting, and implicit knowledge, such as loc ..." Cited by 16 (6 self) Add to MetaCart Spatial outliers represent locations which are significantly different from their neighborhoods even though they may not be significantly different from the entire population. Identification of spatial outliers can lead to the discovery of unexpected, interesting, and implicit knowledge, such as local instability. In this paper, we first provide a general definition of S-outliers for spatial outliers. This definition subsumes the traditional definitions of spatial outliers. Second, we characterize the computation structure of spatial outlier detection methods and present scalable algorithms. Third, we provide a cost model of the proposed algorithms. Finally, we provide experimental evaluations of our algorithms using a MinneapolisSt. Paul(Twin Cities) traffic data set. - Bioinformatics , 2008 "... Motivation: With the exponential growth of expression and proteinprotein interaction (PPI) data, the frontier of research in system biology shifts more and more to the integrated analysis of these large datasets. Of particular interest is the identification of functional modules in PPI networks, sha ..." Cited by 12 (1 self) Add to MetaCart Motivation: With the exponential growth of expression and proteinprotein interaction (PPI) data, the frontier of research in system biology shifts more and more to the integrated analysis of these large datasets. Of particular interest is the identification of functional modules in PPI networks, sharing common cellular function beyond the scope of classical pathways, by means of detecting differentially expressed regions in PPI networks. This requires on the one hand an adequate scoring of the nodes in the network to be identified and on the other hand the availability of an effective algorithm to find the maximally scoring network regions. Various heuristic approaches have been proposed in the literature. Results: Here we present the first exact solution for this problem, which is based on integer linear programming and its connection to the well-known prize-collecting Steiner tree problem from Operations - 2004 FUJIMOTO, TAKAHIRO AND AKIRA TAKEISHI (2001) MODULARIZATION IN THE AUTO INDUSTRY: INTERLINKED MULTIPLE HIERARCHIES OF PRODUCT, PRODUCTION AND SUPPLIER SYSTEMS, TOKYO UNIVERSITY DISCUSSION PAPER, CIRJE-F-107 , 2004 "... ..." , 2009 "... We study the identification of panel models with linear individual-specific coefficients, when T is fixed. We show identification of the variance of the effects under conditional uncorrelatedness. Identification requires restricted dependence of errors, reflecting a trade-off between heterogeneity a ..." Cited by 9 (2 self) Add to MetaCart We study the identification of panel models with linear individual-specific coefficients, when T is fixed. We show identification of the variance of the effects under conditional uncorrelatedness. Identification requires restricted dependence of errors, reflecting a trade-off between heterogeneity and error dynamics. We show identification of the density of individual effects when errors follow an ARMA process under conditional independence. 
We discuss GMM estimation of moments of effects and errors, and introduce a simple density estimator of a slope effect in a special case. As an application we estimate the effect that a mother smokes during pregnancy on child’s birth weight. - IN PROCEEDINGS OF THE SUMMER SCHOOL ON NEURAL NETWORKS , 1994 "... In this paper we present a systematic approach to constructing neural network classifiers based on stochastic model theory. A two step process is described where the first problem is to model the stochastic relationship between sample patterns and their classes using a stochastic neural network. The ..." Cited by 8 (1 self) Add to MetaCart In this paper we present a systematic approach to constructing neural network classifiers based on stochastic model theory. A two step process is described where the first problem is to model the stochastic relationship between sample patterns and their classes using a stochastic neural network. Then we convert the stochastic network to a deterministic one, which calculates the a-posteriori probabilities of the stochastic counterpart. That is, the outputs of the final network estimate a-posteriori probabilities by construction. The well-known method of normalizing network outputs by applying the softmax function in order to allow a probabilistic interpretation is shown to be more than a heuristic, since it is well-founded in the context of stochastic networks. Simulation results show a performance of our networks superior to standard multilayer networks in the case of few training samples and a large number of classes. - Monthly Weather Review , 1970 "... The HURRAN (hurricane analog) technique for selecting analogs for an existing tropical storm or hurricane is described. This fully computerized program examines tracks of all Atlantic tropical storms or hurricanes since 1886, and those that have designated characteristics similar to an existing stor ..." Cited by 8 (2 self) Add to MetaCart The HURRAN (hurricane analog) technique for selecting analogs for an existing tropical storm or hurricane is described. This fully computerized program examines tracks of all Atlantic tropical storms or hurricanes since 1886, and those that have designated characteristics similar to an existing storm are selected and identified. Positions of storms selected as analogs are determined at 12, 24, 36, 48, and 72 hr after the initial time. Probability ellipses are computed from the resulting arrays and plotted on an 2, y (CALCOMP) offline plotter. The program also has the option of computing the probability that the storm center will be located within a fixed distance of a given point at a specific time. Operational use of HURRAN during the 1969 hurricane season, including both its utility and limitations, is described. 1. - Tech. Rep. IAI-TR-96-2 , 1996 "... We present a novel framework for unsupervised texture segmentation, which relies on statistical tests as a measure of homogeneity. Texture segmentation is formulated as a pairwise data clustering problem with a sparse neighborhood structure. The pairwise dissimilarities of texture blocks are compute ..." Cited by 7 (1 self) Add to MetaCart We present a novel framework for unsupervised texture segmentation, which relies on statistical tests as a measure of homogeneity. Texture segmentation is formulated as a pairwise data clustering problem with a sparse neighborhood structure. 
The pairwise dissimilarities of texture blocks are computed using a multiscale image representation based on Gabor filters, which are tuned to spatial frequencies at different scales and orientations. We derive and discuss a family of objective functions to pose the segmentation problem in a precise mathematical formulation. An efficient optimization method, known as deterministic annealing, is applied to solve the associated optimization problem. The general framework of deterministic annealing and meanfield approximation is introduced and the canonical way to derive efficient algorithms within this framework is described in detail. Moreover the combinatorial optimization problem is examined from the viewpoint of scale space theory. The novel algorithm has been extensively tested on Brodatz-like microtexture mixtures and on real--word images. In addition, benchmark studies with alternative segmentation techniques are reported. - Neural Computing: Research and Applications III, Proc. 5th Irish Neural Network Conference, St. Patrick's , 1997 "... This paper describes a technique for detection of flaws in woven textile fabric, using Fourier transform spectral texture features. Two chief attributes make it a particularly suitable approach to this problem: because of the repetitive nature of the woven pattern, patterns can be expressed by very ..." Cited by 6 (3 self) Add to MetaCart This paper describes a technique for detection of flaws in woven textile fabric, using Fourier transform spectral texture features. Two chief attributes make it a particularly suitable approach to this problem: because of the repetitive nature of the woven pattern, patterns can be expressed by very few spectral components (features), and, the whole detection system -- both the texture feature extractor and the subsequent decision mechanism -- have potential for parallel implementation via a feed-forward neural network structure. The technique also has potential for self-calibration, and hence a capability for adaptive operation. The performance of the technique is evaluated on a set of samples of denim fabric, containing real flaws. Constraints and trade-offs of practical implementation are discussed, including considerations of analog VLSI implementation of the neural network structure. 1 Introduction The cost of poor quality, i.e. the cost to repair and the damage to reputation, are... , 2002 "... These include: This article cites 4 articles, 3 of which can be accessed free at: ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=391719&sort=cite&start=10","timestamp":"2014-04-16T05:57:10Z","content_type":null,"content_length":"37215","record_id":"<urn:uuid:fd99bbea-bba6-45b9-abac-8b33b2a68ca4>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
study of ones self. Everything in the world is like numbers, connected and explainable. everything is mathematics -mos def - mathematics Why did one straw break the camel's back? Here's the secret: the million other straws underneath it - it's all mathematics - Mos Def - Mathematics The unified theorems and theories used in the analysis of order, sequence, pattern, form, space, figures, and numbers. studies the existence and uniqueness of solution sets in equations and inequalities. , and its subset Differential Equations, further explores the properties of Algebra through use of derivatives, antiderivatives (integrals), and partial derivatives/antiderivatives. Game Theory involves the study of outcomes in scenarios, prediction, and optimization strategies in practical applications. There's many more branches, such as signal theory and other shit, but, quite simply, nobody, yes, mathematicians, NOBODY gives a flying fuck. Mathematics, akin to , is not about making friends. You need it a hell of a lot more than it needs you. Contrary to scholastic doctrine, the majority of Mathematics must be purposely rigged in order to actually achieve a solution. Even more padding and precautionary measures must be added in order to make the solution somewhat understandable or practical. Mathematics varies from the concrete applicational types, such as calculus and algebra, to the overly-abstract fields, such as linear theory. It's a necessary in anything scientific, and is useful in sweeping the academic debris away from fields in which they could do serious harm, safely into liberal arts studies. While we should all be thankful for that aspect, many people, author included, still despise Mathematics for being overly complicated due to lackidaisical teaching and god-awful textbook writers. Jeepers, I love Mathematics. 1. A truly amasing subject enjoyed only by the people who understand it. 2. Science would not exist without mathematics. 3. "Mathematics is not magic... just logic" - my maths teacher >< 4. Some ignorant people believe that maths is a male subject (Pffft) =) Doing a mathematics question: Gather the important information; figure out what you are required to find; apply the formulae most apropriate to the question; calculate. The feeling of getting the correct answer (expecially if you have toiled a great deal over it) is unbeatable. It's orgasmic ;) 1) A mind-altering form of , practiced and developed largely by , having its origins in prehistory. That study of logic in which simple ideas such as are used and combined to describe the properties of number, point, line, plane and symmetry. Subjects within a modern mathematics degree might include selected topics from; Number Theory : the analysis of why not all numbers are like the number 1. : Differential calculus is the study of how to find the slope of a single point while integral calculus is the study of how to add up infinite quantities of numbers and obtain a single finite number as a result. : The study of how coffee cups are actually quite similar to doughnuts. Mathematics as a form of mental kung fu is used extensively in all the sciences and in business and trade, it is at the heart of all technical progress. The average person uses mathematics, for example, to locate bargains in shopping malls by simply looking at a few numerals on a price tag. Hey Cuthberton, did I tell you that I used some kickass zen master mathematics to calculate which I would ask out to the science fair? 
The World is a Wondrous Place to those who don’t Know Mathematics. An impossible system time-consuming shit designed to form mental kidney stones Man- "I'm going to need you to pick me up from the hospital after I pass mathematics today. The pain is unbearable..." often cold and misunderstood science of numbers applicable to the world; see The professor taught math to his students to help them engineer better computers.
{"url":"http://www.urbandictionary.com/define.php?term=mathematics&defid=1149982","timestamp":"2014-04-18T21:13:53Z","content_type":null,"content_length":"65196","record_id":"<urn:uuid:f4066e0f-e6e2-4589-8b28-e9b605877904>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Summaries of Important Areas for Mineral Investment and Production Opportunities of Nonfuel Minerals in Afghanistan
Table of Contents
By Stephen G. Peters, Trude V.V. King, Thomas J. Mack, Michael P. Chornack (eds.), and the U.S. Geological Survey Afghanistan Mineral Assessment Team
Open-File Report 2011–1204
USGS Afghanistan Project Product No. 199
Volume I
Volume II
[The appendixes are not printed. A DVD in the pocket of volume I contains the whole report, including appendixes 1–3]
Appendix 1. Inventory Spreadsheets for Information Packages Compiled for Each Area of Interest.
Appendix 2. Streamflow Statistics at Ungaged Sites in Areas of Interest.
Appendix 3. Groundwater Hydrographs at or Near Areas of Interest.
{"url":"http://afghanistan.cr.usgs.gov/nonfuel-report","timestamp":"2014-04-18T23:45:17Z","content_type":null,"content_length":"31796","record_id":"<urn:uuid:1532acef-46b7-4829-8c5d-2384d2f337fb>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Multilevel model using long data and residual Jielu Lin posted on Wednesday, September 24, 2008 - 6:51 am Dear Drs. Muthen, I am doing a two-level growth model for a continuous outcome using long data format (like example 9.16). I want to obtain the residuals but the "output: residual" command is not available for "type = random". Is there anyway to get the residuals? Thanks! This is my code: names = y id time; cluster = id; within = time; type = twolevel random; s | y on time; y s Linda K. Muthen posted on Wednesday, September 24, 2008 - 10:11 am The variance of y varies as a function of time. Therefore, there is not one covariance matrix. This is why the RESIDUAL option is not available. Jielu Lin posted on Wednesday, September 24, 2008 - 9:30 pm Thanks for answering my question! I did a fix-random effects model in STATA using the "xmixed" procedure and could obtain a global residual using "predict r, resid".I want to replicate this in mplus. Just want to make sure--Do you mean that mplus won't produce a global residual like the structural equation model growth curves? Thanks! Bengt O. Muthen posted on Thursday, September 25, 2008 - 8:59 am You say your STATA analysis used "a fix-random effects model". I don't know if that means fixed slopes only or also random slopes. If there are no random slopes for covariates, Mplus does give a global residual for the outcome at each time point. Jielu Lin posted on Friday, September 26, 2008 - 6:11 am Got it. Thank you! Jielu Lin posted on Monday, September 29, 2008 - 11:07 am New question: Now I am doing random intercepts only and as you indicated, this gives me a global residual. Is there anyway to get the actual numbers of the residual? I want to do a histogram to see how these residuals are distributed. Thanks! Linda K. Muthen posted on Monday, September 29, 2008 - 2:57 pm We don't give individual residuals. Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=12&page=3571","timestamp":"2014-04-17T00:55:08Z","content_type":null,"content_length":"23803","record_id":"<urn:uuid:218348dd-5ed2-49de-af7c-061dbdabd7af>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
model structure for dendroidal left fibrations model structure for dendroidal left fibrations Model category theory Universal constructions Producing new model structures Presentation of $(\infty,1)$-categories Model structures for $\infty$-groupoids for $n$-groupoids for $\infty$-groups for $\infty$-algebras for stable/spectrum objects for $(\infty,1)$-categories for stable $(\infty,1)$-categories for $(\infty,1)$-operads for $(n,r)$-categories for $(\infty,1)$-sheaves / $\infty$-stacks Higher algebra Algebraic theories Algebras and modules Higher algebras Model category presentations Geometry on formal duals of algebras The model structure for dendroidal left fibrations is an operadic analog of the model structure for left fibrations. Its fibrant objects over Assoc are A-∞ spaces, over Comm they are E-∞ spaces. For $f : S \to T$ any morphism of dendroidal sets, the induced adjunction (by Kan extension) $(f_! \dashv f^* ) : dSet/T \stackrel{\overset{f_!}{\leftarrow}}{\underset{f^*}{\to}} dSet/S$ is a Quillen adjunction for the corresponding model structures for dendroidal left fibrations over $S$ and $T$. It is a Quillen equivalence if $f$ is a weak equivalences in the Cisinki-Moerdijk model structure on dendroidal sets. This is (Heuts, prop. 2.4). Relation to other model structures For an overview of models for (∞,1)-operads see table - models for (infinity,1)-operads. The model structure for dendroidal left fibrations is due to The model structure for dendroidal Cartesian fibrations that it arises from by Bousfield localization is due to Revised on March 7, 2012 10:42:57 by Urs Schreiber
{"url":"http://www.ncatlab.org/nlab/show/model+structure+for+dendroidal+left+fibrations","timestamp":"2014-04-16T13:45:43Z","content_type":null,"content_length":"40890","record_id":"<urn:uuid:9c501c10-a635-4b68-b677-5cfecbb36eb0>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
Introducing Math.Combinatorics.Species! I have just uploaded to Hackage version 0.1 of the species package, a Haskell library for computing with combinatorial species. Much like David Amos’s great series of posts introducing his Haskell for Maths library, I plan to write a series of posts over the next week or two introducing the library, explaining how it works, and showing off some interesting examples. But, first things first: if you’d like to install the library and play along, just cabal update; cabal install species (Man, do I ever love cabal-install! But I digress.) Combinatorial what? So, what are combinatorial species? Intuitively, a species describes a certain combinatorial structure: given an underlying set of labels, it specifies a set of structures which can be built using those labels. For example, $L$, the species of lists, when applied to an underlying set of labels $U$ yields the set of all linear orderings of the elements of $U$. So in general a species can be viewed as a function which takes any set (the labels) and produces another set (of structures built out of those labels). Actually, this isn’t quite enough to capture our intuition about what a species ought to be: we want a species to work “independently” of the underlying set; which labels we use shouldn’t matter at all when it comes to describing structures. So, additionally, we require that if $F$ is a species, any bijection $\sigma : U \to T$ between two sets of labels can be “lifted” to a bijection between sets of $F$-structures, $F[\sigma] : F[U] \to F[T]$, in a way that respects composition of bijections. (Of course, the categorists ought to be jumping out of their seats right now: all this just amounts to saying that a species is an endofunctor $F : \mathbb{B} \to \mathbb{B}$ on the category of sets with bijections.) Importantly, it is not too hard to see that this requirement means that for any species $F$, the size of $F[U]$ depends only on the size of $U$, and not on the actual elements of $U$. Counting labelled structures So, let’s see some examples already! What sorts of things might we want to compute about species? First, we of course want to be able to count how many structures are generated by a species. As a first example, consider again the species $L$ of lists. Given an underlying set $U$ of size $n$, how many lists $L[U]$ are there? That’s easy: $n!$. [brent@euclid:~]$ ghci -XNoImplicitPrelude > :m +Math.Combinatorics.Species > take 10 $ labelled lists The function labelled takes a combinatorial species $F$ as an argument, and computes an infinite list where the entry at index $n$ is the number of labelled $F$-structures on an underlying set of size $n$. (This is also a good time to mention that the species library depends on the Numeric Prelude, an alternative Haskell Prelude with a mathematically sane hierarchy of numeric types; hence we must pass ghci the -XNoImplicitPrelude flag so we don’t get lots of horrible name clashes. I’ll write some additional thoughts on the Numeric Prelude in a future post.) Now, so far this is nothing new: Dan Piponi wrote a blog post about a Haskell DSL for counting labelled structures back in 2007, and in fact, that post was part of my inspiration for this library. Counting labelled structures works by associating exponential generating functions to species. (More on this in a future post.) But we can do more than that! Counting unlabelled structures For one, we can also count unlabelled structures. What’s an unlabelled structure? 
Intuitively, it’s a structure where you can’t tell the difference between the elements of the underlying set; formally, it’s an equivalence class of labelled structures, where two labelled structures are equivalent if one can be transformed into the other by permuting the labels. So, how about unlabelled lists? > take 10 $ unlabelled lists Boring! This makes sense, though: there’s only one way to make a list out of n identical objects. But how about something a bit less trivial? > take 10 $ labelled partitions > take 10 $ unlabelled partitions > :m +Math.OEIS > description `fmap` (lookupSequence . take 10 $ labelled partitions) Just "Bell or exponential numbers: ways of placing n labeled balls into n indistinguishable boxes." > description `fmap` (lookupSequence . take 10 $ unlabelled partitions) Just "a(n) = number of partitions of n (the partition numbers)." (I couldn’t resist sneaking in a little plug for my Math.OEIS module there too. =) So, how does this work? Well… it’s a bit more complicated! But I’ll explain it in a future post, too. Generating structures But that’s not all! Not only can we count structures, we can generate them, too: > generate lists ([1..3] :: [Int]) > generate partitions ([1..3] :: [Int]) This is a bit magical, and of course I will… explain it in a future post. For now, I leave you with this challenge: can you figure out what the asterisks are doing there? (Hint: the curly brackets denote a cycle…) Of course, no DSL would be complete without operations with which to build up more complicated structures from simpler ones; in my next post I’ll talk about operations on combinatorial species. Further reading If you just can’t wait for my next post and want to read more about combinatorial species, I recommend reading the Wikipedia article, which is OK, this fantastic blog post, which is what introduced me to the wonderful world of species, or for a great dead-tree reference (whence I’m getting most of my information), check out Combinatorial Species and Tree-Like Structures by Bergeron, Labelle, and Leroux. 14 Responses to Introducing Math.Combinatorics.Species! 1. in the generate examples, why do you get all the permutations for lists but not for partitions? 2. Because the partitions are set partitions; for example, [[1,2],[3]] denotes a set of sets. I guess the notation I chose is not the best, this is something I can improve in future versions. =) 3. Cool! I’ve been waiting for someone to write a nice library like this, especially the generation part. □ Thanks Dan! I’d love any feedback (or even code contributions! =) you might have. 4. I got the following error while installing: [10:52 PM]$ sudo cabal install species Resolving dependencies… cabal: cannot configure unamb-0.2.2. It requires base ==4.* There is no available version of base that satisfies ==4.* Any ideas on how to solve this? □ What version of ghc do you have? base-4 comes with ghc-6.10.x, so it appears that (for now) the species library will only compile with ghc-6.10.1 or later. I may not ultimately need the unamb dependency, though. ☆ Ah thanks for quick response. I was using 6.8.2. I changed it to 6.10.4 and that fixed the problem :) 5. A while ago, inspired by a conversation with Wouter Swierstra, I decided to implement generic data generation based on combinatorial species theory. 
For instance, for the family of datatypes data Expr = Const Int | Add Expr Expr | Mul Expr Expr | EVar Var | Let Decl Expr data Decl = Var := Expr | Seq Decl Decl | None type Var = String we can generate declarations up to size 6: > test (elements Decl) 6 0 (0): 1 (1): None 2 (0): 3 (1): Seq None None 4 (2): “” := Const 0 “” := EVar “” 5 (2): Seq None (Seq None None) Seq (Seq None None) None or expressions: > test (elements Expr) 6 0 (0): 1 (0): 2 (2): Const 0 EVar “” 3 (0): 4 (2): Let None (Const 0) Let None (EVar “”) 5 (8): Add (Const 0) (Const 0) Add (Const 0) (EVar “”) Add (EVar “”) (Const 0) Add (EVar “”) (EVar “”) Mul (Const 0) (Const 0) Mul (Const 0) (EVar “”) Mul (EVar “”) (Const 0) Mul (EVar “”) (EVar “”) The example code is here http://hpaste.org/fastcgi/hpaste.fcgi/view?id=7569#a7569, but this really needs memoization for any serious use. 6. Hi Brent, When you explained this in the vegetarian restaurant in Edinburgh, I mentioned that there was a talk at ICFP that seemed related. Here is the paper: It defines a “fan”: “fan.F of a datatype F is as a non-deterministic program that, given a so-called seed, constructs an arbitrary F structure in which the only stored value is the seed.” Which sounds really similar to what your unlabelled does. □ Thanks Sjoerd! I’ll be sure to give it a read. It was good to meet you at ICFP–I look forward to seeing you at future conferences as well. 7. I’m tried running some of your inline examples and the “generate” and “lists” bindings are no longer in the Math.Combinatorics.Species namespace. Are there equivalents in version 0.3.2? □ Hi David, sorry for the delayed response. Yes, generate and lists have been renamed to ‘enumerate’ and ‘linOrds’ respectively (since these names are more accurate). This entry was posted in combinatorics, haskell, math, projects and tagged combinatorial species, combinatorics. Bookmark the permalink.
{"url":"http://byorgey.wordpress.com/2009/07/24/introducing-math-combinatorics-species/","timestamp":"2014-04-20T03:10:39Z","content_type":null,"content_length":"92174","record_id":"<urn:uuid:db90b889-cab1-45e0-9211-8567fb116837>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
Jacobs Physics Busy, busy time here at Woodberry. We're approaching trimester exams; today was the last day of class before exam week. Sunday is "consultation day," during which teachers will be in their classroom all afternoon. My general (non-AP) physics class follows approximately the AP physics B curriculum... except that we spend from September through February on mechanics only. The upcoming exam in that class will consist of seven problems, each one based on an authentic AP physics B problem from the last 20 years. I've spent the week creating new exam preparation exercises, especially for general physics. I wrote some new exam-style questions based on AP problems; I followed these up with quizzes, and then multiple choice review exercises with the "clickers," a.k.a. classroom response system. Things are still too hectic for now to post specifics -- baseball tryouts concluded yesterday, and I'm way behind on my grading. But next week I will be able to start posting again. I think these clicker review exercises will be well worth a look. P.S. In case you're confused, the picture is of Woodberry Forest baseball captain and soon-to-be Georgia Tech Yellow Jacket Paul Kronenfeld. And yes, Mr. Kronenfeld IS in fact a physics student. I think I've done a reasonable job over the years teaching Lenz's Law for the direction of an induced current -- my approach is detailed in this post. The next obstacle to my students' understanding is to remember and internalize the problem solving process. This time of year, we've just finished electricity AND magnetism -- it's tough enough to get the class to remember the difference between electric and magnetic fields, let alone the difference between the THREE right hand rules for magnetism and how each one works. Because the use of Lenz's Law seems so easy on the surface, and because you usually have a 50-50 shot of getting the right direction for an induced current, students tend to give short shrift to using the law correctly. They don't write out their reasoning, and so they lose opportunities to cement confusing ideas like the difference between magnetic flux and magnetic field. I'm going to try something a bit different this year. On Monday, when I introduce Lenz's Law, I'm going to write out each step in the process on the board, in longhand. When I assign Lenz's Law problems, then, I am going to require everyone to write out the steps as their solution. In the past, I've begged, pleaded, and cajoled the class to explain their reasoning. Well, I'm gonna try modeling the reasoning I want to see, hoping that I get better retention. Below is an example problem from Serwaybased on the diagram at the top of the page. I've typed in the template that I want students to use: they write out the phrases, and fill in the blanks. This template works with virtually any Lenz's Law problem. A rectangular loop is placed near a long wire carrying a current I¸ as shown above. The current I is decreasing. What is the direction of the current in the resistor? Write out the Lenz's law problem solving process: (1) The direction of the magnetic field is _____________. I know this because___. (2) The magnetic flux is (increasing / decreasing). I know this because ____. (3) So I point my right thumb ____________ toward (increasing / decreasing) flux, which means the current flows _______________. 
My AP labs generally follow a predictable pattern: We take a large set of data, make a graph that's curved, manipulate the graph's axes to form a straight-line, and then use the slope of that line to determine some value associated with the experiment. AP exam questions frequently follow this same approach. For example, my first experiment of the year asks students to attach a cart to a spring scale, then to put the cart on an incline. We measure the force parallel to the incline plane that holds up the cart; we plot that force as a function of the angle of the incline, giving a curve. Next, we plot the force we measured vs. the sine of the incline angle; this gives a straight line. Students show that the slope of the line is equal to the weight of the cart, and they divide this weight by g to find the cart's mass. This process goes on again and again -- make a straight line, and relate the slope to a constant quantity. I know I'm doing my job when students start to groan about going through the same process again and again. BUT: The slope of a stright-line graph isn't always the most meaningful of that line's attributes. Many folks were shocked in 2007 when problem 6 (check out the link, page 11) required the use not of the slope, but of the y-intercept of an experimental graph. The problem involved use of the thin lens equation, 1/f = 1/di + 1/do. The y-axis was 1/di, the x-axis was 1/do, so the y-intercept became the recipricol of the lens's focal length. I do that experiment in the spring. But I want to give the class some experience with using the y-intercept of an experimental graph now, and we haven't dealt with lenses yet. I've used the pressure in a static column equation, P=Po + ρgh before -- use a vernier probe to measure the absolute pressure in a water-filled container as a function of depth. The slope of the pressure - depth graph is ρg, while the y-intercept is the surface pressure Po. This year, for a variety of reasons, I wanted an experiment less reliant on computer data collection and more complex in its analysis. The equivalent resistance of parallel resistors is given by 1/Req = 1/R1 + 1/R2. Huh... This equation is quite similar in mathematical form to the thin lens equation. I have the equipment to measure the equivalent resistance of a parallel combination (i.e. an ohmmeter); I have several "variable resistance boxes," pictured above, which allow the resistance to be varied through a very wide range. So why not use this equation as my "y-intercept training?" I took a bunch of resistors in the 5-100 kΩ range, put them on breadboards, and called them the "mystery resistors." I showed the class how to put the variable resistance boxes in parallel with the mystery resistors, and how to measure the equivalent resistance with the meter. I initially asked them to graph the equivalent resistance as a function of the box resistance -- this gave a curve. Next, I asked them to plot 1/Req on the vertical axis, and the recipricol of the box resistance on the horizontal axis. This produced a line. Of course, everyone knew by now to draw a best-fit line; but most folks reflexively took the slope of that line. It was only after I made them identify the y-axis, x-axis, slope, and intercept using the equation of a line (y =mx +b) that they recognized to use the y-intercept -- the inverse of the y-intercept is the resistance of the mystery resistor. I will scan in some sample data below, once some folks turn in their labs. 
It didn't occur to me until later, and it never occurred to any student that I know of, that the original Req vs. Rbox graph could be used directly to find the mystery resistance: draw the asymptote as the box resistance gets very large... then read off the vertical axis to get the mystery resistor. You can tell everyone WHY that works in the comments; I might ask that question as a quiz someday.

I gave a free response test today. The first problem was 1993 B2, about the electric field and potential due to point charges. This is the easiest of such problems, I think, in the AP physics B annals. The second problem was 2007 B2, a ranking task about circuits; this one makes my "top 5 AP physics problems of all time" list. (That's a list I should publish some time on this blog.) I haven't graded the third one yet, the easy 1998 problem with standing waves on a string. In those first two problems, at least seven of my 23 students used an incorrect equation. Why is that so annoying? Because the AP free response section, as well as today's test, provides an equation sheet. Below is a note I wrote to the class's email folder. I publish it here because I think it gives good advice about how to use the equation sheet, and perhaps implies the fundamental reason that the sheet is provided in the first place.

Look. You've got to memorize equations. We all know that. So, then, why do you even have an equation sheet? The novice or dumb physics student hunts and pecks through the sheet. "The problem asked for a charge on the capacitor, and I know charge is Q. So, let's look for an equation with a Q in it. Here's one! ΔU = Q + W! Now I just need to plug in something or other for U and W." I know that you all aren't doing that, and I'm glad. (You would be shocked how many people are just this silly.) However: You should never use an incorrect equation on a free response test! You've got the equation sheet right there. You should use it any time you're unsure of the exact form of an equation: "Huh, is it Q=CV, or C=QV? I don't remember. Let me look that up real quick." Or, "Is electric potential kQ/d, or d squared? I don't remember for sure, I should check." Of course, I want you to know these things by heart. But when you know you're unsure, when you know you might have a brain fart under pressure, just use the sheet. That's what it's for.

People ask me about textbooks a lot. I don't have a favorite. I've tried several, but I've never been truly happy with any of them -- most are too detailed for the novice, even though they serve as reasonable references once students have experience. Lately, it seems that all the textbooks are trying to advertise their usefulness for test preparation, which is silly -- if it's a good textbook, then it will be useful in preparing for a good test. Period. But someone is insisting that texts add test prep gimmicks, sometimes via online resources, more often by merely adding a section of multiple choice questions. Be careful with these multiple choice questions. Most are LOUSY. Serway, for example, has taken a bunch of plug-and-chug problems and made them into "multiple choice" items; no, that's not what a multiple choice question is about. They and other texts also ask questions that are so involved that they are not answerable in a minute or two -- EVERY multiple choice exam available requires an average of, at most, a couple of minutes per item.
The best multiple choice questions that I personally have seen in a standard textbook are in the Giambattista, Richardson, and Richardson text. This one was recommended to my by Martha Lietz, a colleague from the AP reading who teaches outside of Chicago. Giambattista has a reasonable grasp of the purpose and scope of a multiple choice item. Take a look at these, from the second edition, that I adapted for a recent quiz: 23. An organ pipe is closed at one end. Which sketch is NOT a possible standing wave pattern for this pipe? 24. Which sketch shows the lowest frequency standing wave for an organ pipe closed at one end? Now, I know that these would be back-to-back problems on an extensive exam, and I know that many exams use five rather than four choices. Who cares. The thrust of these problems is good. The first tests whether the student recognizes that a closed pipe includes an antinode at one end and a node at the other; the other simply tests visually the student's ability to apply v=λf based on a picture of the wave. Giambattista has more good multiple choice questions, of course, and a few bad ones as well. But if you're looking for a good source, get yourself a copy of this book.
{"url":"http://jacobsphysics.blogspot.com/2010_02_01_archive.html","timestamp":"2014-04-20T03:17:25Z","content_type":null,"content_length":"100895","record_id":"<urn:uuid:4bc6c882-caae-4d25-93d7-1e6e4c597ee6>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Learning Different Types Of Sport Odds - Sports Betting Library Learning Different Types Of Sport Odds It’s hard enough trying to figure out if the odds for a given game are fair or attractive. It’s even harder when you have to figure out how to read the odds in the first place. There are several different ways that sports odds can be written. While they all ultimately mean the same thing it can be confusing when you first see each type, or when you try to figure out how they all relate. We’re going to take a look at four of the most common – North American and European books most often use U.S., decimal, or fractional odds, while Asian events most commonly use Hong Kong odds. Fractional odds – We’ll start here because these are the easiest to understand, and they are the ones that most people who don’t know much about sports betting talk about. They will occasionally be used in North America, but are most common in the UK. As the name suggests, these odds are presented as fractions. For example, if the odds are 3/2 then for every two dollars you bet on the game you would make a profit of three dollars. At 3/1 you would make three dollars for every dollar you bet, and so on. An even money payoff is 1/1, and payoffs of less than even money are represented by fractions less than one – like with odds of ½ you would make a profit of one dollar for every two dollars you bet. All you have to remember here is that the top number in the fraction represents the amount of profit you are making, not the total amount paid. at 3/2, for example, your total return on a winning two dollar bet would be five dollars – your three dollar profit, plus the original two dollars you bet. U.S. odds – Not surprisingly these are the odds used most often in the U.S. and Canada. The odds are either positive or negative numbers, and they are at least three digit numbers bigger than 100. Negative numbers are for bets that will pay off at less than even money. The easiest way to think about these is that they are the amount of money you would have to bet to win $100. For example, odds of -200 means that you would make a profit of $100 for every $200 bet – the same as fractional odds of ½. Odds that pay more than even money are represented by positive numbers, and can be thought of as the amount you would win if you bet $100. Odds of +150 means you would make a profit of $150 if you bet $100 – the same as fractional odds of 3/2. Even money bets are expressed as +100. Decimal odds – These are the types of odds most commonly used in continental Europe. These odds are expressed as numbers greater than one, and can be thought of as the amount you would get back for every one dollar bet including your original bet. Decimals odds of 1.50 mean that for every dollar you bet you make a profit of 50 cents. That’s the same as fractional odds of ½ and U.S. odds of -200. Decimal odds of 2.00 are even money, and 2.50 would be the equivalent of 3/2 or +150. They are most commonly listed with two decimals places, but can be expressed with more than that in some Hong Kong odds – These are essentially the same as decimal odds, except that they don’t factor in the original bet. That means that Hong Kong odds of 1.00 are even money – you get one dollar back for every dollar you bet. To continue our examples from the previous types of odds, fractional odds of ½, U.S. odds of -200, decimal odds of 1.50 and Hong Kong odds of 0.5 are all the same thing, and so are 3/2, +150, 2.50 and 1.50 respectively. 
Hong Kong odds aren’t tough to understand, but if you can’t figure them out don’t worry about it unless you plan to head to Asia to place your bets. Several different online sportsbooks allow you to change back and forth between the different types of odds – at least the top three types – so if one way of expressing them makes more sense to you than another you can easily makes your bets in that way.
{"url":"http://www.madduxsports.com/library/sports-betting/different-types-of-sport-odds.html","timestamp":"2014-04-21T12:39:08Z","content_type":null,"content_length":"19730","record_id":"<urn:uuid:41dfb2c5-83fb-47d5-aa92-f33e5d5bffee>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
Verlet collision with impulse preservation This article presents a system to integrate verlet physics collision with preservation of impulse. Normally verlet physics annihilates a lot of energy from a system, which makes it very stable but also quite unrealistic. Additionally simple methods of preserving impulse yield very unstable systems, a limitation which can be overcome by two steps of integration, one for at-rest acceleration canceling, and one with impulse preservation. I'm building on the concept of verlet integration that I explained in a previous article. You might also be interested in different integration methods. The simulation below is the result of the tutorial to follow. Hover over it with the mouse to run the simulation. You can click into the canvas to create new spheres of random size. You can put this on your own page if you want by pasting in the code blow. src = "http://codeflow.org/frames/verlet.html#color=#f99e00,bg=none,start=20" width = "300" height = "200" frameborder = "0" scrolling = "no" The arguments after the hash control the circle color, background color and start count of bodies, you can set any size you wish. Verlet physics is very good at resolving hard constraints in a stable fashion. However, in the process it destroys a lot of energy. The issue is visible with the code below. The simulation consists of spherical bodies each of which is collided with every other body and the level boundary. Each body has a radius, position, previous position and accumulated acceleration. var Body = function(x, y, radius){ this.x = x; this.y = y; this.px = x; this.py = y; this.ax = 0; this.ay = 0; this.radius = radius; Acceleration on a body is accumulated in the acceleration components and this method transfers the accumulated acceleration to a new current position. accelerate: function(delta){ this.x += this.ax * delta * delta; this.y += this.ay * delta * delta; this.ax = 0; this.ay = 0; Transfer of existing inertia to a new position in a verlet integration step is done by the following method: inertia: function(delta){ var x = this.x*2 - this.px; var y = this.y*2 - this.py; this.px = this.x; this.py = this.y; this.x = x; this.y = y; A body is drawn with the following code: draw: function(ctx){ ctx.arc(this.x, this.y, this.radius, 0, Math.PI*2, false); The simulation maintains a list of all bodies and runs this method for each simulation step: var step = function(){ var steps = 2; var delta = 1/steps; for(var i=0; i<steps; i++){ // simulation code here It has a few helper methods to apply the discrete steps of the simulation to the bodies (like acceleration, gravity, inertia and so on). This is the content of the inner loop of each step for the energy annihilating pure verlet solution. Gravity accumulates the gravitational acceleration on bodies, accelerate moves the bodies according to acceleration. Note that this results in a higher velocity (as it should) in the verlet integration scheme, since the current position moved away further from the previous position. Collide resolves body vs. 
body constraints (two spheres may not overlap): var collide = function(){ for(var i=0, l=bodies.length; i<l; i++){ var body1 = bodies[i]; for(var j=i+1; j<l; j++){ var body2 = bodies[j]; var x = body1.x - body2.x; var y = body1.y - body2.y; var slength = x*x+y*y; var length = Math.sqrt(slength); var target = body1.radius + body2.radius; // if the spheres are closer // then their radii combined if(length < target){ var factor = (length-target)/length; // move the spheres away from each other // by half the conflicting length body1.x -= x*factor*0.5; body1.y -= y*factor*0.5; body2.x += x*factor*0.5; body2.y += y*factor*0.5; Inertia processes a bodies velocity derived from its previous position. It is easy to see why this relatively simple scheme does not preserve impulse energy. The verlet constraint resolution just moves the bodies around in order to be conflict free, and the velocity is encoded in the previous position, which is not adjusted for impulse preservation. We could move the previous position around at each collision according to the laws of deflection and impulse transfer as to reflect an energetically more correct system state. The impulse that two spherical bodies can exchange when colliding acts co-linearly to the normal on their contact surface, leading trough their centers of mass. Friction and rotational impulse are ignored for this tutorial. The velocities of the bodies have to be decomposed into two components that represent the size of the velocity in the direction of the possible impulse force. This can be done by vector projection. Here the velocity is the red vectors and the impulse direction component is green. Now the impulses need to be swapped between the bodies, which means subtracting each from one body and adding it to the other which results in a new velocity (blue): The code in the collision loop to do this is below. if(length < target){ // record previous velocity var v1x = body1.x - body1.px; var v1y = body1.y - body1.py; var v2x = body2.x - body2.px; var v2y = body2.y - body2.py; // resolve the body overlap conflict var factor = (length-target)/length; body1.x -= x*factor*0.5; body1.y -= y*factor*0.5; body2.x += x*factor*0.5; body2.y += y*factor*0.5; // compute the projected component factors var f1 = (damping*(x*v1x+y*v1y))/slength; var f2 = (damping*(x*v2x+y*v2y))/slength; // swap the projected components v1x += f2*x-f1*x; v2x += f1*x-f2*x; v1y += f2*y-f1*y; v2y += f1*y-f2*y; // the previous position is adjusted // to represent the new velocity body1.px = body1.x - v1x; body1.py = body1.y - v1y; body2.px = body2.x - v2x; body2.py = body2.y - v2y; This is better in the preservation of impulse, however it is not stable. Ghost impulse is being created by the application of acceleration and the subsequent preservation. See the example below If you look closely you see a jittering effect. It would be possible to minimize the jittering by increasing the step count, but this would be very expensive method, and the jittering would remain (just at smaller scales). If you think about the problem it's obvious why the ghost impulse is introduced. If a sphere lies at rest in equilibrium on top of another sphere or the floor, then the acceleration creates velocity where none would have existed in the first place. The characteristic of acceleration at rest is that the pushback from the constraint exactly cancels the acceleration out. The idea of correcting the simulation for this is to go it in two steps. 1. 
Accumulate forces and move object, resolve constraints but without preserving impulse. This gives an object the chance to cancel acceleration impulse out when in equilibrium. 2. The resulting velocity remaining after step 1 must have been unconstrained, therefore it is now real impulse, so we can move the object by its inertia and preserve its impulse when constrained. For this the core step loop is adjusted: The collide method now takes an argument (true or false) which indicates if impulse preservation should be performed (the boundary collision is also split). if(length < target){ var v1x = body1.x - body1.px; var v1y = body1.y - body1.py; var v2x = body2.x - body2.px; var v2y = body2.y - body2.py; var factor = (length-target)/length; body1.x -= x*factor*0.5; body1.y -= y*factor*0.5; body2.x += x*factor*0.5; body2.y += y*factor*0.5; var f1 = (damping*(x*v1x+y*v1y))/slength; var f2 = (damping*(x*v2x+y*v2y))/slength; v1x += f2*x-f1*x; v2x += f1*x-f2*x; v1y += f2*y-f1*y; v2y += f1*y-f2*y; body1.px = body1.x - v1x; body1.py = body1.y - v1y; body2.px = body2.x - v2x; body2.py = body2.y - v2y; You can see that the obnoxious jittering is gone. To achieve similarly stable results without the at-rest optimization, 20 or more internal steps would have to be performed. I've demonstrated a method to implement very hard collision contacts and an optimization that makes it possible to have very nice at-rest state while also preserving impulse. I hope you enjoyed the
{"url":"http://codeflow.org/entries/2010/nov/29/verlet-collision-with-impulse-preservation/","timestamp":"2014-04-19T22:07:52Z","content_type":null,"content_length":"51953","record_id":"<urn:uuid:a5198ad3-97c0-48e1-baf8-3082034bf742>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Re: indexing problem
Tim Hochberg tim.hochberg at cox.net
Mon Feb 13 15:34:01 CST 2006

Ryan Krauss wrote:
>At the risk of sounding silly, can you explain to me in simple terms
>why s**2 is less accurate than s*s. I can sort of intuitively
>appreciate that that would be true, but might like just a little
>more detail.

I don't know that it has to be *less* accurate, although it's unlikely to be more accurate since s*s should be nearly as accurate as you get with floating point. Multiplying two complex numbers in numpy is done in the most straightforward way imaginable:

    result.real = z1.real*z2.real - z1.imag*z2.imag
    result.image = z1.real*z2.imag + z1.imag*z2.real

The individual results lose very little precision and the overall result will be nearly exact to within the limits of floating point. On the other hand, s**2 is being calculated by a completely different route. Something that will look like:

    result = pow(s, 2.0)

Pow is some general function that computes the value of s to any power. As such it's a lot more complicated than the above simple expression. I don't think that there's any reason in principle that pow(s,2) couldn't be as accurate as s*s, but there is a tradeoff between accuracy, speed and simplicity of implementation. That being said, it may be worthwhile having a look at complex pow and see if there's anything suspicious that might make the error larger than it needs to be. If all of that sounds a little bit like "I don't really know", there's some of that in there too.

>On 2/13/06, Tim Hochberg <tim.hochberg at cox.net> wrote:
>>>>Ryan Krauss wrote:
>>>>>This may only be a problem for ridiculously large numbers. I actually
>>>>>meant to be dealing with these values:
>>>>>In [75]: d
>>>>>array([ 246.74011003, 986.96044011, 2220.66099025, 3947.84176044,
>>>>> 6168.50275068, 8882.64396098, 12090.26539133, 15791.36704174,
>>>>> 19985.94891221, 24674.01100272])
>>>>>In [76]: s=d[-1]*1.0j
>>>>>In [77]: s
>>>>>Out[77]: 24674.011002723393j
>>>>>In [78]: type(s)
>>>>>Out[78]: <type 'complex128scalar'>
>>>>>In [79]: s**2
>>>>>Out[79]: (-608806818.96251547+7.4554869875188623e-08j)
>>>>>So perhaps the previous difference of 26 orders of magnitude really
>>>>>did mean that the imaginary part was negligibly small, that just got
>>>>>obscured by the fact that the real part was order 1e+135.
>>>>>On 2/13/06, Ryan Krauss <ryanlists at gmail.com> wrote:
>>I got myself all tied up in a knot over this because I couldn't figure
>>out how multiplying two purely complex numbers was going to result in
>>something with a complex portion. Since I couldn't find the complex
>>routines my imagination went wild: perhaps, I thought, numpy uses the
>>complex multiplication routine that uses 3 multiplies instead of the
>>more straightforward one that uses 4 multiplies, etc, etc. None of these
>>panned out, and of course they all evaporated when I got pointed to the
>>code that implements this which is pure vanilla. All the time I was
>>overlooking the obvious:
>>Ryan is using s**2, not s*s.
>>So the obvious answer is that he's just seeing normal error in the
>>function that is implementing pow.
>>If this inaccuracy is a problem, I'd just replace s**2 with s*s. It will
>>probably be both faster and more accurate anyway.
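The effect discussed in this thread is easy to try for oneself. A minimal sketch follows; the exact output depends on the platform, the C math library and the numpy version, so the tiny spurious imaginary part may or may not appear.

    import numpy as np

    s = np.complex128(24674.011002723393j)   # the purely imaginary value from the thread
    print(s * s)    # plain complex multiply: the imaginary part is exactly 0.0
    print(s ** 2)   # goes through the general power routine and may carry a tiny,
                    # harmless spurious imaginary part (rounding error)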
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-February/006263.html","timestamp":"2014-04-16T13:53:02Z","content_type":null,"content_length":"9334","record_id":"<urn:uuid:d28c63d4-9ce8-477f-8e30-01b0ac42f77e>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability and statistics EBook From Socr This is a General Statistics Curriculum E-Book, which includes Advanced-Placement (AP) materials. This is an Internet-based probability and statistics E-Book. The materials, tools and demonstrations presented in this E-Book would be very useful for advanced-placement (AP) statistics educational curriculum. The E-Book is initially developed by the UCLA Statistics Online Computational Resource (SOCR). However, all statistics instructors, researchers and educators are encouraged to contribute to this project and improve the content of these learning materials. There are 4 novel features of this specific Statistics EBook. It is community-built, completely open-access (in terms of use and contributions), blends information technology, scientific techniques and modern pedagogical concepts, and is multilingual. Follow the instructions in this page to expand, revise or improve the materials in this E-Book. This section describes the means of traversing, searching, discovering and utilizing the SOCR Statistics EBook resources in both formal and informal learning setting. The problems of each section in the E-Book are shown here. The Probability and Statistics EBook is a freely and openly accessible electronic book developed by SOCR and the general community. Chapter I: Introduction to Statistics Although natural phenomena in the real life are unpredictable, the designs of experiments are bounded to generate data that varies because of intrinsic (internal to the system) or extrinsic (due to the ambient environment) effects. How many natural processes or phenomena in the real life that have an exact mathematical closed-form description and are completely deterministic can we describe? How do we model the rest of the processes that are unpredictable and have random characteristics? Statistics is the science of variation, randomness and chance. As such, statistics is different from other sciences, where the processes being studied obey exact deterministic mathematical laws. Statistics provides quantitative inference represented as long-time probability values, confidence or prediction intervals, odds, chances, etc., which may ultimately be subjected to varyious interpretations. The phrase Uses and Abuses of Statistics refers to the notion that in some cases statistical results may be used as evidence to seemingly opposite theses. However, most of the time, common principles of logic allow us to disambiguate the obtained statistical inference. Design of experiments is the blueprint for planning a study or experiment, performing the data collection protocol and controlling the study parameters for accuracy and consistency. Data, or information, is typically collected in regard to a specific process or phenomenon being studied to investigate the effects of some controlled variables (independent variables or predictors) on other observed measurements (responses or dependent variables). Both types of variables are associated with specific observational units (living beings, components, objects, materials, etc.) All methods for data analysis, understanding or visualizing are based on models that often have compact analytical representations (e.g., formulas, symbolic equations, etc.) Models are used to study processes theoretically. Empirical validations of the utility of models are achieved by inputting data and executing tests of the models. This validation step may be done manually, by computing the model prediction or model inference from recorded measurements. 
This process may be possibly done by hand, but only for small numbers of observations (<10). In practice, we write (or use existent) algorithms and computer programs that automate these calculations for greater efficiency, accuracy and consistency in applying models to larger datasets. Chapter II: Describing, Exploring, and Comparing Data There are two important concepts in any data analysis - Population and Sample. Each of these may generate data of two major types - Quantitative or Qualitative measurements. There are two important ways to describe a data set (sample from a population) - Graphs or Tables. There are many different ways to display and graphically visualize data. These graphical techniques facilitate the understanding of the dataset and enable the selection of an appropriate statistical methodology for the analysis of the data. There are three main features of populations (or sample data) that are always critical in understanding and interpreting their distributions - Center, Spread and Shape. The main measures of centrality are Mean, Median and Mode(s). There are many measures of (population or sample) spread, e.g., the range, the variance, the standard deviation, mean absolute deviation, etc. These are used to assess the dispersion or variation in the population. The shape of a distribution can usually be determined by looking at a histogram of a (representative) sample from that population; Frequency Plots, Dot Plots or Stem and Leaf Displays may be helpful. Variables can be summarized using statistics - functions of data samples. Graphical visualization and interrogation of data are critical components of any reliable method for statistical modeling, analysis and interpretation of data. Chapter III: Probability Probability is important in many studies and discipline because measurements, observations and findings are often influenced by variation. In addition, probability theory provides the theoretical groundwork for statistical inference. Some fundamental concepts of probability theory include random events, sampling, types of probabilities, event manipulations and axioms of probability. There are many important rules for computing probabilities of composite events. These include conditional probability, statistical independence, multiplication and addition rules, the law of total probability and the Bayesian Rule. Many experimental setting require probability computations of complex events. Such calculations may be carried out exactly, using theoretical models, or approximately, using estimation or There are many useful counting principles (including permutations and combinations) to compute the number of ways that certain arrangements of objects can be formed. This allows counting-based estimation of complex events' probabilities. Chapter IV: Probability Distributions There are two basic types of processes that we observe in nature - Discrete and Continuous. We begin by discussing several important discrete random processes, emphasizing the different distributions, expectations, variances and applications. In the next chapter, we will discuss their continuous counterparts and other continuous distributions are discussed in a later chapter. The complete list of all SOCR Distributions is available here. 
To simplify the calculations of probabilities, we will define the concept of a random variable which will allow us to study uniformly various processes with the same mathematical and computational The expectation and the variance for any discrete random variable or process are important measures of Centrality and Dispersion. This section also presents the definitions of some common population- or sample-based moments. The Bernoulli and Binomial processes provide the simplest models for discrete random experiments. Multinomial processes extend the Binomial experiments for the situation of multiple possible outcomes. The Geometric, Hypergeometric, Negative Binomial, and Negative Multinomial distributions provide computational models for calculating probabilities for a large number of experiment and random variables. This section presents the theoretical foundations and the applications of each of these discrete distributions. The Poisson distribution models many different discrete processes where the probability of the observed phenomenon is constant in time or space. Poisson distribution may be used as an approximation to the Binomial distribution. Chapter V: Normal Probability Distribution The Normal Distribution is perhaps the most important model for studying quantitative phenomena in the natural and behavioral sciences - this is due to the Central Limit Theorem. Many numerical measurements (e.g., weight, time, etc.) can be well approximated by the normal distribution. Other commonly used continuous distributions are discussed in a later chapter. The Standard Normal Distribution is the simplest version (zero-mean, unit-standard-deviation) of the (General) Normal Distribution. Yet, it is perhaps the most frequently used version because many tables and computational resources are explicitly available for calculating probabilities. In practice, the mechanisms underlying natural phenomena may be unknown, yet the use of the normal model can be theoretically justified in many situations to compute critical and probability values for various processes. In addition to being able to compute probability (p) values, we often need to estimate the critical values of the Normal Distribution for a given p-value. The multivariate normal distribution (also known as multivariate Gaussian distribution) is a generalization of the univariate (one-dimensional) normal distribution to higher dimensions (2D, 3D, etc.) The multivariate normal distribution is useful in studies of correlated real-valued random variables. Chapter VI: Relations Between Distributions In this chapter, we will explore the relationships between different distributions. This knowledge will help us to compute difficult probabilities using reasonable approximations and identify appropriate probability models, graphical and statistical analysis tools for data interpretation. The complete list of all SOCR Distributions is available here and the Probability Distributome project provides an interactive graphical interface for exploring the relations between different distributions. The exploration of the relations between different distributions begins with the study of the sampling distribution of the sample average. This will demonstrate the universally important role of normal distribution. Suppose the relative frequency of occurrence of one event whose probability to be observed at each experiment is p. 
If we repeat the same experiment over and over, the ratio of the observed frequency of that event to the total number of repetitions converges towards p as the number of experiments increases. Why is that and why is this important? Normal Distribution provides a valuable approximation to Binomial when the sample sizes are large and the probability of successes and failures is not close to zero. Poisson provides an approximation to Binomial Distribution when the sample sizes are large and the probability of successes or failures is close to zero. Binomial Distribution is much simpler to compute, compared to Hypergeometric, and can be used as an approximation when the population sizes are large (relative to the sample size) and the probability of successes is not close to zero. The Poisson can be approximated fairly well by Normal Distribution when λ is large. Chapter VII: Point and Interval Estimates Estimation of population parameters is critical in many applications. Estimation is most frequently carried in terms of point-estimates or interval (range) estimates for population parameters that are of interest. There are many ways to obtain point (value) estimates of various population parameters of interest, using observed data from the specific process we study. The method of moments and the maximum likelihood estimation are among the most popular ones frequently used in practice. This section discusses how to find point and interval estimates when the sample-sizes are large. Next, we discuss point and interval estimates when the sample-sizes are small. Naturally, the point estimates are less precise and the interval estimates produce wider intervals, compared to the case of large-samples. The Student's T-Distribution arises in the problem of estimating the mean of a normally distributed population when the sample size is small and the population variance is unknown. Normal Distribution is an appropriate model for proportions, when the sample size is large enough. In this section, we demonstrate how to obtain point and interval estimates for population In many processes and experiments, controlling the amount of variance is of critical importance. Thus the ability to assess variation, using point and interval estimates, facilitates our ability to make inference, revise manufacturing protocols, improve clinical trials, etc. This activity demonstrates the usage and functionality of SOCR General Confidence Interval Applet. This applet is complementary to the SOCR Simple Confidence Interval Applet and its corresponding Chapter VIII: Hypothesis Testing Hypothesis Testing is a statistical technique for decision making regarding populations or processes based on experimental data. It quantitatively answers the possibility that chance alone might be responsible for the observed discrepancies between a theoretical model and the empirical observations. In this section, we define the core terminology necessary to discuss Hypothesis Testing (Null and Alternative Hypotheses, Type I and II errors, Sensitivity, Specificity, Statistical Power, etc.) As we already saw how to construct point and interval estimates for the population mean in the large sample case, we now show how to do hypothesis testing in the same situation. We continue with the discussion on inference for the population mean of small samples. When the sample size is large, the sampling distribution of the sample proportion $\hat{p}$ is approximately Normal, by CLT. 
This helps us formulate hypothesis testing protocols and compute the appropriate statistics and p-values to assess significance. The significance testing for the variation or the standard deviation of a process, a natural phenomenon or an experiment is of paramount importance in many fields. This chapter provides the details for formulating testable hypotheses, computation, and inference on assessing variation. Chapter IX: Inferences From Two Samples In this chapter, we continue our pursuit and study of significance testing in the case of having two populations. This expands the possible applications of one-sample hypothesis testing we saw in the previous chapter. We need to clearly identify whether samples we compare are Dependent or Independent in all study designs. In this section, we discuss one specific dependent-samples case - Paired Samples. Independent Samples designs refer to experiments or observations where all measurements are individually independent from each other within their groups and the groups are independent. In this section, we discuss inference based on independent samples. In this section, we compare variances (or standard deviations) of two populations using randomly sampled data. This section presents the significance testing and inference on equality of proportions from two independent populations. Chapter X: Correlation and Regression Many scientific applications involve the analysis of relationships between two or more variables involved in a process of interest. We begin with the simplest of all situations where Bivariate Data (X and Y) are measured for a process and we are interested in determining the association, relation or an appropriate model for these observations (e.g., fitting a straight line to the pairs of (X,Y) The Correlation between X and Y represents the first bivariate model of association which may be used to make predictions. We are now ready to discuss the modeling of linear relations between two variables using Regression Analysis. This section demonstrates this methodology for the SOCR California Earthquake dataset. In this section, we discuss point and interval estimates about the slope of linear models. Now, we are interested in determining linear regressions and multilinear models of the relationships between one dependent variable Y and many independent variables X[i]. Chapter XI: Analysis of Variance (ANOVA) We now expand our inference methods to study and compare k independent samples. In this case, we will be decomposing the entire variation in the data into independent components. Now we focus on decomposing the variance of a dataset into (independent/orthogonal) components when we have two (grouping) factors. This procedure called Two-Way Analysis of Variance. Chapter XII: Non-Parametric Inference To be valid, many statistical methods impose (parametric) requirements about the format, parameters and distributions of the data to be analyzed. For instance, the Independent T-Test requires the distributions of the two samples to be Normal, whereas Non-Parametric (distribution-free) statistical methods are often useful in practice, and are less-powerful. The Sign Test and the Wilcoxon Signed Rank Test are the simplest non-parametric tests which are also alternatives to the One-Sample and Paired T-Test. These tests are applicable for paired designs where the data is not required to be normally distributed. 
The Wilcoxon-Mann-Whitney (WMW) Test (also known as Mann-Whitney U Test, Mann-Whitney-Wilcoxon Test, or Wilcoxon rank-sum Test) is a non-parametric test for assessing whether two samples come from the same distribution. Depending upon whether the samples are dependent or independent, we use different statistical tests. We now extend the multi-sample inference which we discussed in the ANOVA section, to the situation where the ANOVA assumptions are invalid. There are several tests for variance equality in k samples. These tests are commonly known as tests for Homogeneity of Variances. Chapter XIII: Multinomial Experiments and Contingency Tables The Chi-Square Test is used to test if a data sample comes from a population with specific characteristics. The Chi-Square Test may also be used to test for independence (or association) between two variables. Chapter XIV:Bayesian Statistics This section will establish the groundwork for Bayesian Statistics. Probability, Random Variables, Means, Variances, and the Bayes’ Theorem will all be discussed. In this section, we will provide the basic framework for Bayesian statistical inference. Generally, we take some prior beliefs about some hypothesis and then modify these prior beliefs, based on some data that we collect, in order to arrive at posterior beliefs. Another way to think about Bayesian Inference is that we are using new evidence or observations to update the probability that a hypothesis is true. This section explains the binomial, Poisson, and uniform distributions in terms of Bayesian Inference (also see the chapter on other common distributions). This section will talk about both the classical approach to hypothesis testing and also the Bayesian approach. This section discusses two sample problems, with variances unknown, both equal and unequal. The Behrens-Fisher controversy is also discussed. Hierarchical linear models are statistical models of parameters that vary at more than a level. These models are seen as generalizations of linear models and may extend to non-linear models. Any underlying correlations in the particular model must be represented in analysis for correct inference to be drawn. Topics covered will include Monte Carlo Methods, Markov Chains, the EM Algorithm, and the Gibbs Sampler. Chapter XV: Other Common Continuous Distributions Earlier we discussed some classes of commonly used Discrete and Continuous distributions. Below are some continuous distributions with broad range of applications. The complete list of all SOCR Distributions is available here. The Probability Distributome Project provides an interactive navigator for traversal, discovery and exploration of probability distribution properties and The Gamma distribution is a distribution that arises naturally in processes for which the waiting times between events are relevant. It can be thought of as a waiting time between Poisson distributed The Exponential distribution is a special case of the Gamma distribution. Whereas the Gamma distribution is the waiting time for more than one event, the Exponential distribution describes the time between a single Poisson event. The Pareto distribution is a skewed, heavy-tailed distribution that is sometimes used to model the distribution of incomes. The basis of the distribution is that a high proportion of a population has low income while only a few people have very high incomes. 
The Beta distribution is a distribution that models events which are constrained to take place within an interval defined by a minimum and maximum value.

The Laplace distribution is a distribution that is symmetrical and more "peaky" than a Normal distribution. The dispersion of the data around the mean is higher than that of a Normal distribution. The Laplace distribution is also sometimes called the Double Exponential distribution.

The Cauchy distribution, also called the Lorentzian distribution or Lorentz distribution, is a continuous distribution describing resonance behavior.

The Chi-Square distribution is used in the chi-square tests for goodness of fit of an observed distribution to a theoretical one and the independence of two criteria of classification of qualitative data. It is also used in confidence interval estimation for a population standard deviation of a normal distribution from a sample standard deviation. The Chi-Square distribution is a special case of the Gamma distribution.

The F distribution is commonly used as the null distribution of a test statistic, such as in the analysis of variance (ANOVA).

The Johnson SB distribution is related to the Normal distribution. Four parameters are needed: γ, δ, λ, ε. It is a continuous distribution defined on the bounded range $\epsilon \leq x \leq \epsilon + \lambda$, and the distribution can be symmetric or asymmetric.

Also known as the Rician Distribution, the Rice distribution is the probability distribution of the absolute value of a circular bivariate normal random variable with potentially non-zero mean.

In the Continuous Uniform distribution, all intervals of the same length are equally probable. In the Discrete Uniform distribution, there are n equally spaced values, each of which has the same 1/n probability of being observed.
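As a small illustration of the kind of relationship catalogued in Chapter VI, the following Python sketch compares the exact Binomial probability, its Normal approximation with a continuity correction, and a simulation. It is an editorial example rather than part of the SOCR materials, and it assumes numpy and scipy are available.

    import numpy as np
    from scipy import stats

    n, p = 100, 0.3
    exact = stats.binom.cdf(35, n, p)                     # P(X <= 35), exact
    approx = stats.norm.cdf(35.5, loc=n * p,
                            scale=np.sqrt(n * p * (1 - p)))   # Normal approximation, continuity-corrected
    simulated = (np.random.binomial(n, p, size=100000) <= 35).mean()
    print(exact, approx, simulated)                       # the three values should be close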
{"url":"http://wiki.stat.ucla.edu/socr/index.php/EBook","timestamp":"2014-04-18T10:34:18Z","content_type":null,"content_length":"104220","record_id":"<urn:uuid:a4ae78d9-c327-4aba-9d17-8b165d3be89f>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Age Problems in School Mathematics

Age Problems is the second part of the Math Word Problem Solving Series here in Mathematics and Multimedia. In this series, we are going to learn how to solve math word problems involving age. Age problems are very similar to number word problems. They are easy to solve when you know how to set up the correct equations. Most of the time, this type of problem discusses the age of a certain individual in relation to another in the past, present, or future. Below are some of the common phrases used in age problems. In all the phrases, we let x be the age of Hannah now.

• Hannah’s age four years from now (x + 4)
• Hannah’s age three years ago (x – 3)
• Karen is twice as old as Hannah (Karen’s age: 2x)
• Karen is thrice as old as Hannah four years ago (Karen’s age: 3(x – 4))

In solving age problems, creating a table is always helpful. This is one of the strategies that I am going to discuss in this series. As a start, we discuss one sample problem.

Problem: Karen is thrice as old as Hannah. The difference between their ages is 18 years. What are their ages?

First, we let x be Hannah’s age. Since Karen is thrice as old as Hannah, we multiply her age by 3. For instance, if Hannah is 10 years old now, then Karen is 30 or 3(10). Now, since Hannah’s age is x, then Karen’s age is thrice x or 3x. Now, to set up the equation, we look at the second statement. The difference between their ages is 18 years. This means that Karen’s age – Hannah’s age = 18. In equation form, we have $3x - x = 18$. This gives us $2x = 18$, which means that $x = 9$. So, Hannah is $9$ years old and Karen is $27$.

Checking Your Work

You can always verify your work after solving a problem. Let us check if the answers above are correct. Let’s look at the first sentence in the problem. Is Karen thrice as old as Hannah? Yes, Karen is 27 and 27 is thrice 9. Is the difference between their ages 18? Yes, 27 – 9 = 18. As you can see, age problems are not that difficult once you know how to set up the correct equation. In the next posts, we are going to discuss more complicated age problems. Stay tuned.

2 thoughts on “Introduction to Age Problems in School Mathematics”

1. I have always had an issue with problems like these. Never, in my life, have I ever had to consider this concept with these types of parameters. I understand they have a small place in mathematics education, but I tend to avoid these problems in particular. I suggest that the most important thing students can get out of these types of problems is to use them as a means to teach reading strategies. Problems like this test a student’s ability to read rather than mathematics knowledge.

2. Yes, the truth is, I wouldn’t recommend problems such as these. However, students can use this as a resource in case they encounter such problems.
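For readers who like to check the worked example above by machine, here is a small sketch using sympy (assumed to be installed); it is only a verification aid, not a replacement for setting up the equation yourself.

    from sympy import symbols, Eq, solve

    x = symbols('x', positive=True)           # Hannah's age now
    karen = 3 * x                             # "Karen is thrice as old as Hannah"
    print(solve(Eq(karen - x, 18), x))        # "the difference between their ages is 18" -> [9]
    # So Hannah is 9 and Karen is 3 * 9 = 27, matching the check above.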
{"url":"http://mathandmultimedia.com/2013/12/21/introduction-to-age-problems/","timestamp":"2014-04-17T01:04:38Z","content_type":null,"content_length":"339504","record_id":"<urn:uuid:cb6d9f52-404c-46dd-814f-96d1e764beb5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
PsychWiki - A Collaborative Psychology Wiki
What is statistical significance?

The effect found in a sample is statistically significant if the null hypothesis is rejected. If the null hypothesis is not rejected, then the effect is not significant. The statistical significance (p-value) of a result indicates to what degree the result is “true” in terms of being representative of the population. In other words, it is the probability that the observed relationship between variables (or a difference between means) in a sample occurred by pure chance.

A level of significance is selected prior to conducting statistical analysis. Traditionally, either the 0.05 level (sometimes called the 5% level) or the 0.01 level (1% level) is used. If the probability is less than or equal to the significance level, then the null hypothesis is rejected and the outcome is said to be statistically significant. The 0.01 level is more conservative than the 0.05 level. The Greek letter alpha (α) is sometimes used to indicate the significance level.

Example / Application: In the normal curve (figure not reproduced here), the shaded region represents the area in which the results are "significant".

Reference: Field, A. (2006). Discovering Statistics Using SPSS (2nd ed.). London, Thousand Oaks, New Delhi: Sage Publications.
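A minimal, hypothetical illustration of the decision rule described above, written in Python with scipy; the data are invented, and a real analysis would of course check the test's assumptions first.

    from scipy import stats

    sample = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0, 5.9, 4.8]      # made-up measurements
    t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

    alpha = 0.05                     # significance level chosen before the analysis
    if p_value <= alpha:
        print("p =", round(p_value, 3), "<= alpha: reject the null hypothesis")
    else:
        print("p =", round(p_value, 3), "> alpha: do not reject the null hypothesis")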
{"url":"http://www.psychwiki.com/wiki/What_is_statistical_significance%3F","timestamp":"2014-04-16T22:37:50Z","content_type":null,"content_length":"14213","record_id":"<urn:uuid:96c74b2e-749c-440f-aac9-1720419f06a4>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
Lyapunov Exponent using OpenTSTool

Has anyone calculated Lyapunov Exponents (LE) using OpenTSTool? It says it calculates the largest Lyapunov exponent by computing the average exponential growth of the distance of neighboring orbits via the prediction error. So, I am trying to plot Lyapunov exponents for Chua's oscillator that is showing chaotic behavior. Hence, I am expecting a positive LE, a negative LE and a zero LE. But the plot I get is all positive values, and the values are pretty high as well, spanning from 0 to 10. Can anyone tell me how to interpret this Lyapunov exponent plot?

Your previous (very similar) question was closed with a fairly detailed explanation that technical questions about computer software aren't really appropriate for this forum. It looks like your thread will be closed soon, but if you would like it to be re-opened you should edit your question to respond to the concerns brought forward in your previous question. – Ryan Budney Dec 28 '11

Answer: After a year, I am reviving this question to finally answer it. I hope it's not a bad thing. The reference you need is U. Parlitz, Nonlinear Time-Series Analysis, in: Nonlinear Modeling - Advanced Black-Box Techniques, Eds. J.A.K. Suykens and J. Vandewalle, Kluwer Academic Publishers, Boston, 209-239. This chapter is the basis of OpenTSTool, and it contains almost all the theoretical references. There you can learn that this tool uses the Sato algorithm and see how the prediction error graph (the one you get) is converted to Lyapunov exponents (pages 223 and 224).

The link didn't quite work for me, possibly because of a %20 in the URL. physik3.gwdg.de/~ulli/pdf/P98b.pdf worked. – Gerry Myerson Nov 29 '12 at 22:11

Thank you! Edited. – Harun Šiljak Nov 30 '12 at 5:48
Casim Abbas' Home Page • Associate Professor of Mathematics • Director of the Graduate Program • Office: C215 Wells Hall • Phone: (517) 353-4650 • Fax: (517) 432-1562 I am currently working on Hamiltonian dynamical systems and on symplectic and contact geometry. The material in items #3-#6 is based upon work supported by the National Science Foundation under Grant No. 0196122. The material in #7 is based upon work supported by a Michigan State University IRGP grant. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation. Lecture Notes Contact: abbas@math.msu.edu Last Revised 09/11/12
{"url":"http://www.math.msu.edu/~abbas/","timestamp":"2014-04-18T08:03:26Z","content_type":null,"content_length":"3756","record_id":"<urn:uuid:dbbcc0f0-32f8-46da-8123-6bf12510d943>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics for Fun The Birthday problem Surprise - Surprise ! A challenge to your intutive sense but quite simple in a mathematical sense! (A) Let's start with an easy one first. At least how many people must be gathered so that we can be 100% certain that some of them share the same birthday? (B) Now comes the serious part ... How many people should there be in a class so that there is at least a 50% chance that some of them share the same birthday? Should it be half of what we have in (A) above? How about one third? say 180 people? 120 people? Think about your own class ( around 40 in size ), do you know any 2 people having the same birthday? Ask another class, is it common to have 2 or more people having the same birthday? (1) A computer simulation (2) How to approach the problem mathematically? (Assume there are only 365 different birthdays.) (a) Suppose there are 3 people in a class. Find the probability that (i) all have different probability (ii) at least some have same birthdays (b) Repeat (a) with 5 people in a class. How about n people? Guess how large should n be so that the probability that some of them have the same birthday is greater than 0.5. (3) A full explanation Does the answer agree with your intuition? The famous Monty Hall Problem * Suppose you're on a game show, and you're given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say number 1, and the host, who knows what's behind the doors, opens another door, say number 3, which has a goat. He says to you, "Do you want to pick door number 2?" Is it to your advantage to switch your choice of doors? If you were the contestant, which of the following would have a better chance to win the big prize? Strategy 1 (stick): Stick with the original door Strategy 2 (switch): Switch to the other door or it doesn't matter since the two strategies have equal chance of winning the big prize *This problem was named Monty Hall in honor of the long time host of the American television game show "Let's Make a Deal." During the show, contestants are shown three closed doors. One of the doors has a big prize behind it, and the other two have junk behind them. The contestants are asked to pick a door, which remains closed to them. Then the game show host, Monty, opens one of the other two doors and reveals the contents to the contestant. Monty always chooses a door with a gag gift behind it. The contestants are then given the option to stick with their original choice or to switch to the other unopened door. (1) A computer simulation for the Monty Hall problem - "Let's make a deal" applet just scroll down the page until you see 3 large doors #1, #2 and #3. Try the simulation repeatedly with your friends : one using the always STICK door strategy and the other using the always SWITCH door strategy. Do the statistics shown in small print below the 3 doors deviate significantly between the 2 strategies? Is there a clear winner and does that agree with your intuition? (2) The winning strategy is ... with FULL explanation below There is a 1/3 chance that you'll hit the prize door, and a 2/3 chance that you'll miss the prize. If you do not switch, 1/3 is your probability to get the prize. However, if you missed (and this with the probability of 2/3) then the prize is behind one of the remaining two doors. Furthermore, of these two, the host will open the empty one, leaving the prize door closed. Therefore, if you miss and then switch, you are certain to get the prize! 
Summing up, if you do not switch your chance of winning is 1/3 whereas if you do switch your chance of winning is 2/3! More discussion (3) History of the Monty Hall problem This problem has aroused a heated debate for quite some time when it first appeared in 1991 and has never failed to vex and amaze people with its counter-intuitive solution ever since. In 1991, Marylin Vos Savant** received the Monty Hall problem from Craig. F. Whitaker (Columbia, MD). She carefully explained the logic of the correct solution in a number of subsequent columns, but never completely convinced the doubters. Marylin's response caused an avalanche of correspondence, mostly from people who would not accept her solution (particularly advocates of "50-50" school). Eventually, she issued a call to Math teachers among her readers to organize experiments and send her the charts. Some readers with access to computers ran computer simulations. At long last, the truth was established and accepted. The matter continued at such length that it eventually became a notable news story in the New York Times and elsewhere. **Marylin Vos Savant ran the popular "Ask Marylin" question-and-answer column of the U.S. Parade magazine. According to Parade, Marilyn vos Savant was listed in the "Guinness Book of World Records Hall of Fame" for "Highest IQ" with IQ score of 228.
{"url":"http://mathforfun.tripod.com/","timestamp":"2014-04-20T01:52:10Z","content_type":null,"content_length":"157920","record_id":"<urn:uuid:bf5291a3-4172-4245-a245-515d8c506d1d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Volume of a Sphere

In this Instructable we'll cover several ways to find the volume of a sphere - a locus of points that are equidistant to a fixed center in 3D space.

What you're possibly going to need:
• A sphere
• Distance measuring tool (ruler, caliper)
• Calculator / pen + paper
• Graduated cylinder + water
• A brain (you're on Instructables.com - you've already got this... I'm not going to stress common sense all that much)

Step 1: The Volume Formula
The volume of a sphere is (4/3)*pi*r^3
• 4/3 is a constant
• Pi is a constant that for our purposes will = 3.14
• r is the radius of the sphere, which is the distance from the center of the sphere to any point on that sphere's surface

Step 2: Gathering Information
Luckily, the only piece of information we need is the radius of the sphere. There are several ways we can find the radius of a sphere. The more practical ways include deriving the radius from the diameter or the circumference. You already know this, but:
• Diameter - the longest segment that can exist within the sphere; this is 2 * the radius
• Circumference - the length of the circle projected by the sphere; this is 2 * pi * the radius
You can quickly approximate the diameter of the sphere by holding a ruler against the longest part of the sphere. Match up the beginning of the ruler with your right eye closed; then take the measurement with your left eye closed. This gives you a rough approximation, but an approximation nonetheless. After finding the diameter, simply divide it by 2 to yield the radius.
You can find the circumference most easily by taking a string or a wire and wrapping it around the widest part of the sphere (a great circle). Then measure the length of the string. After finding the circumference, divide by 2*pi to yield the radius.

Step 3: Plug'n'Play
Once you've found the radius, it's only a matter of plugging it into the equation!

Step 4: Finished... but there's more! =O
Essentially, with a bit of common sense, it only takes the former 2 steps to find the volume of a sphere. However, maybe some of you wonder why the volume formula is the way it is.

Dude, GREAT instructable! Still, more to do on your description of the water displacement method. You need to explain that the way to find the meniscus is to put your eye level with the flask (or in this case tube), and get the lowest point on the water. And, this method is almost always an estimate, because of the inaccuracy of the water level. That is the main reason that most math teachers don't have a tube and a sphere ready on your tests. It's pretty different with science teachers, because we (referring to me :) almost always have a flask ready, some marbles, and usually the water. There is also a method to do this with gas, but I did not include it in my "Volume of a Sphere." Otherwise, yours is pretty good, from my standpoint. You can check mine out,
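If you would rather let the computer do Step 3, here is a minimal sketch (Python; the measurement values are made-up examples) that recovers the radius from whichever quantity you managed to measure and applies the formula:

```python
import math

def volume_from_radius(r):
    """V = (4/3) * pi * r^3"""
    return (4 / 3) * math.pi * r ** 3

def volume_from_diameter(d):
    return volume_from_radius(d / 2)               # r = d / 2

def volume_from_circumference(c):
    return volume_from_radius(c / (2 * math.pi))   # r = c / (2*pi)

# Example: a ball whose diameter measures about 7.5 cm
print(volume_from_diameter(7.5))         # ~220.9 cubic cm
# Example: a string wrapped around the same ball reads about 23.56 cm
print(volume_from_circumference(23.56))  # ~220.8 cubic cm (consistent, as it should be)
```

The two results agree because the diameter and circumference describe the same sphere; any mismatch in practice just reflects measurement error.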
{"url":"http://www.instructables.com/id/Volume-of-a-Sphere-2/","timestamp":"2014-04-20T03:44:11Z","content_type":null,"content_length":"149462","record_id":"<urn:uuid:722fc376-0310-4736-9c35-ff341983374a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
Reward to Risk

In part 1 of the Reward to Risk series I covered what "risk" is and the 3 components you need to calculate your reward to risk ratio. In this article you will learn how we're going to calculate this multiple.

The risk, in this context, is simply the difference between your entry price and your stop loss price. So, if your entry was $20.00 and your stop loss is $19.00, you have a total of $1.00 at risk. Your reward is the difference between your entry price and your target price. So, again, if the entry was $20.00 and the target was $25.00, your potential reward would be $5.00. In this case, we have a 5:1 reward to risk multiple.

Earlier we discussed how you can't just set your target by using a multiple of three times your risk, because that will set an unrealistic target price. It's sort of a backwards way of doing things. The proper way to set your target is to use previous areas of support and resistance.

Now, on this chart, you can clearly see areas of new highs that now act as resistance points, so these areas become excellent targets to set. On the next chart, $93.00 would have been a good target area, at least for the first half of the trade.

Looking at another example, you clearly see three red boxes for potential future resistance areas. The stock does break out on all three occasions at these areas, so all three of these potential entries would have targets somewhere in these boxes, at least for the first half of the trade.

If there's no support and resistance on the chart, the next logical choice is to use the next largest whole numbers. So in this example, 13, 14, 15 and 16 are good areas to set targets because they act as natural resistance points. Many people set their stop losses and their targets in the same areas, and that's why these areas are very strong.

Looking at this chart we can clearly see whole numbers acting as resistance: here at 21, 23 and, on 3 separate occasions, at 27, the whole number acted as a resistance point. Setting targets at whole numbers, whether it be 25, 26, or in this case 27, is where the resistance actually occurred.

Now if you have more than one target, the best method is to average them out. If target 1 was $6.00 away, target 2 was $3.00 and target 3 was $1.00, you can simply add those up and divide them by 3, which gives an average target of about $3.33. As long as your risk was $1.00 or less, you have the necessary 3:1 ratio that you require.

What else do you need to consider when you're calculating your reward to risk? Well, the first thing is you need to know your actual average winner compared to your actual average loser. What exactly is that? Well, if you add up the total results of your winners and divide it by the number of winning trades you have, that will tell you how much you win on average. It's quite simple. If you make $500.00, $600.00, $1,000.00 and $1,500.00, you add them all together and divide by the number of winners, and you'll know how much you win on average. The same applies for your average losers. You add up the totals of how much you lose in all your trades and divide it by the number of losing trades you have, and you'll know exactly how much you lose on average. These numbers are important because now you need to know how that compares to your intended reward to risk. Remember, we discussed this before: if you only make $1.50 instead of the $3.00 you were expecting, then you're fooling yourself into thinking you have a good reward to risk ratio.
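A minimal sketch of that bookkeeping (Python; the trade numbers below are hypothetical examples, not figures from the article):

```python
def reward_to_risk(entry, stop, target):
    """Planned reward-to-risk multiple for a single long trade."""
    risk = entry - stop
    reward = target - entry
    return reward / risk

print(reward_to_risk(entry=20.00, stop=19.00, target=25.00))  # 5.0

def average(values):
    return sum(values) / len(values)

# Actual results of closed trades (hypothetical numbers)
winners = [500.00, 600.00, 1000.00, 1500.00]
losers = [300.00, 450.00, 250.00]   # losses recorded as positive amounts

avg_win, avg_loss = average(winners), average(losers)
print(avg_win / avg_loss)           # your *realized* reward-to-risk multiple

# Rough expectancy check: a 40% win rate at a true 3:1 multiple is profitable
win_rate, multiple = 0.40, 3.0
expectancy_per_dollar_risked = win_rate * multiple - (1 - win_rate) * 1.0
print(expectancy_per_dollar_risked)  # 0.6 dollars gained per dollar risked, on average
```

The last two lines preview the balancing act between win percentage and reward multiple that comes up next.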
Another example: if you make $4.25 while risking $2.35, you have a reward to risk ratio of roughly 1.8, and then you need to ask if that fits into your trading plan. I recommend 3 or higher, but you might use a multiple of 4 or higher. It really depends on what you use in your trading plan.

The third point is: are you able to maintain a high reward to risk multiple and a high winning percentage? It really is a balancing act between the two. I'd recommend 3 or higher for a reward multiple and a 40 percent win percentage. I will show you the numbers in a moment, but if you can maintain those figures, you'll be very successful. You need to focus as much attention on both of these as you can and keep them as high as possible.

In part 3 of the Reward to Risk series I will cover the "nitty gritty" math behind the formula and how your winning percentage and reward multiple work in tandem to maximize your results.

1. Bill McArthur says
Thanks for the video! I like the logical way of selecting targets - through support & resistance. Working backwards, I guess you can then determine if your (pending) trade is worth the risk. I look forward to your next release.
Bill Mc

2. Fred Dirksz says
Do you have the same specifically for the Forex?

3. Dave says
Hello Fred,
The same principles would apply with Forex. Determine your entry and stop; this will give you your risk, for example 40 pips. Then determine if there is a target based on a pivot or resistance point that will give you a 3 to 1 reward, i.e. 120+ pips. Track your results, and if you're not getting a true 3 to 1 ratio you'll be able to see that very quickly. You may find that you're still successful with 2 to 1 if you have a good winning percentage.
{"url":"http://www.davegagneblog.com/reward-to-risk-explained-pt2/","timestamp":"2014-04-19T14:29:20Z","content_type":null,"content_length":"44317","record_id":"<urn:uuid:10fd5ed1-d004-4fd1-aae2-c432549583da>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Institute for Mathematics and its Applications (IMA) - Level Set Methods for Tracking Shocks in Detonation Flows

Tariq D. Aslam
Los Alamos National Laboratory
Group DX-1, Detonation Science and Technology

A level set algorithm for tracking discontinuities in hyperbolic conservation laws is presented. The algorithm uses a simple finite difference approach, analogous to the method of lines scheme presented in [1]. The zero of a level set function is used to specify the location of the discontinuity. Since a level set function is used to describe the front location, no extra data structures are needed to keep track of the location of the discontinuity. Also, two solution states are used at all computational nodes, one corresponding to the "real" state, and one corresponding to a "ghost node" state, analogous to the "Ghost Fluid Method" of [2]. High-order, point-wise convergence is demonstrated for linear and nonlinear scalar conservation laws, even at discontinuities and in multiple dimensions. The solutions are compared to standard high order shock capturing schemes. This presentation will focus on systems of conservation laws. In particular, results of fully resolved detonation flows in the Euler equations will be presented. It will be demonstrated that the method can be used effectively when very accurate results are required for problems involving shock waves.

[1] C.-W. Shu and S. Osher, "Efficient Implementation of Essentially Non-oscillatory Shock-Capturing Schemes," Journal of Computational Physics, 77, 439-471, 1988.
[2] R. P. Fedkiw, T. Aslam, B. Merriman and S. Osher, "A Non-Oscillatory Eulerian Approach to Interfaces in Multimaterial Flows (The Ghost Fluid Method)," Journal of Computational Physics, 152, 457-492, 1999.
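To make the central idea concrete: the abstract does not spell out the scheme itself, but the following toy sketch (Python; purely illustrative and not the authors' algorithm) shows how the zero of a level set function can track a front that moves at a known speed, using a first-order upwind update on a 1D grid.

```python
import numpy as np

# Toy problem: a front initially at x = 0.2 moving right with speed a = 1.
# The front location is carried implicitly as the zero of phi(x, t).
a = 1.0
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
dt = 0.5 * dx / a          # CFL-limited time step
phi = x - 0.2              # signed distance; zero crossing marks the front

t = 0.0
while t < 0.5:
    # First-order upwind update for phi_t + a * phi_x = 0 (a > 0)
    phi[1:] = phi[1:] - a * dt / dx * (phi[1:] - phi[:-1])
    phi[0] = phi[1]        # crude inflow boundary treatment
    t += dt

# Locate the zero crossing of phi by linear interpolation
i = np.argmax(phi > 0.0)
front = x[i - 1] - phi[i - 1] * dx / (phi[i] - phi[i - 1])
print(front)               # close to 0.2 + a * 0.5 = 0.7
```

The real schemes discussed in the abstract replace this first-order update with high-order discretizations and couple the level set to the conservation-law solver via ghost states, but the bookkeeping for the front location is the same: read off where phi changes sign.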
{"url":"http://www.ima.umn.edu/reactive/abstract/aslam1.html","timestamp":"2014-04-21T15:15:03Z","content_type":null,"content_length":"16591","record_id":"<urn:uuid:2f175031-f5fd-4a64-a5ab-982cc24694c9>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Building Math Skills with Halloween Candy

Halloween candy seems like an unusual place to find math. But, it turns out, there are some great skill-building activities that can be done with these yummy treats... all in the context of something kids love - candy! Here are four fun and educational post-trick-or-treating activities:

Candy sort: Have your child place all their candy in a pile. Ask her to sort the candy into groups. As she is sorting, ask her why she chose the groups she did. Then see if she can sort them another way. This is a building block to algebraic thinking as kids look for specific attributes that define each group.

Counting and comparing: Once your child has sorted her candy, organize them into rows. Then count how many there are in each group. Help her write the number down on a small piece of paper or sticky note and label each group with its number. When all groups have been labeled, ask questions such as: Which group has the most? Which group has the least? How many more Skittles are there than Milk Duds? How many more Kit Kats would you need to have the same number of Hersheys? Have your child help you order the sticky note numbers from smallest to greatest.

Geometry: Discuss the different shapes you see in each piece of candy. For example, candy corn looks like a triangle, Whoppers look like spheres, and a Kit Kat bar is made up of rectangles.

Graphing: Similar to organizing the candy into rows like we did above, your child will be using graph paper to turn those rows into a bar graph. The graph in the photo reflects vertical bars, but you can also make the bars horizontal. Decide if you want to go big and graph all their candy, or keep it smaller and graph one small bag of a candy like Skittles or M&Ms.

And, finally, subtraction! If I have six Milky Ways and I eat them all, what's left? ... a very upset tummy.

Happy Halloween!
{"url":"http://www.ziggityzoom.com/print/content/building-math-skills-halloween-candy","timestamp":"2014-04-16T13:32:36Z","content_type":null,"content_length":"14454","record_id":"<urn:uuid:47c18af4-d37a-4152-ad81-a3a9f6eed075>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Reacting gas volume ratios of reactants or products (Avogadro's Law, Gay-Lussac's Law) - GCSE/IGCSE/O Level and A Level chemistry calculations

10. Reacting gas volume ratios of reactants or products (Avogadro's Law, Gay-Lussac's Law)

In the diagram above, if the volume of the left syringe is twice the gas volume in the right, then there are twice as many moles or actual molecules in the left-hand gas syringe.

• REACTING GAS VOLUMES and MOLE RATIO
□ Historically, Gay-Lussac's Law of volumes states that 'gases combine with each other in simple proportions by volume'.
□ Avogadro's Law states that 'equal volumes of gases at the same temperature and pressure contain the same number of molecules' or moles of gas.
□ This means the molecule ratio of the equation, or the relative moles of reactants and products, automatically gives us the gas volume ratio of reactants and products, if all the gas volumes are measured at the same temperature and pressure.
□ These calculations only apply to gaseous reactants or products AND only if they are all at the same temperature and pressure.
□ The balanced equation can be read/interpreted in terms of either ...
☆ (i) a gas volume ratio, obviously for gaseous species only (g), AND at the same temperature and pressure,
☆ or (ii) a mole ratio, which applies to anything in the equation, whether (g), (l) or (s).
○ Note: If you have to convert from moles to volume or volume to moles, you need to know the molar volume at that temperature and pressure, e.g. 24 dm^3 (litres) at 25°C (298 K) and 1 atm (101 kPa) pressure.
■ i.e. if the volume is in dm^3 (litres) at ~ room temperature and pressure:
■ moles of gas = Vgas/24, or Vgas = 24 x moles of gas
■ If the gas volume is given in cm^3, then dm^3 = V/1000

• Reacting gas volume ratio calculation Example 10.1
□ Given the equation: HCl(g) + NH3(g) ==> NH4Cl(s)
□ 1 mole of hydrogen chloride gas combines with 1 mole of ammonia gas to give 1 mole of ammonium chloride solid.
□ 1 volume of hydrogen chloride will react with 1 volume of ammonia to form solid ammonium chloride,
□ e.g. 25 cm^3 + 25 cm^3 ==> solid product (no gas formed)
□ or 400 dm^3 + 400 dm^3 ==> solid product etc.
□ So, if 50 cm^3 of HCl reacts, you can predict that 50 cm^3 of NH3 will react, etc.

• Reacting gas volume ratio calculation Example 10.2
□ Given the equation: N2(g) + 3H2(g) ==> 2NH3(g)
□ 1 mole of nitrogen gas combines with 3 moles of hydrogen gas to form 2 moles of ammonia gas.
□ 1 volume of nitrogen reacts with 3 volumes of hydrogen to produce 2 volumes of ammonia.
□ e.g. What volume of hydrogen reacts with 50 cm^3 of nitrogen, and what volume of ammonia will be formed?
□ The ratio is 1 : 3 ==> 2, so you multiply the equation ratio numbers by 50, giving ...
□ 50 cm^3 nitrogen + 150 cm^3 hydrogen (3 x 50) ==> 100 cm^3 of ammonia (2 x 50)

• Reacting gas volume ratio calculation Example 10.3
□ Given the equation: C3H8(g) + 5O2(g) ==> 3CO2(g) + 4H2O(l)
□ Reading the balanced equation in terms of moles (or mole ratio) ...
□ 1 mole of propane gas reacts with 5 moles of oxygen gas to form 3 moles of carbon dioxide gas and 4 moles of liquid water.
□ (a) What volume of oxygen is required to burn 25 cm^3 of propane, C3H8?
☆ The theoretical reactant volume ratio C3H8 : O2 is 1 : 5 for burning the fuel propane,
☆ so the actual ratio is 25 : 5x25, so 125 cm^3 of oxygen is needed.
□ (b) What volume of carbon dioxide is formed if 5 dm^3 of propane is burned?
☆ The theoretical reactant-product volume ratio C3H8 : CO2 is 1 : 3,
☆ so the actual ratio is 5 : 3x5, so 15 dm^3 of carbon dioxide is formed.
□ (c) What volume of air (1/5th oxygen) is required to burn propane at the rate of 2 dm^3 per minute in a gas fire?
☆ The theoretical reactant volume ratio C3H8 : O2 is 1 : 5,
☆ so the actual ratio is 2 : 5x2, so 10 dm^3 of oxygen per minute is needed;
☆ therefore, since air is only 1/5th O2, 5 x 10 = 50 dm^3 of air per minute is required.

• Reacting gas volume ratio calculation Example 10.4
□ Given the equation: 2H2(g) + O2(g) ==> 2H2O(l)
□ If 40 dm^3 of hydrogen (at 25°C and 1 atm pressure) were burned completely ...
☆ a) What volume of pure oxygen is required for complete combustion?
○ From the balanced equation the reacting gas volume ratio is 2 : 1 for H2 to O2.
○ Therefore 20 dm^3 of pure oxygen is required (40 : 20 is a ratio of 2 : 1).
☆ b) What volume of air is required if air is ~20% oxygen?
○ ~20% is ~1/5, therefore you need five times more air than pure oxygen.
○ Therefore the volume of air needed = 5 x 20 = 100 dm^3 of air.
☆ c) What mass of water would be formed?
○ The easiest way to solve this problem is to think of the water as being formed as a gas-vapour.
○ The theoretical gas volume ratio of reactant hydrogen to product water is 1 : 1.
○ Therefore, prior to condensation at room temperature and pressure, 40 dm^3 of water vapour is formed.
○ 1 mole of gas occupies 24 dm^3, and the relative molar mass of water is 18 g/mol.
○ Therefore moles of water formed = 40/24 = 1.666 moles.
○ Since moles = mass / formula mass,
○ mass = moles x formula mass,
○ mass of water formed = 1.666 x 18 = 30 g of H2O.

• Reacting gas volume ratio calculation Example 10.5
□ It was found that exactly 10 cm^3 of bromine vapour (Br2(g)) combined with exactly 30 cm^3 of chlorine gas (Cl2(g)) to form a bromine-chlorine compound BrClx.
□ a) From the reacting gas volume ratio, what must be the value of x? Hence write the formula of the compound.
☆ Since both reactants are diatomic molecules of the same form, the ratio of bromine to chlorine atoms in the compound must be 1 : 3, because the reacting gas volume ratio is 1 : 3; so x = 3 and the formula is BrCl3.
□ b) Write a balanced equation to show the formation of BrClx.
☆ The reacting gas volume ratio is 1 : 3, therefore we can write with certainty that 1 mole (or molecule) of bromine reacts with 3 moles (or molecules) of chlorine, and balancing the symbol equation, Br2 + 3Cl2 ==> 2BrCl3, results in two moles (or molecules) of the bromine-chlorine compound being formed.
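The arithmetic in these worked examples is mechanical enough to script. A minimal sketch (Python; the helper names are my own, and the 24 dm^3 molar volume at room temperature and pressure follows the note above):

```python
MOLAR_VOLUME_RTP = 24.0  # dm^3 per mole at ~25 degC and 1 atm, as quoted above

def volume_from_ratio(known_volume, known_coeff, wanted_coeff):
    """Scale a measured gas volume by the mole ratio from the balanced equation."""
    return known_volume * wanted_coeff / known_coeff

# Example 10.3(a): C3H8 + 5 O2 ==> 3 CO2 + 4 H2O, burning 25 cm^3 of propane
print(volume_from_ratio(25, known_coeff=1, wanted_coeff=5))    # 125 cm^3 of O2

# Example 10.4(b): air needed for 40 dm^3 of H2, with air taken as 1/5 oxygen
oxygen = volume_from_ratio(40, known_coeff=2, wanted_coeff=1)  # 20 dm^3
print(5 * oxygen)                                              # 100 dm^3 of air

# Example 10.4(c): mass of water from 40 dm^3 of water vapour
moles_water = 40 / MOLAR_VOLUME_RTP
print(moles_water * 18)                                        # ~30 g of H2O
```

Any of the examples above reduce to the same two steps: scale by the coefficient ratio, then (if a mass is wanted) convert volume to moles with the molar volume.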
• Reacting gas volume ratio calculation Examples 10.6 and 10.7
□ Self-assessment quizzes [rgv]: type-in answer only, or multiple choice only.
{"url":"http://www.docbrown.info/page04/4_73calcs10rgv.htm","timestamp":"2014-04-20T21:38:56Z","content_type":null,"content_length":"40553","record_id":"<urn:uuid:759aa422-ab7e-4575-b8fa-ebc3034509c8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
How to square a vector..

August 1st 2012, 07:02 AM #1
How to square a vector..
Hello Everyone,
I feel kind of silly asking this but, after numerous tries and ending up with weird results, I needed to make sure, in order to determine whether I'm making a mistake in this step or some other step of my calculations. Please refer to this image: Attachment 24400
What I can't be sure of is how to square this vector. Let's say I have three vectors, each with three members:
C(1) = {a1,b1,c1}
C(2) = {a2,b2,c2}
C(3) = {a3,b3,c3}
So if I wanted to find 'sd' using the equation in the image, is this how I should calculate my vectors?
C(1)^2 = {a1*a1 , b1*b1 , c1*c1}
C(2)^2 = {a2*a2 , b2*b2 , c2*c2}
C(3)^2 = {a3*a3 , b3*b3 , c3*c3}
sd = { SquareRoot[ (a1*a1 + a2*a2 + a3*a3)/N ] , SquareRoot[ (b1*b1 + b2*b2 + b3*b3)/N ] , ... }
I hope I was able to explain myself clearly. I'd greatly appreciate any help.
Best Regards,
Last edited by mukunku; August 1st 2012 at 07:03 AM. Reason: Typo

August 1st 2012, 07:41 AM #2
Re: How to square a vector..
There are three different kinds of "multiplication" defined for vectors: scalar multiplication, the dot product, and, for three-dimensional vectors, the cross product.
1) Scalar multiplication involves a scalar and a vector, so there cannot be a "square".
2) The dot product gives a scalar result: <v1, v2, v3>.<u1, u2, u3> = v1u1 + v2u2 + v3u3, so that the square would be <v1, v2, v3>^2 = v1^2 + v2^2 + v3^2, a scalar (number), not a vector.
3) The cross product of two vectors is a vector, but it is "anti-commutative", so that the "square", the cross product of a vector with itself, is always the 0 vector.
Perhaps a special kind of "vector multiplication" is being defined, but your link does not work.

August 1st 2012, 11:40 PM #3
Re: How to square a vector..
Thanks for the reply! I know that vectors have a dot product, but with the equation above I just couldn't make sense of it.
By "link not working", do you mean the image attachment is not visible? Try this link: http://i50.tinypic.com/n5oc2g.jpg
I'll check my results by getting the dot product again and see if I've made any mistakes.

August 2nd 2012, 02:44 AM #4
Re: How to square a vector..
The dot product seems to be producing relatively accurate results. I guess there's nothing else to be done here =]
Thanks again @HallsofIvy
Note: Can a moderator please change the title prefix to [SOLVED]? I can't seem to edit it [=
Best Regards,
Last edited by mukunku; August 2nd 2012 at 02:47 AM.
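For what it's worth, the element-wise reading the original poster describes (square each component, average across the N vectors, then take square roots component by component) is easy to check numerically. A minimal sketch (Python with NumPy; the sample numbers are made up):

```python
import numpy as np

# Three example vectors C(1), C(2), C(3) stacked as rows (made-up values)
C = np.array([
    [1.0, 2.0, 3.0],
    [4.0, 5.0, 6.0],
    [7.0, 8.0, 9.0],
])
N = C.shape[0]

# Element-wise square of each vector, then the component-wise formula from the post:
# sd_j = sqrt( (C(1)_j^2 + C(2)_j^2 + ... + C(N)_j^2) / N )
sd = np.sqrt(np.sum(C ** 2, axis=0) / N)
print(sd)                  # one value per component, i.e. a vector of the same length

# For comparison, the dot product of a vector with itself gives a single scalar:
print(np.dot(C[0], C[0]))  # 1 + 4 + 9 = 14
```

The first print returns a three-component vector, while the dot product collapses everything to one number; which of the two is wanted depends on the formula in the attached image.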
{"url":"http://mathhelpforum.com/calculus/201604-how-square-vector.html","timestamp":"2014-04-17T13:35:48Z","content_type":null,"content_length":"41381","record_id":"<urn:uuid:1e6345f9-400d-4a77-b4a0-d4dcd8597d03>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Spectra and localizations of the category of topological spaces

Can we construct the category of spectra (or maybe just its homotopy category) from the category of pointed topological spaces using some kind of localization combined with other categorical constructions?
[The first part of the original question was wrong for a trivial reason pointed out by Reid Barton.]
Tags: at.algebraic-topology, stable-homotopy, localization, infinity-categories

2 Answers

Answer 1 (accepted):
[Removed a paragraph relating to an earlier version of the question]
You can construct Spectra categorically by adjoining an inverse to the endofunctor Σ of Top as a presentable (∞,1)-category. Inverting an endofunctor is a very different operation than inverting maps! It's like the difference between forming ℤ[1/p] and ℤ/(p).
Here is one way to verify the claim. To invert the endomorphism Σ of Top we should form the colimit, in the (∞,1)-category Pres of presentable categories and colimit-preserving functors, of the sequence Top → Top → ... where all the functors in the diagram are Σ. A basic fact about Pres is that we can compute such a colimit by forming the diagram (on the opposite index category) formed by the right adjoints of these functors, and taking its limit as a diagram of underlying (∞,1)-categories [HTT 5.5.3.18]. The functors in the limit cone will have left adjoints which are the functors to the colimit in Pres. In our case we obtain the sequence Top ← Top ← ... where the functors are Ω, and the limit of this sequence is precisely the classical definition of (Ω-)spectrum: a sequence of spaces X[n] with equivalences X[n] → ΩX[n+1].

Comments:
Minor quibble. Don't you only get connective spectra by starting with Top and inverting the suspension functor? Also a question: If you start with Top and invert the loops functor do you also get the category of (connective) spectra? – Chris Schommer-Pries Feb 24 '10 at 4:39
It's even worse than that, you only get the category of suspension spectra. Also, I had some other argument written here about inverting the loop functor which suffered from me accidentally getting an adjunction on the wrong side. Remember, kids: no Math Overflow late at night. – Tyler Lawson Feb 24 '10 at 5:23
I should have mentioned that I'm working in the world of presentable (∞,1)-categories. I'm pretty sure my new statement is correct. – Reid Barton Feb 24 '10 at 5:28
Is the operation of adjoining an inverse to an endofunctor explained somewhere in Lurie's papers or anywhere else? – Dmitri Pavlov Feb 24 '10 at 5:42
I don't know of a specific place where it is written down, but invertibility of an endofunctor is an (∞,1)-categorical (as opposed to (∞,2)-categorical) notion, so it's directly analogous to the situation in classical algebra. – Reid Barton Feb 24 '10 at 6:06

Answer 2:
I don't know the answer, but I have a related question. What if we let $f$ be the wedge of the maps $X\to \Omega \Sigma X$ (representing suspension) for all countable CW complexes X, and then apply Bousfield/Farjoun localization $L_f$? It seems to me that, for the purposes of mapping in finite complexes, we have inverted the suspension operation.
{"url":"http://mathoverflow.net/questions/16224/spectra-and-localizations-of-the-category-of-topological-spaces/16731","timestamp":"2014-04-20T13:42:16Z","content_type":null,"content_length":"61993","record_id":"<urn:uuid:e7a47a2e-05aa-4dd9-8078-611cf72d13ee>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Fields Institute Communications 1997; 277 pp; hardcover Volume: 14 ISBN-10: 0-8218-0524-X ISBN-13: 978-0-8218-0524-4 List Price: US$100 Member Price: US$80 Order Code: FIC/14 This book contains contributions from the proceedings at The Fields Institute workshop on Special Functions, \(q\)-Series and Related Topics that was held in June 1995. The articles cover areas from quantum groups and their representations, multivariate special functions, \(q\)-series, and symbolic algebra techniques as well as the traditional areas of single-variable special functions. The book contains both pure and applied topics and reflects recent trends of research in the various areas of special functions. Titles in this series are co-published with the Fields Institute for Research in Mathematical Sciences (Toronto, Ontario, Canada). Graduate students, research mathematicians, and scientists interested in special functions and its applications. • K. Alladi -- Refinements of Rogers-Ramanujan type identities • B. C. Berndt, H. H. Chan, and L.-C. Zhang -- Ramanujan's class invariants with applications to the values of \(q\)-continued fractions and theta functions • G. Gasper -- Elementary derivations of summation and transformation formulas for \(q\)-series • R. Wm. Gosper, Jr. -- \(\int ^{m/6}_{n/4} \ln \Gamma (z)dz\) • F. A. Grunbaum and L. Haine -- On a \(q\)-analogue of Gauss equation and some \(q\)-Riccati equations • R. A. Gustafson and C. Krattenthaler -- Determinant evaluations and \(U(n)\) extensions of Heine's \(_2\phi_1\)-transformations • M. E. H. Ismail, D. R. Masson, and S. K. Suslov -- Some generating functions for \(q\)-polynomials • E. Koelink -- Addition formulas for \(q\)-special functions • T. H. Koornwinder -- Special functions and \(q\)-commuting variables • M. Noumi, M. S. Dijkhuizen, and T. Sugitani -- Multivariable Askey-Wilson polynomials and quantum complex Grassmannians • P. Paule and A. Riese -- A Mathematica \(q\)-analogue of Zeilberger's algorithm based on an algebraically motivated approach to \(q\)-hypergeometric telescoping • W. Van Assche -- Orthogonal polynomials in the complex plane and on the real line • Y. Xu -- On orthogonal polynomials in several variables • D. R. Masson -- Appendix I: Program list of speakers and topics • D. R. Masson -- Appendix II: List of participants
{"url":"http://ams.org/bookstore?fn=20&arg1=ficseries&ikey=FIC-14","timestamp":"2014-04-20T15:58:40Z","content_type":null,"content_length":"16505","record_id":"<urn:uuid:a804a276-4eff-4b40-9293-0c5843016837>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Hometown, IL ACT Tutor

Find a Hometown, IL ACT Tutor

...I always earned high grades and accolades and learned those concepts quickly and easily. When I work with students, first I make sure that the student understands all related concepts, then I spend time teaching strategies to solve the problems assigned. I prepare students for quizzes, tests, homework and ISAT or any other standardized tests.
23 Subjects: including ACT Math, chemistry, algebra 1, algebra 2

...The students that I have tutored in the past have all achieved their stated goals and I'm very proud of that. I hope to see you soon. Joe
I can teach all concepts of 9th grade Algebra.
2 Subjects: including ACT Math, algebra 1

...Math should not be the roadblock between you and your success/career aspirations. So I am here to serve you. I am very skilled at one-on-one tutoring, small group tutoring, and online math courses.
11 Subjects: including ACT Math, geometry, GRE, algebra 1

...If you hire me, you will work with a professional, drawing on over 12 years of experience in management positions and 15 years of experience as a peer and professional tutor. I specialize in working with students averse to math and science. My methods are perfect for test preparation as well.
36 Subjects: including ACT Math, reading, English, chemistry

...After graduating from Loyola University, I began tutoring in ACT Math/Science at Huntington Learning Center in Elgin. I took pleasure in helping students understand concepts and succeed. My best students are those that desire to learn, and I seek to cultivate that attitude of growth and learning through a zest and enthusiasm for learning.
26 Subjects: including ACT Math, chemistry, English, reading
{"url":"http://www.purplemath.com/Hometown_IL_ACT_tutors.php","timestamp":"2014-04-18T13:37:52Z","content_type":null,"content_length":"23539","record_id":"<urn:uuid:37c1857c-6bae-4bf5-b201-f0656dad865c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Lacey Calculus Tutor

With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I cannot promise a quick fix, but I will not stop working if you make the effort. -Bill
16 Subjects: including calculus, geometry, GRE, statistics

...I have been learning French for more than 6 years. I can also help with programming. I earned a Bachelor of Science in Computer Science and in Computer Engineering at UW in Tacoma.
16 Subjects: including calculus, chemistry, French, geometry

...My Ph.D. is in applied mathematics, and linear algebra is used extensively throughout my research in applied physics. I also have tutored linear algebra both privately and at a tutoring center in college. I worked at a tutoring center as an undergraduate student for several years where I tutored all mathematics courses through differential equations and vector calculus.
18 Subjects: including calculus, physics, statistics, geometry

Hello, Graduated from the University of Washington and Texas A&M University, I have lots of teaching experience in mathematics and statistics, as I was a math/stat instructor at the University of Alaska Anchorage and University of Maryland University College Asia. Wish to tutor you with my best knowle...
20 Subjects: including calculus, physics, statistics, geometry

...By far, my favorite subjects are math and science, but I love to show the tricks for test taking for the ASVAB, TEAS, MCAT, Compass, SAT and ACT exams. I specialize in identifying the roadblocks to your success and getting you to your goal. I can also help students understand their style of learning and assist with study skills.
46 Subjects: including calculus, reading, English, algebra 1
{"url":"http://www.purplemath.com/Lacey_calculus_tutors.php","timestamp":"2014-04-18T01:06:24Z","content_type":null,"content_length":"23603","record_id":"<urn:uuid:e8d23f54-6245-40ba-b4ca-1099347d922f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: PDEs & Mathematica. [Date Index] [Thread Index] [Author Index] Re: PDEs & Mathematica. • To: mathgroup@smc.vnet.net • Subject: [mg10818] Re: PDEs & Mathematica. • From: Julian Stoev <stoev@SPAM-RE-MO-VER-usa.net> • Date: Tue, 10 Feb 1998 21:01:33 -0500 • Organization: Seoul National University, Republic of Korea • References: <199801270810.DAA01319@smc.vnet.net.> <6atasu$jlr$14@dragonfly.wolfram.com> <6ba9f1$bba@smc.vnet.net> On 4 Feb 1998, Lars Hohmuth wrote: |Actually, both DSolve and NDSolve have routines for handling certain |classes of partial differential equations. More specifically, DSolve |uses separation of variables and symmetry reduction, while NDSolve uses |the method of lines for 1+1 dimensional PDEs. |You usually specify initial conditions exactly like in the ODE case, but |keep in mind that solving PDEs is a much harder problem than ODEs. For |example, partial differential equation may not have a general solution. |SO it would be helpful to know exactly which equations you are trying |to solve. |Here is an example from the online documentation. It finds a numerical |solution to the wave equation with the initial condition |y[x,0]=Exp[-x^2]. The result is a two-dimensional interpolation |In[1]:NDSolve[{D[y[x, t], t, t] = D[y[x, t], x, x], | y[x, 0] = Exp[-x^2], Derivative[0,1][y][x, 0] = 0, | y[-5, t] = y[5, t]}, y, {x, -5, 5}, {t, 0, 5}] Out[1]{{y\[Rule]InterpolatingFunction[{{-5,5.},{0.,5.}},"<>"]}} |If general solutions don't exist, the standard package |Calculus`DSolveIntegrals` can be used to find complete integrals of the |PDE. Additionally, there are a couple of packages for calculating Lie |and Lie-Backlund symmetries available from www.mathsource.com. | |There are a number of books about solving differential equations with |Mathematica, take a look at |http://store.wolfram.com/catalog/books/de.html . |Some more information is available in sections 3.5.10 and 3.9.7 of the |Mathematica Book. |Lars Hohmuth |Wolfram Research, Inc. Since the question is about PDE and you seem to respond on this kind of messages from Wolfram Res., I would like to ask a question. It is not a secret, that many CAS can handle systems of PDE. I found good links about this on It seems, that Mathematica is far behind others in this field :-(. Can you give some hints (if not secret) in which directions Mathematica may develop in near future. This may be very important for the users. And a question to others. I was not able to find package for Mathematica doing something more, then Lie-Backlund for general PDEs. Are there some other tools simillar in functionality to DSolve working on systems of PDE (may be linear or other special forms)? I am not a matematician, I am engineer. I need a tool which I would be able to use after reading may be 1 book, but 3 books in pure heavy mathematics is too much time for me. Thank you! Julian Stoev <j.h.stoev@ieee.org> - Ph. D. Student Intelligent Information Processing Lab. - Seoul National University, Korea Office: 872-7283, Home: 880-4215 - http://poboxes.com/stoev !!!!! Use REPLY-TO: or remove "SPAMREMOVER" in my address
{"url":"http://forums.wolfram.com/mathgroup/archive/1998/Feb/msg00128.html","timestamp":"2014-04-16T13:26:05Z","content_type":null,"content_length":"37699","record_id":"<urn:uuid:7cf107b0-1feb-46e9-918d-31690345bdda>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Pattern Puzzle [Archive] - KnittingHelp.com Forum

12-04-2011, 08:15 AM
Hi all,
I have a bit of an unusual request for help :)
Would any of you know of a pattern generator that will do the opposite, i.e. allow you to input a knitting pattern and then give you a picture of how it would/should look when finished?
I am trying to solve a Geocache puzzle and the co-ordinates are hidden in a knitting pattern. So once you knit the pattern, you should be able to read them.
I have not the first clue when it comes to knitting, and neither does my wife, so I was hoping that you pros might be able to help :aww:
Any suggestions would be really appreciated :)
Jamie (The Irish Knit-Wit).
{"url":"http://www.knittinghelp.com/forum/archive/index.php/index.php?t-106912.html","timestamp":"2014-04-18T17:29:54Z","content_type":null,"content_length":"12735","record_id":"<urn:uuid:4e56a14e-1568-4e26-909f-20523d0b77e9>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Meeting Details

For more information about this meeting, contact Leonid Berlyand, Mark Levi, Alexei Novikov.

Title: Equations with random coefficients: Convergence to deterministic or stochastic limits and theory of correctors
Seminar: Applied Analysis Seminar
Speaker: Guillaume Bal, Columbia University

The theory of homogenization for equations with random coefficients is now rather well-developed. What is less studied is the theory for the correctors to homogenization, which asymptotically characterize the randomness in the solution of the equation and as such are important to quantify in many areas of applied sciences. I will present some results in the theory of correctors for elliptic and parabolic problems. Homogenized (deterministic effective medium) solutions are not the only possible limits for solutions of equations with highly oscillatory random coefficients as the correlation length in the medium converges to zero. When fluctuations are sufficiently large, the limit may take the form of a stochastic equation, and stochastic partial differential equations (SPDE) are routinely used to model small-scale random forcing. In the specific setting of a parabolic equation with a large, Gaussian, random potential, I will show the following result: in low spatial dimensions, the solution to the parabolic equation indeed converges to the solution of an SPDE with multiplicative noise, which needs to be written in Stratonovich form; in high spatial dimensions, the solution to the parabolic equation converges to a homogenized (hence deterministic) equation, and randomness appears as a central limit-type corrector, the solution of an SPDE with additive noise. One of the possible corollaries of this result is that SPDE models may indeed be appropriate in low spatial dimensions but not necessarily in higher spatial dimensions.

Room Reservation Information
Room Number: MB113
Date: 04/29/2010
Time: 01:00pm - 02:00pm
{"url":"http://www.math.psu.edu/calendars/meeting.php?id=8580","timestamp":"2014-04-21T08:38:44Z","content_type":null,"content_length":"4760","record_id":"<urn:uuid:bad80662-9f42-42da-a937-070ae33a4cad>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community.

Here's the question you clicked on:
hi, what is the vertical asymptote of (2x + x^2)/(2x - x^2), and the vertical asymptote of (x^2 - 25)/(x^2 + 5x)?
• one year ago
{"url":"http://openstudy.com/updates/508f5687e4b0ad62053743fd","timestamp":"2014-04-18T19:18:04Z","content_type":null,"content_length":"34514","record_id":"<urn:uuid:09e8f763-baff-46dd-b86d-5f4fe0091139>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
Will a GTX 770 bottleneck AMD FX-6100

September 20, 2013 4:22:57 PM
Title says it all really. If so, how much? I'm upgrading to a 770 but my CPU is an FX-6100.

September 20, 2013 4:43:31 PM
TheKiwi said: Title says it all really. If so, how much? I'm upgrading to a 770 but my CPU is an FX-6100.
Hi - Yes, the FX-6100 will be a bottleneck to a high end GPU like the GTX 770. I don't know how much exactly tho. The FX-6100 is a tier 3 of AMD's gamer CPU hierarchy on Tom's HW rating. To get the most out of that GPU you should upgrade to a tier 1, such as the FX-6350, and others, see chart:

September 20, 2013 5:27:00 PM
You might not need to take it to the full 4.5GHz depending on the resolution you are running at, but I would say at the very least 4GHz. I'm assuming you haven't done this before, so I won't go too in depth, but to hit 4GHz just go into the BIOS by hitting Delete the second the computer turns on. Navigate to the CPU settings, which may be under a different name; it could be under Advanced or MIT if you have a Gigabyte board. The setting you are looking for is CPU Multiplier. You want to set it to 20 I believe to hit 4.0GHz. It should have something near it showing the clock increase as you raise the multiplier. You want it to be at 4GHz and you want to disable Turbo. This modest overclock should not require any increase in voltage, and you should not see much of an increase in temperatures either. See if you can get this to work. If you notice the computer shut off or restart randomly, go back into the BIOS and lower the multiplier down to 3.8GHz and see if that is stable. The 6100's aren't known as great overclockers.
I'm pretty sure if you had a good cooler and weren't scared of increasing the voltage you could get close to 4.2-4.4, but if you are new to overclocking that's not advisable.
If you really don't want to upgrade your CPU, you should just buy a less expensive graphics card. With that processor you wouldn't be able to tell the difference between a 200 dollar graphics card and that 770. I would just get something like this: which is a stellar deal right now, and then just pocket the extra 200 dollars, or put that towards which would help balance your build out. Either way, if you choose to overclock the risk is on you, but what I suggested is very mild and you should have no trouble.

September 20, 2013 6:22:58 PM
fonzieguy said: You might not need to take it to the full 4.5GHz depending on the resolution you are running at, but I would say at the very least 4GHz. [...] Either way, if you choose to overclock the risk is on you, but what I suggested is very mild and you should have no trouble.
If I overclock, will I not need to upgrade the CPU?

Not as badly, but it's not a very strong CPU. If I'm you right now, I'm only spending a max of 200 on a video card, period. That will let you play just about every game on high with good framerates. Save your money, eventually upgrade your CPU, then just add a second video card. That way you are getting the most value for your money every step of the way. Wasting 400 dollars on a 770 right now, even though it will only perform half as strong because it's CPU limited, is just that, a waste.

September 21, 2013 3:06:34 AM
fonzieguy said: Not as badly, but it's not a very strong CPU. [...] Wasting 400 dollars on a 770 right now, even though it will only perform half as strong because it's CPU limited, is just that, a waste.
I have overclocked to 4.2 GHz. Will the GTX 670 be bottlenecked?
{"url":"http://www.tomshardware.com/answers/id-1809335/gtx-770-bottleneck-amd-6100.html","timestamp":"2014-04-18T10:09:41Z","content_type":null,"content_length":"139932","record_id":"<urn:uuid:92126f31-fa4b-4991-9a38-38ddb004fc6c>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
SLAC Colloquium Colloquium Detail Glimpse of the Neutrino in Graphene Date: 5/18/2009 Hari Manoharan Stanford University Quantum electrodynamics and condensed-matter physics are experiencing a tantalizing convergence under the rubric of "Dirac materials"-crystals hosting charge carriers more aptly described by the Dirac equation than the Schrödinger equation. Dirac electrons comprise two-component wavefunctions and quantum symmetries intertwining spins and pseudospins with momentum; from this structure stems true electron chirality and a direct mapping to relativistic particles in a vacuum. Graphene is emerging as a prototype two-dimensional Dirac material and has been proposed as a condensed-matter nanolaboratory for relativistic particle experiments, but with exceptional accessibility since the effective speed of light is scaled down by a factor of about 300. This talk will survey our low-temperature experiments in which the unique symmetries of Dirac electrons are graphically revealed by scanning tunneling microscopy applied as a coherent nanoscale probe. By directly imaging wavefunctions, tracking quantum mechanical phase, and manipulating single atoms, we see quasiparticles surprisingly closer in phenomenology to massless relativistic neutrinos than to typical massive, non-relativistic electrons. For example, Berry's phase leads to a cancellation of backscattering and to a condensed-matter observation of the relativistic Mott scattering cross section for chiral particles. These ideas and measurements are now being extended to nanostructures and to three-dimensional topological materials.
{"url":"http://www2.slac.stanford.edu/colloquium/pasteventdetails.asp?EventID=258","timestamp":"2014-04-17T01:01:07Z","content_type":null,"content_length":"7522","record_id":"<urn:uuid:80c23175-fff0-4315-85f2-3d5180506065>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00272-ip-10-147-4-33.ec2.internal.warc.gz"}